Neuroscience, psychoacoustics and sonification

Hubert Ho and Psyche Loui

Professors Hubert Ho of Northeastern University and Psyche Loui of Wesleyan University led the second part of the Active Listening workshop with a session called “Neuroscience, psychoacoustics and sonification.” Loui, a psychology professor, spoke to the small crowd over Skype from Connecticut, discussing the workings of the brain in relation to sound.

Loui is the director of the Music, Imaging, and Neural Dynamics Lab at Wesleyan, where she researches the neuroscience of music cognition and perception, among related topics. She is working to understand how best to design sound-based representations of data, or sonifications.

“To me, trying to define good sounds or good sights fundamentally has to do with how our minds perceive these things,” she said. “We need to understand a little bit about the brain and how it works and how it enables perceptual experience.”

Loui explained that the brain perceives sound through groups of neurons whose activity forms temporal patterns — patterns that change over time — with spikes at various points. The shape of those activity patterns, more than any single neuron, is what matters most.

In a brief history of neuroscience research, Loui noted that physician Hans Berger began recording the brain’s electrical activity in the 1920s by placing electrodes on people’s scalps — a technique now called electroencephalography, or EEG. In the process, he discovered many interesting things about the brain. For one, he noticed in his recordings that when someone is relaxed and about to fall asleep, a rapid, rhythmic fluctuation appears in the EEG.

“Because that was the first rhythmic brain activity he was able to observe, he called that alpha activity,” Loui said.

Decades later, in 1965, experimental composer Alvin Lucier stuck electrodes on his own head and used his amplified alpha activity to set percussion instruments vibrating, converting brain waves into sound — a piece he called “Music for Solo Performer.” It was the first example of EEG sonification, a term for transforming brain activity into sound.

“I think what’s really interesting to think through is the idea that brain activities happen in time,” Loui said. “Sonification might be a really useful way of taking advantage of our auditory system and temporal properties, the ability to process sounds quickly, and using that natural correspondence to make better sense of data as they’re unfolding also in time.”
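One simple way to exploit that natural correspondence is audification, in which a slowly sampled signal is played back at audio rate so its rhythms shift into the audible range. The Python sketch below is only an illustration of the idea, not code from the session; the 256 Hz sampling rate and the synthetic 10 Hz “alpha-like” signal are assumptions chosen for the example.

```python
import numpy as np
from scipy.io import wavfile

AUDIO_RATE = 44100  # playback rate for the output WAV file

def audify(series):
    """Audify a time series: normalize it to [-1, 1] and treat its
    raw samples as audio, so slow rhythms speed up into audibility."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    x = x / (np.abs(x).max() + 1e-12)
    return (x * 32767).astype(np.int16)

# Ten minutes of a synthetic 10 Hz "alpha-like" oscillation,
# sampled at 256 Hz (a common EEG sampling rate), plus noise.
eeg_rate = 256
t = np.arange(0, 600, 1 / eeg_rate)
alpha_like = np.sin(2 * np.pi * 10 * t) + 0.2 * np.random.randn(len(t))

# Played back at 44.1 kHz, the data run ~172x faster than real time:
# the 10 Hz rhythm becomes a ~1.7 kHz tone lasting about 3.5 seconds.
wavfile.write("audified.wav", AUDIO_RATE, audify(alpha_like))
```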

In a more recent sonification, Loui used EEG to record the brain activity of people with epilepsy, both at baseline and during seizures, which at their root are electrical storms in the brain. Her sonification uses sound to convey the difference between the two states, particularly the wider range and faster pace of electrical activity in the brain during a seizure.
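Loui did not detail her pipeline, but a common alternative to audification is parameter mapping, in which a feature of the data drives a sound parameter such as pitch. The hypothetical sketch below maps short-window amplitude to the frequency of a tone, so a small, slow “baseline” signal hums low while a larger, faster “seizure-like” burst sweeps higher; every signal and constant here is invented for illustration.

```python
import numpy as np
from scipy.io import wavfile

def sonify_by_pitch(signal, sig_rate, out_rate=44100,
                    f_lo=220.0, f_hi=880.0):
    """Map each 250 ms window's RMS amplitude to the pitch of a sine
    tone: quiet stretches stay near f_lo, strong bursts rise to f_hi."""
    win = sig_rate // 4
    rms = np.array([np.sqrt(np.mean(signal[i:i + win] ** 2))
                    for i in range(0, len(signal) - win + 1, win)])
    rms = (rms - rms.min()) / (rms.max() - rms.min() + 1e-12)

    freqs = f_lo + rms * (f_hi - f_lo)         # one pitch per window
    n = out_rate // 4                          # 250 ms of audio each
    phase, chunks = 0.0, []
    for f in freqs:
        t = np.arange(n) / out_rate
        chunks.append(np.sin(phase + 2 * np.pi * f * t))
        phase += 2 * np.pi * f * n / out_rate  # keep phase continuous
    audio = 0.8 * np.concatenate(chunks)
    return (audio * 32767).astype(np.int16)

# Purely synthetic toy data: 10 s of small, slow "baseline" activity
# followed by 10 s of a larger, faster "seizure-like" burst.
rate = 256
t = np.arange(0, 10, 1 / rate)
baseline = 0.3 * np.sin(2 * np.pi * 10 * t)
storm = 2.0 * np.sin(2 * np.pi * 25 * t)
wavfile.write("mapped.wav", 44100,
              sonify_by_pitch(np.concatenate([baseline, storm]), rate))
```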

She hopes this sonification will be developed into a device to help those with epilepsy or their caretakers detect and predict seizures.

Following Loui, Ho presented a number of ways to reassess how people visualize sonic information.

“One of the things visualization of auditory information lacks is semantic or symbolic differentiation,” he said. “I can identify the fact that I hear some kind of high-frequency tone, but I can’t tell if it’s a bird or it’s a piccolo or it’s a young child screaming. It could be any number of those things.”

Ho expressed frustration that so much research into visualizing sound still relies on the sonogram. “I’m really looking for something that reflects not physical reality, but psychophysical reality,” he said. He found it in a tool based on a script from an Australian research team, which lets users process sound through a set of aggregated psychophysical parameters and then displays a pitch-salience spectrum grounded in psychoacoustic principles.
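Ho did not walk through the tool’s algorithm, but the flavor of a pitch-salience analysis can be sketched with a classic psychoacoustic idea: summing spectral energy across the harmonics of each candidate fundamental, in the spirit of Terhardt-style virtual-pitch models. The candidate range, harmonic count and weights below are illustrative assumptions, not the tool’s actual parameters.

```python
import numpy as np

def pitch_salience(signal, rate, f_min=55.0, f_max=1760.0, n_harm=8):
    """Toy pitch-salience spectrum by harmonic summation: each
    candidate fundamental is scored by the spectral magnitude at its
    first n_harm harmonics, with lower harmonics weighted more."""
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1 / rate)
    candidates = np.geomspace(f_min, f_max, 200)
    salience = np.zeros_like(candidates)
    for i, f0 in enumerate(candidates):
        for h in range(1, n_harm + 1):
            salience[i] += spec[np.argmin(np.abs(freqs - h * f0))] / h
    return candidates, salience

# A 220 Hz complex tone: its energy is spread across five harmonics,
# but the salience spectrum peaks near the 220 Hz fundamental.
rate = 44100
t = np.arange(0, 0.5, 1 / rate)
tone = sum(np.sin(2 * np.pi * 220 * h * t) / h for h in range(1, 6))
f0s, sal = pitch_salience(tone, rate)
print(round(f0s[np.argmax(sal)]))  # ~220
```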

Ho also highlighted important audio vocabulary for the audience throughout his portion of the talk.

He first noted the difference between physical and psychophysical elements of sound. The former refers to information communicated by the vibrations of air particles, including power, intensity, speed of sound, wavelength and frequency. Meanwhile, the latter deals with the way we perceive and scale those various physical parameters and assign them psychoacoustic meaning.

Ho also warned the audience not to confuse sound pressure level — or measured intensity of sound — with the actual loudness. “Loudness is your psychophysical parameter,” he said. “Sound pressure level, that’s your physical parameter.”
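The two scales can be put side by side with a pair of textbook conversions: sound pressure level is a logarithm of measured pressure, while one common loudness scale, the sone, follows Stevens’ rule of thumb that loudness doubles for every 10-phon increase above 40 phons. The sketch below is our illustration, not an example Ho showed.

```python
import math

P_REF = 20e-6  # reference pressure: 20 micropascals, near the
               # threshold of hearing at 1 kHz

def spl_db(pressure_pa):
    """Physical parameter: sound pressure level in dB re 20 uPa."""
    return 20 * math.log10(pressure_pa / P_REF)

def sones(phons):
    """Psychophysical parameter: loudness in sones, using Stevens'
    rule that loudness doubles per 10-phon step above 40 phons."""
    return 2 ** ((phons - 40) / 10)

# A 1 kHz tone at 0.02 Pa measures 60 dB SPL, which at 1 kHz is
# 60 phons by definition, i.e. about 4 sones: four times as loud
# as a 40-phon tone, even though the pressure ratio is tenfold.
print(spl_db(0.02))  # 60.0
print(sones(60))     # 4.0
```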

A single sound pressure level can produce very different loudness. “When a tone sweeps up into higher frequencies, your ears become much more sensitive to the sound,” Ho said, demonstrating the concept by playing a tone at a constant sound pressure level that seemed to grow louder as its frequency rose. At its peak, audience members winced at the loudness, moving to cover their ears.
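The frequency dependence Ho demonstrated is what standard weighting curves approximate. The sketch below uses the A-weighting curve from IEC 61672 to show how tones at an identical sound pressure level land at very different weighted levels, and hence loudness; using A-weighting here is our choice of illustration, not a curve discussed in the talk.

```python
import numpy as np

def a_weight_db(f):
    """A-weighting gain in dB (IEC 61672), a rough model of the ear's
    frequency-dependent sensitivity at moderate listening levels."""
    f2 = np.asarray(f, dtype=float) ** 2
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * np.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2))
    return 20 * np.log10(ra) + 2.0

# Four tones, all at 60 dB SPL, heard at very different levels:
# sensitivity peaks in the few-kHz region Ho's sweep passed through.
for f in (100, 500, 1000, 3000):
    lvl = 60 + float(a_weight_db(f))
    print(f"{f:>4} Hz tone at 60 dB SPL is about {lvl:.1f} dB(A)")
```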

Ho paid special attention to timbre, which he said is often misunderstood. He said any number of physical parameters can contribute to an understanding of timbre, but there should be more psychophysical experimentation into which of these are most meaningful.

In recent scholarly publications, music and psychology researchers Stephen McAdams and Meghan Goodchild define timbre as “a set of auditory attributes in addition to those of pitch, loudness, duration and spatial position that both carry musical qualities and contribute to sound source recognition and identification.” Ho noted this is a more recent attitude toward timbre, which children were previously taught belongs to a particular instrument, like a flute or clarinet.

“What this more recent definition of timbre tells us is that notion is somewhat erroneous because the timbre of one instrument can change depending on the frequency, depending on the loudness, depending even on the fingerings used,” Ho said.
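One way researchers make such a multidimensional quality tractable is to compute candidate physical descriptors and test how well each tracks perception. A frequently studied descriptor is the spectral centroid, often linked to perceived brightness. The sketch below, our illustration rather than an example from the talk, shows two tones with the same fundamental but different centroids.

```python
import numpy as np

def spectral_centroid(signal, rate):
    """Spectral centroid: the magnitude-weighted mean frequency of a
    signal's spectrum, a classic physical correlate of brightness."""
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1 / rate)
    return float(np.sum(freqs * spec) / (np.sum(spec) + 1e-12))

# Two 220 Hz tones with the same pitch but different harmonic
# weightings, hence different timbres and different centroids.
rate = 44100
t = np.arange(0, 0.5, 1 / rate)
dull = sum(np.sin(2 * np.pi * 220 * h * t) / h ** 2 for h in range(1, 9))
bright = sum(np.sin(2 * np.pi * 220 * h * t) / h ** 0.5 for h in range(1, 9))
print(spectral_centroid(dull, rate))    # roughly 400 Hz
print(spectral_centroid(bright, rate))  # roughly 800 Hz
```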

There’s a lot of research going into auditory display, especially related to timbre. Much of that visualization includes metaphorical representation — not just traditional physical mapping — and large amounts of qualitative information.

Loui concluded the session by noting that good data visualization and sonification need to cater to our cognitive systems. “You need to know what you’re trying to show and then you need to know how to get the cognitive system to appreciate that difference,” she said.

writeup by Paxtyn Merten