In our visual society, we often forget the fundamental role of hearing. Unlike our eyes, our ears are always on, capturing information from all directions. Visual observation emphasizes the distance between spectator and object, while listening places us at the center. Yet we have only a nascent understanding of the sounds of everyday environments, spaces, and activities, even though auditory experience deeply affects our emotions, orientation, and sense of place. This collaborative workshop for CAMD students and faculty explores the role of sound and hearing as an information medium. We investigate opportunities for artists, designers, and scholars to take advantage of auditory phenomena as a mode of information acquisition, data exploration, and storytelling.

The many facets of auditory culture, however, have to be experienced, not just read about or discussed. Our workshop therefore offers an experiential introduction to the many possibilities of sound for film and media.

Program

Northeastern University, March 15, 2018

1. Hearing as Environmental Information

Sam Auinger and Dietmar Offenhuber, 2:00 pm – 2:40 pm
Active listening can teach us to more consciously discern and articulate what goes on around us sonically. The first part introduces practices and techniques of active listening and discusses the cultural relevance of hearing.

2. Neuroscience, psychoacoustics and sonification

Hubert Ho and Psyche Loui, 2:40 pm – 3:20 pm
The second part of the workshop focuses on how the fields of sonification and visualization can incorporate and articulate recent scholarship in neuroscience and psychoacoustics.

Coffee Break – 10 minutes

3. Narrative strategies and acoustic ecologies

Bobette Buster, 3:30 pm – 4:15 pm
The third section focuses on hearing as a source and construction of meaning — for humans but also for animals — and offers a synthesis for creative practice.

4. Music, sound, and space in film

Matthew McDonald, 4:15 pm – 5:00 pm
Music and sound are critical components of how physical space is defined in films. In this section of the workshop, we will explore how sonic elements work both with and against images to provide information about the environments in which film narratives take place and to convey symbolic meaning. Students will be introduced to concepts for discussing audiovisual relationships in film and then apply these to specific examples.

Hearing as Environmental Information

Sam Auinger and Dietmar Offenhuber

“Do me a favor and just close, for a minute, your eyes,” sound artist Sam Auinger says over Skype video from Germany. “Let me perform some types of sounds in the room and then let’s just talk about it.”

In a fourth-floor classroom at Northeastern University, Dietmar Offenhuber then drops three separate handfuls of coins onto a desk. Each time he drops a stack, the coins clink and rattle on the surface. But each handful rings at a slightly different pitch, and the students, professors and scholars who are packed into the room hold their breaths, squeeze their eyelids shut and listen for the slight variation.

“All right, so which coins do you think are quarters?” asks Offenhuber, a Northeastern professor of information design and visualization. The largest number of people vote for the third stack. Offenhuber reveals it was actually the second handful that contained quarters, but the third contained dimes, which are made of the same material. “This is just a brief demonstration that we are actually very good at distinguishing the materiality of objects on a very subtle level, even if we are not aware of it,” he says.

The duo’s coin trick opened the first presentation at the March 15 workshop, “Active Listening.” Their talk, a 45-minute lesson called “Hearing as Environmental Information,” aimed to teach attendees active listening techniques and the cultural importance of hearing.

Auinger, who studies the sociopolitical aspects of soundscapes, described the four different modalities of listening people use to interpret their sonic environments. The modalities — causal, semantic, reduced, and emotional — each have different effects on life, and especially memory. Causal listening means trying to identify the event that caused a certain sound. Semantic listening involves decoding and interpreting language, signals, or cues. Reduced listening is the focused concentration on a single sonic aspect. And emotional listening acknowledges all the emotional associations a familiar sound creates.

People engage in causal listening constantly, and Auinger says it informs and guides their visual senses. “Causal modality is modality which lets us know a dog is barking, a car is honking, my wife is shouting, there are car keys over there. This is where a sound gets named — through this modality, we identify the action around us.”

The semantic modality involves abstracting patterns, phrases and pronunciations. “I have kind of a distinct dialect for English, but still you’re able to decode my words because your brain is looking for the phrase,” Auinger noted.

The third modality is reduced listening. As an example, Auinger described tuning a guitar, during which musicians focus not on the strings themselves or the overall sound they make, but on the pitch of each string and whether it is in tune.

Commenting on the emotional modality, Auinger said it is not just the ears but the entire human body that is affected by and sensitive to sound. The emotional modality also reflects the sound quality of the space a person is in. “Imagine working in a sports bar,” he says, “and it’s really, really too loud. As a result, you become more nervous. You really feel exposed to a tense atmosphere.”

Recognizing these different modalities can be cultivated as a listening technique that makes us more aware of our surroundings. For example, by actively focusing on reduced listening, Auinger says, people can come to notice the independence between sound elements and movements. “You will also maybe get a different picture of yourself, how you actually listen, how you remember sounds … you can actually start to build up a kind of memory for sounds.”

After the listening modalities, Auinger listed four more categories that shape an individual’s experience of sound: the physics of sound, psychoacoustics, sonic culture, and personal history.

To explain the influence of physics, Offenhuber played the sound of church bells from his computer. Auinger noted the sound of a bell depends on its shape, which determines the frequency composition of the sound waves that enter the air. Furthermore, it is influenced by the speed of sound in the air, which varies with temperature, and the complex reflections of the sound wave from different surfaces. The result is a complex layering of different frequencies and overtone structures. “The thing that may be the most crucial to understand is that all the phenomena we hear more or less come down to the speed of sound and how different frequencies are reflected or absorbed,” Auinger said.
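
Neither speaker worked through the math, but the two physical ingredients Auinger mentions, a temperature-dependent speed of sound and a layering of partial frequencies, can be sketched in a few lines of Python. The partial ratios, amplitudes, and decay times below are invented for illustration and do not describe any real bell.

```python
import numpy as np

def speed_of_sound(temp_c):
    """Approximate speed of sound in dry air (m/s) at a temperature in degrees C."""
    return 331.3 * np.sqrt(1 + temp_c / 273.15)

def bell_tone(fundamental=440.0, duration=3.0, sr=44100):
    """Toy 'bell': a sum of exponentially decaying partials.

    The frequency ratios, amplitudes, and decay times are illustrative guesses;
    a real bell's partials depend on its exact shape and material.
    """
    t = np.linspace(0, duration, int(sr * duration), endpoint=False)
    partials = [(1.0, 1.0, 1.2), (2.0, 0.6, 0.9), (2.4, 0.4, 0.7), (3.0, 0.25, 0.5)]
    signal = np.zeros_like(t)
    for ratio, amp, decay in partials:
        signal += amp * np.exp(-t / decay) * np.sin(2 * np.pi * fundamental * ratio * t)
    return signal / np.max(np.abs(signal))

print(f"speed of sound at  0 C: {speed_of_sound(0):.1f} m/s")   # ~331 m/s
print(f"speed of sound at 20 C: {speed_of_sound(20):.1f} m/s")  # ~343 m/s
tone = bell_tone()  # could be written to a WAV file for listening
```

Changing the temperature input shows how the same bell rings in slightly faster or slower air; changing the list of partials changes the overtone structure Auinger refers to.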

With regard to psychoacoustics, the human ear and brain process different frequencies differently – what we hear is not a direct translation of the physical sound waves. Auinger pointed to research showing that people of different ages do not hear the same frequencies equally well. These differences in perception are often physiological, but just as often cultural. Finally, our own history of past experiences shapes how we hear.

Offenhuber noted that he and Auinger are chiefly concerned with architecture, explaining that the architectural space of the city a person lives in shapes everything they hear. “By listening to all these phenomena, we are able to learn something about the spaces,” Offenhuber said. “About not only the physical, material qualities, but also the social use and practices.”

Practicing active listening can help people understand the sounds they take in every day and the ways they interpret them.

“When you do this more often, you will at first realize how much your own listening and the way you understand sounds is really shaped by your culture, your personal history, your age and your mood,” Auinger said. “These are all situations or moments you could experience by consciously listening from time to time wherever you are and try to remember and try to understand what’s really going on.”

writeup by Paxtyn Merten

Neuroscience, psychoacoustics and sonification

Hubert Ho and Psyche Loui

Professors Hubert Ho of Northeastern University and Psyche Loui of Wesleyan University led the second part of the Active Listening workshop with a session called “Neuroscience, psychoacoustics and sonification.” Loui, a psychology professor, spoke to the small crowd over Skype from Connecticut about the workings of the brain in relation to sound.

Loui is the director of the Music, Imaging, and Neural Dynamics Lab at Wesleyan, where she researches the neuroscience of music cognition and perception, among related issues. She is working to understand how to best design sound-based visualizations, or sonifications.

“To me, trying to define good sounds or good sights fundamentally has to do with how our minds perceive these things,” she said. “We need to understand a little bit about the brain and how it works and how it enables perceptual experience.”

Loui explained that the brain perceives sound through groups of neurons whose activity forms temporal patterns — patterns that change over time — with spikes at various points. The shapes of those firing patterns are the attributes that matter most.

In a brief history of neuroscience research, Loui noted that physician Hans Berger began recording the brain’s electrical activity by placing electrodes on people’s scalps in the 1920s — a technique now called electroencephalography, or EEG. In the process, he discovered many interesting things about the brain. For one, he noticed in his recordings that when someone is relaxed and about to fall asleep, a rhythmic fluctuation appears in the EEG.

“Because that was the first rhythmic brain activity he was able to observe, he called that alpha activity,” Loui said.

Decades later, in 1965, experimental composer Alvin Lucier attached electrodes to his own head and used his amplified alpha activity to set percussion instruments vibrating — a piece he called “Music for Solo Performer.” It was the first use of EEG sonification, the transformation of brain activity into sound.

“I think what’s really interesting to think through is the idea that brain activities happen in time,” Loui said. “Sonification might be a really useful way of taking advantage of our auditory system and temporal properties, the ability to process sounds quickly, and using that natural correspondence to make better sense of data as they’re unfolding also in time.”

In a more recent sonification, Loui used EEG to record the brain activity of people with epilepsy, both at baseline and while they were having seizures, which at their root are electrical storms in the brain. Her sonification uses sound to convey the difference between the two states, particularly the wider range and faster pace of electrical activity during a seizure.

She hopes this sonification will be developed into a device to help those with epilepsy or their caretakers detect and predict seizures.
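
Loui did not share her code at the workshop, so the sketch below is only a generic illustration of the parameter-mapping idea she describes: a synthetic, EEG-like signal whose momentary amplitude drives the pitch of a tone, with a burst of faster, larger-amplitude activity standing in for a seizure. All names and signal parameters here are invented for illustration.

```python
import numpy as np
from scipy.io import wavfile

SR_AUDIO = 44100  # audio sample rate (Hz)
SR_EEG = 256      # EEG-like sample rate (Hz), an illustrative value

# Synthetic "EEG": a 10 Hz alpha-like rhythm for the first five seconds,
# then faster, larger oscillations standing in for seizure activity.
t = np.arange(0, 10, 1 / SR_EEG)
eeg = np.sin(2 * np.pi * 10 * t)
eeg[5 * SR_EEG:] = 3.0 * np.sin(2 * np.pi * 25 * t[5 * SR_EEG:])

# Parameter mapping: instantaneous EEG value -> pitch of an audio tone.
t_audio = np.arange(0, 10, 1 / SR_AUDIO)
env = np.interp(t_audio, t, eeg)                 # upsample EEG to audio rate
freq = 440 + 100 * env                           # map amplitude to frequency (Hz)
phase = 2 * np.pi * np.cumsum(freq) / SR_AUDIO   # integrate frequency into phase
audio = 0.5 * np.sin(phase)

wavfile.write("eeg_sonification.wav", SR_AUDIO, audio.astype(np.float32))
```

Listening to the result, the steady warble of the first half gives way to wider, faster pitch movement in the second — the kind of audible contrast between baseline and seizure activity that Loui describes.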

Following Loui, Ho presented a number of ways to reassess how people visualize auditory information.

“One of the things visualization of auditory information lacks is semantic or symbolic differentiation,” he said. “I can identify the fact that I hear some kind of high-frequency tone, but I can’t tell if it’s a bird or it’s a piccolo or it’s a young child screaming. It could be any number of those things.”

Ho expressed frustration that so much research into visualizing sound still relies on the sonogram. “I’m really looking for something that reflects not physical reality, but psychophysical reality,” he said. He found it in one tool, based on a script from an Australian research team. The tool aggregates psychophysical parameters to allow users to process sound and then displays a pitch-salient spectrum that is based on underlying psychoacoustic principles.
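
The article does not name the Australian script, so the snippet below is not that tool. It only illustrates the contrast Ho draws between a physical frequency axis and a perceptual one, using the common mel scale as a rough stand-in for a psychoacoustic mapping.

```python
import numpy as np

def hz_to_mel(f):
    """Standard mel-scale approximation of perceived pitch spacing."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

sr = 16000
t = np.arange(0, 1.0, 1 / sr)
# Test signal: two tones that are far apart in Hz but closer perceptually.
x = np.sin(2 * np.pi * 3000 * t) + np.sin(2 * np.pi * 6000 * t)

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / sr)

print("peak frequencies (Hz):", sorted(freqs[np.argsort(spectrum)[-2:]]))
print("separation in Hz     :", 6000 - 3000)
print("separation in mels   :", round(hz_to_mel(6000) - hz_to_mel(3000)))
```

The physical spectrum shows peaks 3,000 Hz apart, but on the mel scale the two tones sit proportionally much closer together, the kind of perceptual re-scaling Ho was looking for in a display.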

Throughout his portion of the talk, Ho also highlighted important audio vocabulary for the audience.

He first noted the difference between physical and psychophysical elements of sound. The former refers to information communicated by the vibrations of air particles, including power, intensity, speed of sound, wavelength and frequency. Meanwhile, the latter deals with the way we perceive and scale those various physical parameters and assign them psychoacoustic meaning.
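
The physical parameters Ho lists are tied together by elementary formulas; a small worked example, with arbitrary numbers, might look like this:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C
I0 = 1e-12              # standard reference intensity, W/m^2

def wavelength(frequency_hz):
    """Wavelength (m) of a sound wave in air: lambda = c / f."""
    return SPEED_OF_SOUND / frequency_hz

def intensity_level_db(intensity_w_m2):
    """Sound intensity level in dB relative to the reference intensity I0."""
    return 10.0 * math.log10(intensity_w_m2 / I0)

print(f"wavelength of a 440 Hz tone: {wavelength(440.0):.2f} m")          # about 0.78 m
print(f"level of 1e-6 W/m^2:         {intensity_level_db(1e-6):.0f} dB")  # about 60 dB
```

Nothing in these formulas says how loud or how high the sound will seem; that translation from physical measurement to perception is exactly what the psychophysical side handles.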

Ho also warned the audience not to confuse sound pressure level — or measured intensity of sound — with the actual loudness. “Loudness is your psychophysical parameter,” he said. “Sound pressure level, that’s your physical parameter.”

A given sound pressure level can produce different degrees of loudness. “When a tone sweeps up into higher frequencies, your ears become much more sensitive to the sound,” Ho said, demonstrating the concept by playing a tone at a constant sound pressure level that seemed to grow louder as the frequency rose. At its peak, audience members winced at the loudness, making motions to cover their ears.
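
Ho's sweep demonstration can be roughly mimicked in code with A-weighting, a standard if crude engineering approximation of the ear's frequency-dependent sensitivity at moderate levels. This is offered as an illustration of the principle, not as the method Ho used:

```python
import math

def a_weighting_db(f):
    """A-weighting correction (dB): an approximation of how much more or less
    sensitive the ear is at frequency f than at 1 kHz."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.0

SPL = 70.0  # a fixed physical sound pressure level, in dB
for freq in (100, 500, 1000, 3000, 8000):
    print(f"{freq:>5} Hz: roughly {SPL + a_weighting_db(freq):.1f} dB(A) perceived weight")
```

At the same 70 dB sound pressure level, the 100 Hz tone carries far less perceptual weight than the 3,000 Hz tone, which is why the sweep seemed to grow louder as it rose.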

Ho paid special attention to timbre, which he said is often misunderstood. He said any number of physical parameters can contribute to an understanding of timbre, but there should be more psychophysical experimentation into which of these are most meaningful.

In recent scholarly publications, music and psychology researchers Stephen McAdams and Meghan Goodchild define timbre as “a set of auditory attributes in addition to those of pitch, loudness, duration and spatial position that both carry musical qualities and contribute to sound source recognition and identification.” Ho noted this is a more recent attitude toward timbre, which students were previously taught was simply the characteristic sound of a particular instrument, like a flute or clarinet.

“What this more recent definition of timbre tells us is that notion is somewhat erroneous because the timbre of one instrument can change depending on the frequency, depending on the loudness, depending even on the fingerings used,” Ho said.
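
One concrete example of such a measurable timbre attribute, though not one the article names, is the spectral centroid: the amplitude-weighted mean frequency of a sound, which is widely used as a correlate of perceived brightness. The two synthetic tones below share a pitch but differ in overtone content, and the centroid separates them.

```python
import numpy as np

def spectral_centroid(signal, sr):
    """Amplitude-weighted mean frequency of the magnitude spectrum,
    a common correlate of perceived 'brightness' in timbre research."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / sr)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

sr = 44100
t = np.arange(0, 1.0, 1 / sr)
fundamental = 220.0

# Same pitch, different overtone balance: few harmonics vs. many harmonics.
dull = sum((1 / n) * np.sin(2 * np.pi * n * fundamental * t) for n in range(1, 4))
bright = sum((1 / n) * np.sin(2 * np.pi * n * fundamental * t) for n in range(1, 12))

print(f"dull tone centroid:   {spectral_centroid(dull, sr):.0f} Hz")
print(f"bright tone centroid: {spectral_centroid(bright, sr):.0f} Hz")
```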

A great deal of research is going into auditory display, especially related to timbre. Much of that work includes metaphorical representation — not just traditional physical mapping — and large amounts of qualitative information.

To conclude the session, Loui noted that good data visualization and sonification need to cater to our cognitive systems. “You need to know what you’re trying to show and then you need to know how to get the cognitive system to appreciate that difference,” she said.

writeup by Paxtyn Merten

Music, sound, and space in film

Matthew McDonald

The final segment of the Active Listening workshop, “Music, sound, and space in film,” featured a discussion led by Northeastern music professor Matthew McDonald about how music and sound define physical space in films.

McDonald’s specialty is music in film. “Film music is in some ways a really hard thing to study because it thwarts active listening,” he said. “More often than not, music is meant to work on us subliminally.” To illustrate how films use sound to define setting, he had audience members listen closely, twice and without the visuals, to a segment of the 2007 western thriller “No Country for Old Men,” and then asked them to respond to what they heard.

After the second listening, attendee Carolina Herrera, a Northeastern undergraduate studying music composition, pointed out the soft, understated wind blowing in the background, which intensifies as the narrator pivots into a story about a local boy killing a girl. McDonald said this is a common device in the Coen brothers’ soundtracks: the sound of wind at the beginning of a film, often paired with a voiceover.

Event attendee Minko Dimov, the founder and creative director of impromptustudio, said the thunder in the beginning made it seem like there was a storm approaching, but the wind takes command as the narrator talks about the past, creating a daydream-like atmosphere.

But then, Dimov pointed out, there are abrupt mechanical noises — metal doors opening, the clinking of metal, tires spinning on a road. These noises mark a return to reality from the dream world.

As they tried to describe the change in the narrator’s voice through the scene, attendees found they lacked the vocabulary to capture it. McDonald said this is common in the soundscape field — we often don’t have adequate language to describe the sounds and impressions we hear.

McDonald also pointed out certain “sync points” that contribute to a kind of “phrasing” of the “audiovisual canvas,” allowing sounds and images to be compared and intertwined.

Afterward, McDonald played the clip again, this time allowing the audience to watch the visuals that accompany the sound.

Dietmar Offenhuber, a Northeastern professor of information design and visualization, said he liked to compare the sense of distance created by the sound and reinforced by the imagery. “The auditory space also gives you a circumstance of distance,” he said. “Things are far away, but you still notice them.”

Herrera said there was a cyclic element to the scene. The sounds and images convey a storm coming. Then there is calm. And after the storm comes, there will be calm again. “It’s setting up the trajectory of the film,” McDonald replied, “which is often something that a title sequence will do — and certainly music can be a big part of that: giving you a sense of the larger trajectory.”

writeup by Paxtyn Merten