Sam Auinger and Dietmar Offenhuber
“Do me a favor and just close, for a minute, your eyes,” sound artist Sam Auinger says over Skype video from Germany. “Let me perform some types of sounds in the room and then let’s just talk about it.”
In a fourth-floor classroom at Northeastern University, Dietmar Offenhuber then drops three separate handfuls of coins onto a desk. Each time he drops a stack, the coins clink and rattle on the surface. But each handful rings at a slightly different pitch, and the students, professors and scholars packed into the room hold their breath, squeeze their eyelids shut and listen for the slight variation.
“All right, so which coins do you think are quarters?” asks Offenhuber, a Northeastern professor of information design and visualization. Most people vote for the third stack. Offenhuber reveals that it was actually the second handful that contained quarters; the third contained dimes, which are made of the same material. “This is just a brief demonstration that we are actually very good at distinguishing the materiality of objects on a very subtle level, even if we are not aware of it,” he says.
The duo’s coin trick opened the first presentation at the March 15 workshop, “Active Listening.” Their talk, a 45-minute lesson called “Hearing as Environmental Information,” aimed to teach attendees active listening techniques and the importance of hearing in culture.
Auinger, who studies the sociopolitical aspects of soundscapes, described the four modalities of listening people use to interpret their sonic environments. The modalities — causal, semantic, reduced, and emotional — each shape experience, and especially memory, in different ways. Causal listening means trying to identify the event that caused a certain sound. Semantic listening involves decoding and interpreting language, signals, or cues. Reduced listening is focused concentration on a single sonic aspect. And emotional listening acknowledges all the emotional associations a familiar sound creates.
People constantly engage in causal listening, and Auinger says it informs and guides their visual senses. “Causal modality is the modality which lets us know a dog is barking, a car is honking, my wife is shouting, there are car keys over there. This is where a sound gets named — through this modality, we identify the action around us.”
The semantic modality involves abstracting patterns, phrases and pronunciations. “I have kind of a distinct dialect for English, but still you’re able to decode my words because your brain is looking for the phrase,” Auinger noted.
The third modality is reduced listening. As an example, Auinger described tuning a guitar, during which musicians focus not on the strings themselves or the overall sound they make, but on the pitch of each string and whether it is in tune.
Commenting on the emotional modality, Auinger said it is not just the ears but the entire human body that is affected by and sensitive to sound. The emotional modality also reflects the sound quality of the space a listener is in. “Imagine working in a sports bar,” he says, “and it’s really, really too loud. As a result, you become more nervous. You really feel exposed to a tense atmosphere.”
Recognizing the different modalities of hearing can be cultivated as a listening technique that allows us to become more aware of our surroundings. For example, by actively focusing on the reduced modality, Auinger says people can learn to hear sound elements independently of the movements that produce them. “You will also maybe get a different picture of yourself, how you actually listen, how you remember sounds … you can actually start to build up a kind of memory for sounds.”
After the listening modalities, Auinger listed four more categories that influence an individual’s experience of sound: the physics of sound, psychoacoustics, the sonic culture and the personal history.
To explain the influence of physics, Offenhuber played the sound of church bells from his computer. Auinger noted the sound of a bell depends on its shape, which determines the frequency composition of the sound waves that enter the air. Furthermore, it is influenced by the speed of sound in the air, which varies with temperature, and the complex reflections of the sound wave from different surfaces. The result is a complex layering of different frequencies and overtone structures. “The thing that may be the most crucial to understand is that all the phenomena we hear more or less come down to the speed of sound and how different frequencies are reflected or absorbed,” Auinger said.
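That temperature dependence can be made concrete with a standard engineering approximation for dry air — a sketch for illustration, not something presented at the workshop:

```python
# Approximate speed of sound in dry air using the standard linear
# approximation: v ≈ 331.3 + 0.606 * T, with T in °C and v in m/s.

def speed_of_sound(temp_celsius: float) -> float:
    """Return the approximate speed of sound in dry air, in m/s."""
    return 331.3 + 0.606 * temp_celsius

# The same bell sounds subtly different on a cold day and a warm one,
# in part because its sound travels at different speeds:
for t in (-10, 0, 20, 35):
    print(f"{t:>4} °C -> {speed_of_sound(t):6.1f} m/s")
```

At 20 °C the formula gives roughly 343 m/s, the commonly cited room-temperature value.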
With regard to psychoacoustics, the human ear and brain process different frequencies differently — what we hear is not a direct translation of the physical sound waves. Auinger pointed to research showing that people of different ages don’t hear the same frequencies equally. These differences in perception are often physiological, but they are also, to a similar extent, cultural. Finally, our own history of past experiences shapes how we hear.
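The ear’s uneven sensitivity across frequencies is commonly modeled by the A-weighting curve standardized in IEC 61672; the sketch below, which illustrates the general psychoacoustic principle rather than anything specific from the talk, computes that curve:

```python
import math

def a_weighting_db(freq_hz: float) -> float:
    """Approximate A-weighting in dB, per the IEC 61672 analog formula.

    0 dB at 1 kHz; strongly negative at low frequencies, reflecting the
    ear's reduced sensitivity to low-pitched sounds at moderate levels.
    """
    f2 = freq_hz ** 2
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20.0 * math.log10(ra) + 2.00

# A 50 Hz hum must be physically much louder than a 1 kHz tone
# to be perceived as equally loud:
for f in (50, 100, 1000, 4000):
    print(f"{f:>5} Hz -> {a_weighting_db(f):+6.1f} dB")
```

The curve is only a population average; as the article notes, individual hearing varies with age, physiology and culture.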
Offenhuber noted that he and Auinger are deeply concerned with architecture, explaining that the architectural space of the city a person lives in shapes everything they hear. “By listening to all these phenomena, we are able to learn something about the spaces,” Offenhuber said. “About not only the physical, material qualities, but also the social use and practices.”
Practicing active listening habits can help people understand the sounds they take in on a daily basis and how they interpret them.
“When you do this more often, you will at first realize how much your own listening and the way you understand sounds is really shaped by your culture, your personal history, your age and your mood,” Auinger said. “These are all situations or moments you could experience by consciously listening from time to time wherever you are and try to remember and try to understand what’s really going on.”
writeup by Paxtyn Merten