Myth #8
Listening requires us to hear, meaning deaf people (and those with hearing loss) can’t listen
“Normal hearing” is typically defined as a hearing threshold of 20 dB or better in both ears. According to the World Health Organization, hearing loss (HL) is organized into categories with values representing the softest sounds a person can detect:
slight HL, 21–40 dB;
moderate HL, 41–60 dB;
severe HL, 61–80 dB; and
profound (including deafness), above 80 dB.
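For readers who like to see the categories laid out operationally, they amount to a simple threshold lookup. Here is a minimal sketch in Python (the function name and labels are illustrative, not from any standard library; boundaries follow the list above):

```python
def classify_hearing_level(threshold_db: float) -> str:
    """Map a hearing threshold (the softest sound a person can detect,
    in dB) to the WHO-style categories listed above."""
    if threshold_db <= 20:
        return "normal hearing"
    elif threshold_db <= 40:
        return "slight hearing loss"
    elif threshold_db <= 60:
        return "moderate hearing loss"
    elif threshold_db <= 80:
        return "severe hearing loss"
    else:
        return "profound hearing loss (including deafness)"
```

For example, a person whose softest detectable sound is 50 dB would fall into the moderate category.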
By this definition, and by the way most people use the term in everyday speech, “deaf” is the inability to hear most sounds.
Although deafness may mean an inability to process most sounds through the ear (and associated cognitive architecture), deaf people still have the ability to listen.
Even if we just define listening as understanding aural information, those with profound hearing loss can still engage in that activity by reading lips. Think about the last time you tried to listen to a friend in a noisy room. Chances are you did a bit of lip reading to understand them. Even for those of us who can “hear,” our listening is more than meets the ears. What we “hear” is partially a function of the sounds we are processing, but it is also influenced by what we see.
Perhaps the most powerful evidence for the convergence of auditory and visual information is the McGurk effect, in which what we see can override what we hear in speech perception. What we perceive as sound is based on how our brains integrate auditory, visual, and other sensory information. Multiple neurons and cerebral pathways stand between the sound that reaches our ears and what we perceive, meaning that what we “hear” may be more or less in line with what was actually said. This falls in line with other ways of thinking about human communication (e.g., “what you say is not as important as how you say it”; see Myth 2).
The McGurk effect is found when people are exposed to the auditory component of one sound and the visual component of a second. In these cases, people often report “hearing” a third, different sound. In the original study, McGurk and MacDonald (1976) showed participants a film of a young woman repeating the syllable /ba/. When the audio was dubbed over lip movements for /ga/, participants reported hearing /da/. These effects disappeared when participants either listened to the sound alone or watched the undubbed version of the video.
Similarly, Jones and Callan (2003) extended this work, using fMRI to relate brain activation to the degree of audiovisual integration of speech information. Participants reported higher levels of visual influence on speech perception when the audio and visual stimuli were mismatched. Interestingly, activation in the left occipito-temporal junction, an area that processes visual motion, was also higher under these mismatched conditions. This finding suggests that auditory information moderates visual processing, which in turn shapes perception.
In addition to lip reading, deaf people also feel sounds, although it’s more accurate to say that we all feel sound; it’s just that deaf people have a heightened ability to do so. In an amazing TED Talk, Evelyn Glennie tells the story of how her mentor and teacher Ron Forbes taught her to feel music, enabling her to go on to become one of the most decorated percussionists of the last century (as one example, she was awarded the “Nobel Prize of Music” in 2015). As she explained in the talk, “We can all physically feel sound. We just have to pay attention.”