
Slide 1

Hearing in Time
Slides used for talk to accompany Roger go to yellow three
Sarah Hawkins* and Antje Heinrich**

*Centre for Music and Science, University of Cambridge
**Medical Research Council Institute of Hearing Research, Nottingham

The material in this document is copyright and is to be used only for educational purposes.

Slide 2

People often understand remarkably little of sung text, and many go to listen to the music without caring about the words.

But there are times when you want to hear the words, and people whose hearing is impaired, or who are not native speakers of the language, may feel especially disadvantaged. What have performers learned so far?

Slide 3

To understand a single voice, listeners must correctly group together the sounds that come from a single source (singer).

To understand polyphonic texts, listeners must distinguish each of the individual streams that comprise the set of competing voices.

Rhythm and relative pitch are important in this.

Slide 4

Auditory streaming

Adapted from Bob Carlyon's website (http://www.mrc-cbu.cam.ac.uk/research/speech-language/hearing/#_streaming)

Auditory streaming: one sound source or two? This is a demonstration of one factor influencing whether the brain processes different pitches as coming from one place/source or two. Click on the upper loudspeaker icon: most people hear the two pitches as coming from a single sound source. Click on the lower loudspeaker icon: by the time the clip finishes, most people hear the two pitches as coming from two different sound sources.

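The demonstration audio itself is not reproduced here, but a sequence of this kind is easy to synthesize. The sketch below (Python with NumPy/SciPy) generates two alternating-tone sequences; all frequencies, durations, and file names are illustrative assumptions, not the parameters of Carlyon's demo.

```python
# A minimal sketch (not Carlyon's actual demo audio) of an alternating
# two-tone sequence, the standard auditory-streaming stimulus. All
# frequencies, durations and file names are illustrative assumptions.
import numpy as np
from scipy.io import wavfile

FS = 44100  # sample rate, Hz

def tone(freq_hz, dur_s, amp=0.3):
    """Pure tone with 5 ms raised-cosine onset/offset ramps (avoids clicks)."""
    t = np.arange(int(FS * dur_s)) / FS
    x = amp * np.sin(2 * np.pi * freq_hz * t)
    ramp = int(0.005 * FS)
    env = np.ones_like(x)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return x * env

def alternating_sequence(f_a, f_b, tone_dur=0.1, n_pairs=40):
    """A-B-A-B... sequence; about 8 seconds at the default settings."""
    pair = np.concatenate([tone(f_a, tone_dur), tone(f_b, tone_dur)])
    return np.tile(pair, n_pairs)

# Small separation (~2 semitones): usually heard as ONE stream.
one_stream = alternating_sequence(500.0, 561.0)
# Large separation (~10 semitones): tends to split into TWO streams,
# often only after a few seconds of listening (the "build-up" effect).
two_streams = alternating_sequence(500.0, 891.0)

wavfile.write("one_stream.wav", FS, (one_stream * 32767).astype(np.int16))
wavfile.write("two_streams.wav", FS, (two_streams * 32767).astype(np.int16))
```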

Slide 5

What have auditory scientists learned so far? The power of Unison.

[Graph: % correct (0-100) against the number of distracter voices (0-3), for 1, 2 and 3 target voices. Annotation: unison increases intelligibility even in the absence of distracter voices.]

When there are no distracters, intelligibility is good (90% or better). Increasing the number of distracter voices decreases intelligibility.

Slide 6

What have auditory scientists learned so far? The power of Unison.

[Graph: as on the previous slide, % correct against the number of distracter voices (0-3) for 1, 2 and 3 target voices.]

To ensure or maintain intelligibility, have AT LEAST as many target voices singing in unison as there are distracters.

Slide 7

What have auditory scientists learned so far? The power of Unison.

[Two graphs: % correct against the number of distracter voices (0-3) for 1, 2 and 3 target voices; one panel for native English speakers, one for non-native English speakers.]

The same patterns are seen for native and non-native speakers, but non-native speakers tend to be more disadvantaged in harder conditions.

Slide 8

What have auditory scientists learned so far? Language proficiency.

Compared with native speakers, non-native speakers have disproportionate trouble when there are more distracter voices than target voices, and when there is only one target and one distracter. The biggest difference between language groups occurs with ONE target voice.

[Graphs: % correct for native and non-native listeners across conditions, where T = number of target voices and D = number of distracters: 1T/2D, 2T/3D, 2T/2D, 3T/3D, 1T/0D, 1T/1D.]

Slide 9

Still to be explored:
- Is the extra intelligibility of unison simply the result of the particular skill of experienced singers?
- Is unison more intelligible because it is louder? Or because the sound comes from more spatial locations?
- Intelligibility of combinations of male vs female voices (vs individuals)
- Physical location of singers relative to one another in a room
- Particular types of words and music (lab experiments)

Slide 10

http://en.wikipedia.org/wiki/File:Human_head_and_brain_diagram.svg

How does the brain process auditory streams and speech in noisy environments?

Slide 11

http://www.cognitiveneurosciencearena.com/brain-scans/brunswick/brunswick04.php

How does the brain process auditory streams and speech in noisy environments? Brain schematic with the 4 cortical lobes; as there are no imaging studies on the intelligibility of sung speech, we will use results from studies using spoken speech.

Slide 12

http://www.cognitiveneurosciencearena.com/brain-scans/brunswick/brunswick04.php

[Labels on the brain image: Superior Temporal Gyrus, Middle Temporal Gyrus, Inferior Temporal Gyrus]

How does the brain process auditory streams and speech in noisy environments? This slide shows a flipped version of the same brain. The arrows point to the two lobes that are most intimately involved in sound processing: the temporal lobe and the frontal lobe. The area marked in red is the primary auditory cortex; this is the first (earliest) area that processes sound after it reaches the cortex. It responds to ALL sounds and reacts to any acoustic differences between them.

Slide 13

How does the brain process auditory streams and speech in noisy environments?

Peelle JE, Johnsrude IS & Davis MH (2010). Hierarchical processing for speech in human auditory cortex and beyond. Frontiers in Human Neuroscience, 4.

[Labels on the diagram: Word comprehension; Sentence comprehension; Semantic representations?; Action/production?; Reconstruct speech]

If the task is speech comprehension, then other areas in addition to primary auditory cortex are involved; these areas provide a consistent neural response to speech (words, phonemes) regardless of its acoustic variability (who says it and what the acoustic environment is). These include large portions of the superior temporal gyrus, both anterior and posterior. Both neuroimaging and lesion data suggest that single-word comprehension activates posterior areas in both hemispheres, whereas connected speech and sentence comprehension activates anterior areas, mainly in the left hemisphere. Left inferior frontal areas are densely connected to temporal auditory areas, and it is suggested that they recover meaning when the speech is difficult to hear.

Slide 14

How does the brain process auditory streams?

Bee MA & Micheyl C (2008). The cocktail party problem: What is it? How can it be solved? And why should animal behaviourists study it? Journal of Comparative Psychology, 122, 235-251.

Bigger frequency separation between A and B => more likely that two streams are heard => greater activation in primary auditory cortex.

Wilson EC, Melcher JR, Micheyl C, Gutschalk A & Oxenham AJ (2007). Cortical fMRI activation to sequences of tones alternating in frequency: Relationship to perceived rate and streaming. Journal of Neurophysiology, 97, 2230-2238.

Stream segregation is typically studied with tones: tones are either similar in frequency, and are thus heard as a single stream, or different in frequency, and are heard as two different streams. The brain activation related to whether one stream or two streams is heard lies in primary auditory cortex (red blob); the reason that the activation for stream segregation is seen mostly in primary auditory cortex might have to do with the use of tones as stimuli.
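A minimal sketch of this kind of tone paradigm, assuming the classic ABA- triplet arrangement (the studies cited here used related alternating-tone sequences; all parameter values below are illustrative assumptions, not their actual stimuli):

```python
# Sketch of an ABA- triplet sequence (van Noorden-style). Small frequency
# separations are usually heard as one "galloping" stream; large
# separations split into two isochronous streams (A-A-A... and B---B---...).
import numpy as np
from scipy.io import wavfile

FS = 44100  # sample rate, Hz

def tone(freq_hz, dur_s, amp=0.3):
    """Pure tone with short linear onset/offset ramps."""
    t = np.arange(int(FS * dur_s)) / FS
    x = amp * np.sin(2 * np.pi * freq_hz * t)
    ramp = int(0.005 * FS)
    env = np.ones_like(x)
    env[:ramp] = np.linspace(0.0, 1.0, ramp)
    env[-ramp:] = np.linspace(1.0, 0.0, ramp)
    return x * env

def aba_sequence(f_a=500.0, df_semitones=4, tone_dur=0.08, n_triplets=30):
    """ABA- triplets: tones A, B, A followed by a silent gap, repeated."""
    f_b = f_a * 2 ** (df_semitones / 12)  # B is df semitones above A
    gap = np.zeros(int(FS * tone_dur))
    triplet = np.concatenate([tone(f_a, tone_dur), tone(f_b, tone_dur),
                              tone(f_a, tone_dur), gap])
    return np.tile(triplet, n_triplets)

# One file per frequency separation, from "one stream" to "two streams".
for df in (1, 4, 10):
    seq = aba_sequence(df_semitones=df)
    wavfile.write(f"aba_df{df}.wav", FS, (seq * 32767).astype(np.int16))
```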

Slide 15

Deike S, Gaschler-Markefski B, Brechmann A & Scheich H (2004). Auditory stream segregation relying on timbre involves left auditory cortex. Neuroreport, 15(9), 1511-1514.

In one condition, listeners heard a stream of interleaved tones from TWO instruments, trumpet and organ. In the other, they heard a stream of tones from ONE instrument, either trumpet or organ.

How does the brain process auditory streams and speech in noisy environments? There was more activation when two streams had to be grouped and segregated. To do the task (perceive small changes in one instrument's melody), listeners had to group the melodies of each instrument; this led to increased activity in lateral primary auditory cortex.
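Deike et al. used recorded trumpet and organ tones; as a rough stand-in, the sketch below interleaves two melodies played by two synthetic "instruments" whose timbres differ only in their harmonic weightings. Everything here (melodies, harmonic profiles, durations) is invented for illustration, not the original stimulus set.

```python
# Rough stand-in for the two-timbre paradigm: two synthetic "instruments"
# distinguished only by their harmonic weightings, playing interleaved
# melodies. To follow either melody, a listener must group the notes by
# timbre into two separate streams.
import numpy as np
from scipy.io import wavfile

FS = 44100  # sample rate, Hz

def complex_tone(f0, dur_s, harmonic_amps, amp=0.2):
    """Harmonic complex; the amplitude profile determines the timbre."""
    t = np.arange(int(FS * dur_s)) / FS
    x = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
            for k, a in enumerate(harmonic_amps))
    ramp = int(0.005 * FS)
    env = np.ones_like(t)
    env[:ramp] = np.linspace(0.0, 1.0, ramp)
    env[-ramp:] = np.linspace(1.0, 0.0, ramp)
    return amp * x * env

BRIGHT = [1.0, 0.8, 0.7, 0.6, 0.5]    # strong upper harmonics ("trumpet-like")
MELLOW = [1.0, 0.3, 0.1, 0.05, 0.02]  # dominant fundamental ("organ-like")

# Two invented melodies (note frequencies in Hz), interleaved note by note.
melody_1 = [330, 392, 440, 392, 330, 294]
melody_2 = [523, 494, 523, 587, 523, 494]
dur = 0.15  # seconds per note
interleaved = np.concatenate(
    [np.concatenate([complex_tone(f1, dur, BRIGHT),
                     complex_tone(f2, dur, MELLOW)])
     for f1, f2 in zip(melody_1, melody_2)])

wavfile.write("two_timbre_streams.wav", FS,
              (interleaved * 32767).astype(np.int16))
```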