Coding of vocalizations by single neurons in ventrolateral prefrontal cortex

<ul><li><p>Review</p><p>Coding of vocalizations by single neurons in ventrolateral prefrontal cortex</p><p>Bethany Plakke, Mark D. Diltz, Lizabeth M. Romanski *</p><p>Dept. Neurobiology &amp; Anatomy, Univ. of Rochester, Box 603, Rochester, NY 14642, USA</p><p>Hearing Research xxx (2013) 1-9. Contents lists available at ScienceDirect. Journal homepage: www.elsevier.com/locate/heares</p><p>Article history: Received 26 December 2012; Received in revised form 20 June 2013; Accepted 16 July 2013; Available online xxx.</p><p>Abstract</p><p>Activity in single prefrontal neurons has been correlated with behavioral responses, rules, task variables and stimulus features. In the non-human primate, neurons recorded in ventrolateral prefrontal cortex (VLPFC) have been found to respond to species-specific vocalizations. Previous studies have found multisensory neurons which respond to simultaneously presented faces and vocalizations in this region. Behavioral data suggest that face and vocal information are inextricably linked in animals and humans and therefore may also be tightly linked in the coding of communication calls in prefrontal neurons. In this study we therefore examined the role of VLPFC in encoding vocalization call type information. Specifically, we examined previously recorded single-unit responses from the VLPFC in awake, behaving rhesus macaques in response to 3 types of species-specific vocalizations made by 3 individual callers. Analysis of responses by vocalization call type and caller identity showed that ~19% of cells had a main effect of call type, with fewer cells encoding caller. Classification performance of VLPFC neurons was ~42% averaged across the population. When assessed at discrete time bins, classification performance reached 70 percent for coos in the first 300 ms and remained above chance for the duration of the response period, though performance was lower for other call types. In light of the sub-optimal classification performance of the majority of VLPFC neurons when only vocal information is present, and the recent evidence that most VLPFC neurons are multisensory, the potential enhancement of classification with the addition of accompanying face information is discussed and additional studies recommended. Behavioral and neuronal evidence has shown a considerable benefit in recognition and memory performance when faces and voices are presented simultaneously. In the natural environment both facial and vocalization information are present simultaneously, and neural systems no doubt evolved to integrate multisensory stimuli during recognition.</p><p>This article is part of a Special Issue entitled [...]. © 2013 Published by Elsevier B.V.</p><p>* Corresponding author. University of Rochester Medical Center, Dept. of Neurobiology and Anatomy, Box 603, 601 Elmwood Ave., Rochester, NY 14642, USA. Tel.: +1 585 273 1469; fax: +1 585 442 8766. E-mail address: (L.M. Romanski).</p><p>0378-5955/$ - see front matter © 2013 Published by Elsevier B.V.</p><p>Please cite this article in press as: Plakke, B., et al., Coding of vocalizations by single neurons in ventrolateral prefrontal cortex, Hearing Research (2013), ...13.07.011</p><p>Previous work has identified an auditory region in the ventrolateral prefrontal cortex (VLPFC) that is responsive to complex sounds including species-specific vocalizations (Romanski and Goldman-Rakic, 2002). Species-specific vocalizations are complex sound stimuli with varying temporal and spectral features which can provide unique information to the listener. The type of call delivered can indicate different behavioral contexts (food call vs. alarm call), which each elicit a unique response. Vocalizations can also provide information about the individual uttering the call, and thus would include information on gender, social status, body size, and reproductive status (Hauser and Marler, 1993; Hauser, 1996; Bradbury and Vehrencamp, 1998; Owings and Morton, 1998). Knowing how the brain encodes species-specific vocalizations can contribute to our understanding of language and communication processing.</p><p>Neurophysiological studies have begun to examine the specific acoustic features which neurons in the auditory cortex encode. Work by Wang and colleagues has demonstrated that neurons in marmoset auditory cortex lock to the temporal envelope of natural marmoset twitter calls (Wang et al., 1995). In addition, it has been shown that the auditory cortex in marmosets has a specialized pitch processing region (Bendor and Wang, 2005) and that marmosets use both temporal and spectral cues to discriminate pitch (Bendor et al., 2012). Neurons in primary auditory cortex that are not responsive to pure tones are highly selective to complex features of sounds, features that are found in vocalizations (Sadagopan and Wang, 2009).</p><p>Other key areas involved in vocalization processing include the belt and parabelt regions of auditory cortex. While neurons in auditory core areas are responsive to simple stimuli, complex sounds including noise bursts and vocalizations activate regions of the belt and parabelt (Rauschecker et al., 1995, 1997; Rauschecker, 1998; Kikuchi et al., 2010). Recently, it has been reported that the left belt and parabelt are more active to complex spectral temporal patterns (Joly et al., 2012).
Furthermore, the selectivity of single neurons in the anterolateral belt for vocalizations is similar to that of VLPFC (Romanski et al., 2005; Tian et al., 2001), with which it is reciprocally connected (Romanski et al., 1999a,b).</p><p>This hierarchy of complex sound processing continues along the superior temporal plane (STP) (Kikuchi et al., 2010; Poremba et al., 2003) to the temporal pole (Poremba et al., 2004). An area on the supratemporal plane has also been identified as a vocalization area in macaque monkeys (Petkov et al., 2008), and cells in this region are more selective to individual voices than call type (Perrodin et al., 2011). These auditory regions, including the belt, parabelt, and STP, all project to VLPFC (Hackett et al., 1999; Romanski et al., 1999a), which is presumed to be the apex of complex auditory processing in the brain.</p><p>Previous work has shown that VLPFC neurons are preferentially driven by species-specific vocalizations compared to pure tones, noise bursts and other complex sounds (Romanski and Goldman-Rakic, 2002). In terms of vocalization coding, when non-human primates are tested in passive listening conditions without specific discrimination tasks, single-unit recordings from VLPFC neurons show similar responses to calls that are similar in acoustic morphology (Romanski et al., 2005). Examination of VLPFC neuronal responses using a small set of call type categories with an oddball-type task has suggested that auditory cells in VLPFC might encode semantic category information (Gifford et al., 2005), including the discrimination between vocalizations that indicate food vs. non-food (Cohen et al., 2006).</p><p>Importantly, it has been shown that VLPFC cells are multisensory (Sugihara et al., 2006). These multisensory neurons are estimated to represent more than half of the population of the anterolateral VLPFC, including areas 12/47 and 45 (Romanski, 2012; Sugihara et al., 2006). Vocalization call type coding by cells may then depend on an integration of not only acoustic features but also the appropriate facial gesture or mouth movement that accompanies the vocalization. Relying on responses to only the auditory component of a vocalization might then lead to less accurate recognition and categorization with regard to call type, and especially to caller identity.</p><p>We asked how well VLPFC neurons encoded the vocalization call type or caller identity of species-specific vocalizations using previously recorded data (Romanski et al., 2005) in response to 3 specific vocalization call types from three individual callers (one coo, grunt, and girney from each of 3 macaque monkeys). We then analyzed the neural responses using linear discriminant analysis to assess how accurately cells classified the different call types (coos, grunts, and girneys).</p><p>1. Materials and methods</p><p>This analysis and review relies on neurophysiological recordings of 110 cells from Romanski et al. (2005). Methods are described in detail in Romanski et al. (2005), with modifications to the data analysis described here.</p><p>1.1. Neurophysiological recordings</p><p>As previously described (Romanski et al., 2005), we made extracellular recordings in two rhesus monkeys (Macaca mulatta) during the performance of a fixation task in which auditory stimuli, including species-specific vocalizations, were presented. All procedures were in accordance with NIH standards and were approved by the University of Rochester Care and Use of Research Animals committee. Ventrolateral prefrontal areas 12/47 and 45 were targeted (Preuss and Goldman-Rakic, 1991; Romanski and Goldman-Rakic, 2002; Petrides and Pandya, 2002).</p><p>1.2. Apparatus and stimuli</p><p>All training and recording was performed in a sound-attenuated room, lined with Sonex (Acoustical Solutions, Inc).
Auditory stimuli were presented to the monkeys by either a pair of Audix PH5-vs speakers (frequency response ±3 dB, 75-20,000 Hz) located on either side of a center monitor, or a centrally located Yamaha MSP5 monitor speaker (frequency response 50 Hz-40 kHz) located 30 inches from the monkey's head. The auditory stimuli ranged from 65 to 80 dB SPL measured at the level of the monkey's ear with a B &amp; K sound level meter and a Realistic audio monitor.</p><p>In this report we have focused on prefrontal responses to 3 macaque vocalizations (Coo, Grunt and Girney) recorded on the island of Cayo Santiago by Dr. Marc Hauser (Hauser and Marler, 1993). The three vocalizations are all normally given during social exchanges and were uttered by 3 adult female rhesus macaques on Cayo Santiago. Thus, in this experiment 3 call types (CO, GT and GY) by three female callers were used, yielding a 3 × 3 call matrix (Fig. 1). Coos (CO) are given during social interactions including grooming, upon finding food of low value, and when separated from the group. Grunts (GT) are given during social interactions such as an approach to groom and upon the discovery of a low-value food item, and girneys (GY) are given during grooming and when females attempt to handle infants. In addition to social context, vocalization types in the macaque vocal repertoire have also been categorized according to the presence (or absence) of particular acoustic features (Fig. 1; Hauser and Marler, 1993; Hauser, 1996). Grunts are recognized as noisy calls along with growls and pant threats, while coos are marked by the presence of a harmonic stack, often with a dominant fundamental frequency and some evidence of vocal tract filtering. Girneys are considered tonal calls and have dominant energy at a single or narrow band of frequencies. Brain regions which respond to species-specific vocalizations could do so on the basis of acoustic, behavioral-context or emotional features of various calls.</p><p>1.3. 
Experimental procedure</p><p>Each day the monkeys were brought to the experimental chamber and were prepared for extracellular recording as previously published (Romanski et al., 2005). Each isolated unit was tested with a battery of auditory and visual stimuli, which have been discussed in additional studies (Romanski et al., 2005; Sugihara et al., 2006). For this analysis, cells were tested with the 9 auditory stimuli (3 call types × 3 callers), which were repeated 8-12 times in a randomized design. During each trial a central fixation point appeared and subjects fixated for a 500 ms pretrial fixation period. Then the vocalization was presented and fixation was maintained. After the vocalization terminated, a 500 ms post-stimulus fixation period occurred. A juice reward was delivered at the termination of the post-stimulus fixation period, and the fixation requirement was then released. Losing fixation at any time during the task resulted in an aborted trial. There was a 2-3 s inter-trial interval, after which the fixation point would appear and subjects could voluntarily begin a new trial.</p><p>1.4. Data analysis</p></li><li><p>The unit activity was read into Matlab (Mathworks) and SPSS, where rasters, histograms and spike density plots of the data could be viewed and printed. For analysis purposes, mean firing rates were measured for a period of 500 ms in the intertrial interval (SA), and for 600 ms during the stimulus presentation epoch. 
Several time bins during the stimulus presentation period were used for this analysis, as defined below.</p><p>Significant changes in firing rate during presentation of any of the vocalizations were detected using a t-test which compared the spike rate during the intertrial interval with that of the stimulus period. Any cell with a response that was significantly different from the spontaneous firing rate measured in the intertrial interval (p &lt; 0.05) was considered vocalization responsive. The effects of vocalization call type and caller identity were examined with a 2-way MANOVA across multiple bins of the stimulus period using SPSS. Cells which were significant for call type or identity were further analyzed.</p><p>1.4.1. Vocalization selectivity</p><p>A selectivity index (SI) was calculated using the absolute value of the averaged responses to each stimulus minus the baseline firing rate. The SI is a measure of the depth of selectivity across all 9 vocalization stimuli presented and is defined as:</p><p>SI = [n − Σ(i=1..n) (λi / λmax)] / (n − 1)</p><p>where n is the total number of stimuli, λi is the firing rate of the neuron to the ith stimulus, and λmax is the neuron's maximum firing rate to one of the stimuli (Wirth et al., 2009). Thus, if a neuron is selective and responds to only one stimulus and not to any other stimulus, the SI would be 1. If the neuron responded identically to all stimuli in the list, the SI would be 0.</p><p>Fig. 1. Spectrograms and waveforms for the 9 vocalizations used in the current study: 3 call types (Coos, Grunts and Girneys) given by 3 callers in rows A, B and C. The calls have been previously ch[...]callers (Hauser and Marler, 1993; see Methods).</p><p>1.4.2. 
Cluster analysis of the neuronal response</p><p>To determine whether prefrontal neurons responded to sounds that were similar because of call type or because they were from the same individual caller (i.e., on the basis of membership in a functional category), we performed a cluster analysis as was done in previous studies (Romanski et al., 2005). For each cell, we computed the mean firing rate during the stimulus period...</p></li></ul>
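The responsiveness criterion in Section 1.4 (a t-test comparing the stimulus-period spike rate against the intertrial spontaneous rate, per cell) can be sketched as follows. The paper's analysis was carried out in Matlab and SPSS; this is a Python/SciPy approximation, and the trial counts and firing-rate distributions below are invented for illustration.

```python
# Responsiveness screen: compare spontaneous (intertrial) firing rate with
# the stimulus-period firing rate for one hypothetical cell.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented rates (Hz) on 10 trials of each epoch.
baseline_rates = rng.normal(5.0, 1.0, size=10)   # 500 ms intertrial window
stimulus_rates = rng.normal(9.0, 1.5, size=10)   # 600 ms stimulus window

t_stat, p_value = stats.ttest_ind(stimulus_rates, baseline_rates)
is_responsive = p_value < 0.05  # criterion described in the Methods

print(f"t = {t_stat:.2f}, p = {p_value:.4g}, responsive: {is_responsive}")
```

In practice this test would be run per cell across all vocalization trials; a significant difference in either direction (excitation or suppression) counts as responsive.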
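The selectivity index of Section 1.4.1 can be written as a small function. The two example rate vectors are hypothetical; they simply exercise the boundary cases the text describes (SI = 1 for a neuron responding to exactly one of the 9 stimuli, SI = 0 for identical responses to all of them).

```python
import numpy as np

def selectivity_index(rates):
    """Depth-of-selectivity index (Wirth et al., 2009):
    SI = (n - sum_i(rate_i / rate_max)) / (n - 1),
    where `rates` are baseline-subtracted mean responses (absolute values)."""
    rates = np.asarray(rates, dtype=float)
    n = rates.size
    return (n - np.sum(rates / rates.max())) / (n - 1)

# A cell driven by only one of 9 vocalizations -> maximally selective.
print(selectivity_index([0, 0, 0, 12, 0, 0, 0, 0, 0]))  # 1.0
# Identical responses to all 9 vocalizations -> not selective.
print(selectivity_index([6] * 9))                        # 0.0
```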
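The linear discriminant classification of call types might be sketched as below. Everything about the data here is invented (Poisson spike counts, 30 trials per call type, three spike-count bins per trial); only the overall shape of the analysis, LDA with cross-validation against a 1-in-3 chance level, follows the paper's description.

```python
# Sketch of linear discriminant classification of call type from spike counts.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
call_types = ["coo", "grunt", "girney"]

# Hypothetical data: 30 trials per call type; features are spike counts in
# three time bins, with each call type given a different mean rate.
X = np.vstack([rng.poisson(lam=4 + 3 * k, size=(30, 3)) for k in range(3)])
y = np.repeat(call_types, 30)

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)  # stratified 5-fold by default
print(f"mean classification accuracy: {scores.mean():.2f} (chance = 0.33)")
```

With real recordings, X would hold binned spike counts per trial, allowing classification accuracy to be tracked across discrete time bins as in the Results.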
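The cluster analysis of Section 1.4.2, grouping the 9 vocalizations by the similarity of the responses they evoke, could be approximated with hierarchical clustering. The 9 × 20 response matrix below is synthetic, and the paper does not specify the clustering algorithm, so average linkage on Euclidean distances is purely an assumption.

```python
# Sketch: cluster the 9 vocalizations by the mean firing rates they evoke
# across a population of hypothetical cells.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
stimuli = ["CO1", "CO2", "CO3", "GT1", "GT2", "GT3", "GY1", "GY2", "GY3"]

# Invented data: each call type shares a common 20-cell response profile,
# and each of its three exemplars adds small trial-to-trial noise.
profiles = rng.normal(size=(3, 20)) * 4 + 10
responses = np.vstack([profiles[i // 3] + rng.normal(0, 0.5, 20)
                       for i in range(9)])

Z = linkage(responses, method="average", metric="euclidean")
labels = fcluster(Z, t=3, criterion="maxclust")
print(dict(zip(stimuli, labels)))
```

If responses group by call type rather than by caller, exemplars of the same call type should receive the same cluster label, which is the contrast the analysis is designed to test.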

