
Psychological Science XX(X) 1–11. © The Author(s) 2013. Reprints and permissions: sagepub.com/journalsPermissions.nav. DOI: 10.1177/0956797613486981. pss.sagepub.com

Research Article

Seeing a Haptically Explored Face: Visual Facial-Expression Aftereffect From Haptic Adaptation to a Face

Kazumichi Matsumiya, Research Institute of Electrical Communication, Tohoku University

Abstract
Current views on face perception assume that the visual system receives only visual facial signals. However, I show that the visual perception of faces is systematically biased by adaptation to a haptically explored face. Recently, face aftereffects (FAEs; the altered perception of faces after adaptation to a face) have been demonstrated not only in visual perception but also in haptic perception; therefore, I combined the two FAEs to examine whether the visual system receives face-related signals from the haptic modality. I found that adaptation to a haptically explored facial expression on a face mask produced a visual FAE for facial expression. This cross-modal FAE was not due to explicitly imaging a face, response bias, or adaptation to local features. Furthermore, FAEs transferred from vision to haptics. These results indicate that visual face processing depends on substrates adapted by haptic faces, which suggests that face processing relies on shared representations underlying cross-modal interactions.

Keywords: face processing, cross-modal interaction, adaptation, face perception, cognitive neuroscience

Received 10/1/12; Revision accepted 3/25/13

Corresponding Author: Kazumichi Matsumiya, Research Institute of Electrical Communication, Tohoku University, 2-1-1, Katahira, Aoba-ku, Sendai 980-8577, Japan. E-mail: [email protected]

The question of how the brain processes faces is a fundamental problem in cognitive science, neuroscience, and psychology. This problem is psychologically important because faces convey information that facilitates social communication: facial information allows us to assess a person's identity and emotional state. Although most studies of face perception have considered only facial signals from the visual modality (Calder, Rhodes, Johnson, & Haxby, 2011), recent evidence suggests that the visual system may respond to nonvisual facial signals. For example, humans are able to haptically recognize a face (Dopjans, Bülthoff, & Wallraven, 2012; Kilgour & Lederman, 2002, 2006; Lederman et al., 2007), and a visually presented face can affect the haptic discrimination of facial expression (Klatzky, Abramowicz, Hamilton, & Lederman, 2011) and tactile perception on the face (Serino, Pizzoferrato, & Ladavas, 2008). However, a direct effect of nonvisual facial signals on the visual system has not been demonstrated. In the research reported here, I addressed this issue by making use of a face-adaptation paradigm.

Face adaptation is a powerful paradigm for investigating the neural representations involved in face processing. One measure of such adaptation is the face aftereffect (FAE), in which adaptation to a face belonging to a facial category, such as an expression or identity, causes a subsequently viewed neutral face to be perceived as belonging to the opposite facial category (Adams, Gray, Garner, & Graf, 2010; Anderson & Wilson, 2005; Fox & Barton, 2007; Jiang, Blanz, & O'Toole, 2006; Leopold, O'Toole, Vetter, & Blanz, 2001; Rhodes et al., 2004; Skinner & Benton, 2010; Webster, Kaping, Mizokami, & Duhamel, 2004; Webster & MacLin, 1999). The finding of larger FAEs for upright than for inverted faces (Rhodes, Evangelista, & Jeffery, 2009) is consistent with a locus in the fusiform face area (Kanwisher & Yovel, 2006). FAEs are likely associated with the adaptation of face-selective neurons in higher-level areas of the cortex, although this has yet to be demonstrated directly (Calder et al., 2011). Although face adaptation is generally assumed to occur for complex visual attributes that are products of higher-order processing, a recent study demonstrated that FAEs can also occur in haptic perception of faces (Matsumiya, 2012). In the present study, I combined the two FAEs in the visual and haptic modalities. If the perceptual system of one sensory modality responds to signals from the other sensory modality, then aftereffects should occur across sensory modalities. Indeed, such cross-modal aftereffects have been reported in motion perception (Kitagawa & Ichihara, 2002; Konkle, Wang, Hayward, & Moore, 2009).

I investigated whether the haptic exploration of a face can produce visual FAEs. If the visual system receives facial signals from the haptic modality, then adaptation to a haptically explored face should yield visual FAEs. I conducted four experiments to test this prediction. In Experiments 1 through 3, participants haptically explored a face mask and then judged the expression of a test visual face. In Experiment 1, I assessed whether visual FAEs occurred after haptic adaptation to a face. I then tested whether haptic-to-visual FAEs depended on explicitly imagined faces (Experiment 2a), whether they arose from adaptation to local features of haptic stimuli (Experiment 2b), and whether they resulted from a response bias rather than perceptual adaptation (Experiment 3). Finally, Experiment 4 was designed to examine whether FAEs also transfer from vision to haptics.

Experiment 1

Method

Stimuli and apparatus.  An exemplar of the front view of a male face was taken from a human model included in a computer-generated-imagery software application (Poser; Smith Micro Software, Aliso Viejo, CA). Two types of facial expressions (happy and sad) were generated from the exemplar using this software, and three-dimensional masks of these faces were made from epoxy-cured resin to serve as haptic face stimuli (Fig. 1a; 17 cm × 26 cm). Using the same exemplar, I created visual face stimuli (26° × 39°) by morphing the happy and sad faces to create a continuum of expressions (Fig. 1b). The participant sat in a dark room with his or her head immobilized with forehead and chin rests at a distance of 37 cm from a display. The visual stimuli were presented on a 19-in. Samsung SyncMaster 997MB cathode-ray-tube monitor (1,280 × 1,024 pixels, 75-Hz refresh rate) controlled by a Dell Precision T3400 computer running Reachin API (Reachin Technologies, Hässelby, Sweden).

The participant haptically explored the face mask, which was concealed below a mirror, during adaptation and then viewed the visual stimuli via the mirror (Fig. 2a).
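As a consistency check on these dimensions, note that the physical mask and the on-screen face subtend nearly the same visual angle at the 37-cm viewing distance. A minimal sketch (the helper function is mine, not part of the original apparatus code):

```python
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Visual angle (degrees) subtended by an object of size_cm at distance_cm."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

# The 17 cm x 26 cm face mask at the 37-cm viewing distance subtends roughly
# the 26 deg x 39 deg reported for the on-screen face stimuli.
print(round(visual_angle_deg(17, 37)))  # 26
print(round(visual_angle_deg(26, 37)))  # 39
```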

Participants.  Twenty-one participants (17 males, 4 females; age range = 21–39 years) were recruited for Experiment 1. All participants had normal or corrected-to-normal vision.

Preliminary tests.  In the first preliminary test sessions, before the main experiment, participants were asked to discriminate between two emotional expressions (happiness and sadness) by haptically exploring a face mask concealed below the mirror and to report their answer. Participants explored each face mask until they were able to provide an answer. The two face masks were explored five times each in random order.

After the first preliminary tests, each participant took part in a second preliminary test session involving the same stimulus sequence as the main experiment (see Adaptation Trials) in order to measure the time it took participants to remove their hands from a face mask after exploring it haptically. During this session, participants were instructed to press two buttons, one with each hand, near the face mask after they had haptically explored the face mask for 5 s during adaptation. The time it took participants to press the buttons after adaptation was, on average, 1.01 s (SD = 0.12 s). On the basis of these data, participants were given 1.5 s to remove their hands from the face mask after adaptation in the main experiments. A test stimulus was then presented for 0.5 s.

Adaptation trials.  I measured the magnitude of the visual FAE using the method of constant stimuli. The morph rate of the test visual face was varied from 30% to 50% sadness in 4% increments (Fig. 1b). The participant haptically explored the face mask for 5 s during adaptation while looking at a fixation point presented on the display. Next, the fixation point was shifted to a new position (1.0° below the first fixation point) for presentation of the test stimulus. The participant made an immediate saccade to the new position and removed his or her hands from the face mask as soon as possible. Then, 1.5 s after the shift of the fixation point, the test stimulus (a visual face) was presented for 0.5 s. The stimulus sequence is depicted in Figure 2b. After the test presentation, the participant made a two-alternative forced-choice response, classifying the test stimulus as happy or sad.

Three adaptation conditions were defined: (a) adaptation to a happy face mask, (b) adaptation to a sad face mask, and (c) no adaptation. Experiment 1 consisted of two sessions of 48 trials for each adaptation condition, with the order of sessions counterbalanced across participants.
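A minimal sketch of how such a constant-stimuli session could be assembled. The counts follow from the design (48 trials per session over six morph levels, so two sessions yield the 16 observations per level used in the analysis); the per-session randomization scheme is an assumption:

```python
import random

MORPH_LEVELS = [30, 34, 38, 42, 46, 50]   # % sadness in the test face (Fig. 1b)
TRIALS_PER_SESSION = 48                   # two sessions per condition -> 96 trials,
                                          # i.e., 16 observations per morph level

def build_session(condition: str) -> list[dict]:
    """One 48-trial session: each morph level repeated 8 times, in random order."""
    reps = TRIALS_PER_SESSION // len(MORPH_LEVELS)  # 8
    trials = [{"condition": condition, "morph_pct_sad": level}
              for level in MORPH_LEVELS for _ in range(reps)]
    random.shuffle(trials)
    return trials

session = build_session("happy_mask")
print(len(session), session[0])
```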


Fig. 1.  Haptic and visual stimuli. Haptic stimuli consisted of face masks with happy and sad expressions, and visual stimuli consisted of images based on the face masks (a). The two visual stimuli were morphed to create faces showing six degrees of sadness (b).

Analysis.  In the first preliminary tests, the percentage of correct responses for discriminating between the two emotional expressions (happy vs. sad) was calculated for each face mask. A t test was performed to confirm whether the percentages of correct responses were larger than the chance level of 50%. For Experiment 1, the magnitude of the FAE was defined as the shift in the 50% criterion value (the point of subjective equality, or PSE) for each adaptation condition relative to the nonadaptation condition. For each participant's data set (16 observations per morph value for each condition), the PSE for each condition was estimated from the mean of the fitted cumulative Gaussian psychometric curve using probit analysis, as sketched below. The PSE estimated for each participant served as one sample in the statistical analysis. To determine whether FAE magnitude differed among the adaptation conditions, I performed a repeated measures analysis of variance (ANOVA) with adaptation condition as a factor.
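The probit step can be illustrated as follows. This is a minimal sketch assuming trial-level binary data; the `pse_probit` helper and the simulated responses are mine, not the study's analysis code:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def pse_probit(morph_pct_sad, sad_responses):
    """Probit regression of binary 'sad' responses on morph level.
    The model is P('sad') = Phi(b0 + b1 * x); the PSE (50% point) is -b0 / b1."""
    X = sm.add_constant(np.asarray(morph_pct_sad, dtype=float))
    fit = sm.Probit(np.asarray(sad_responses), X).fit(disp=0)
    b0, b1 = fit.params
    return -b0 / b1

# Simulated data: six morph levels, 16 binary judgments per level.
rng = np.random.default_rng(0)
levels = np.repeat([30, 34, 38, 42, 46, 50], 16)
responses = rng.binomial(1, norm.cdf(0.25 * (levels - 40.0)))  # true PSE = 40%

print(f"PSE = {pse_probit(levels, responses):.1f}% sadness")
# FAE magnitude = PSE(no adaptation) - PSE(adaptation), computed per participant.
```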

Fig. 2.  Experimental apparatus and procedure. The apparatus (a) consisted of a CRT display showing visual stimuli, which the participant viewed via a mirror. The display surface was set to be perpendicular to the line of sight. The participant placed his or her hands on a face mask concealed beneath the mirror. In Experiment 1 (b), during adaptation, the participant haptically explored the face mask for 5 s while looking at a fixation point presented on the display. The fixation point was then shifted to a new position for the presentation of the test stimulus, and the participant removed his or her hands from the face mask and made an immediate saccade to the new position. Next, 1.5 s after the shift, the test stimulus (a visual face) was presented for 0.5 s, and the participant classified it as either happy or sad. Experiments 2a and 3 (c) were similar to Experiment 1 except that a stream of digits (0–9) and capital letters (the 26 letters of the English alphabet) was presented in the center of the display during adaptation; in Experiment 3, the duration of adaptation (2.5 s, 5 s, 10 s, or 20 s) was also varied. Under the visual-attentional-load condition, the participant was required to count the number of digits presented in the stream during adaptation.

Results and discussion

In the preliminary test, the facial expression was discriminated at an average rate of 94.8% (SE = 1.4%) for the happy face mask and 95.4% (SE = 1.4%) for the sad face mask. A t test confirmed that the percentage of correct responses for each face mask was significantly higher than the chance level of 50%: happy face mask, t(60) = 32.08, p < .0001; sad face mask, t(60) = 33.53, p < .0001. Results from this preliminary test confirmed that all participants could correctly discriminate between the happy and sad face masks, even without learning, which is consistent with previous findings (Lederman et al., 2007).

Following the preliminary test, 21 participants were tested in Experiment 1. I found that after haptic adaptation to a facial expression, visual face perception was biased away from the expression of the adapting haptic face (Fig. 3a). Haptic adaptation to a facial expression shifted the psychometric curve relative to the nonadaptation condition such that the visual faces were perceived as having the opposite facial expression. The 50% criterion values (PSEs) were estimated using probit analysis. The PSE changed significantly across the adaptation conditions (Fig. 3b), F(2, 40) = 35.61, p < .0001; the PSE for each adaptation condition was significantly different from that for the nonadaptation condition (happy face mask: t(20) = 8.41, Bonferroni corrected, p < .0001; sad face mask: t(20) = 3.66, p < .005). FAE magnitude was defined as the curve shift between the adaptation and nonadaptation conditions (Fig. 3c). FAE magnitude after adaptation to a happy face mask was significantly different from that after adaptation to a sad face mask, F(1, 20) = 66.40, p < .0001. These results indicate that adaptation to a haptic face with a specific facial expression causes a subsequently viewed face to be perceived as having the opposite facial expression.
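A sketch of this group-level analysis, assuming one fitted PSE per participant per condition in a long-format table; the values are simulated placeholders, and `AnovaRM` from statsmodels is one standard way to run the repeated measures ANOVA:

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
base = rng.normal(40.0, 2.0, 21)  # simulated per-participant baseline PSEs (%)
df = pd.DataFrame({
    "participant": np.tile(np.arange(21), 3),
    "condition": np.repeat(["happy", "none", "sad"], 21),
    # Illustrative shifts only; the real PSEs come from the probit fits.
    "pse": np.concatenate([base - 2.0, base, base + 1.0]) + rng.normal(0, 0.5, 63),
})

# Repeated measures ANOVA with adaptation condition as the within-subject factor.
print(AnovaRM(df, depvar="pse", subject="participant", within=["condition"]).fit())

# Paired t test of one adaptation condition against no adaptation,
# Bonferroni-corrected for the two comparisons.
happy = df.loc[df.condition == "happy", "pse"].to_numpy()
none_ = df.loc[df.condition == "none", "pse"].to_numpy()
t, p = stats.ttest_rel(happy, none_)
print(f"happy vs. none: t(20) = {t:.2f}, corrected p = {min(p * 2, 1.0):.4f}")
```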

Experiments 2a and 2b

Could haptic-to-visual FAEs simply be a by-product of participants' explicitly imaging a visual face while haptically exploring a face? A recent study showed that visually imagined faces can induce FAEs (Ryu, Borrmann, & Chaudhuri, 2008). Therefore, I ran a control experiment (Experiment 2a) in which participants' attention and working memory were occupied to help suppress explicit visual imagery. Moreover, haptic-to-visual FAEs might arise from adaptation to local features of haptic stimuli. To exclude this possibility, I used inverted faces (Experiment 2b). Previous work has shown larger FAEs for upright faces than for inverted faces (Rhodes et al., 2009). If haptic-to-visual FAEs are not due to adaptation to local haptic features, FAE magnitude for inverted faces should be reduced relative to that for upright faces.

Method

Twenty new participants (17 males, 3 females; age range = 21–27 years) were recruited for Experiments 2a and 2b. A visual stream of digits (0–9; 3° × 3°) or capital letters (the 26 letters of the English alphabet; 3° × 3°) was presented in the center of the display during adaptation (Fig. 2c). Experiment 2a was identical to Experiment 1 except for the presentation of this stream, and Experiment 2b was identical to Experiment 1 except for the presentation of the stream and the orientation of the faces. In the stream, each digit or letter was presented for 30 ms, with a stimulus onset asynchrony of 200 ms. During adaptation, the participant was required either to actively attend to the visual stream of digits or letters, which was presented at fixation (visual-attentional-load condition), or to passively view the stream (passive-viewing condition). After the test presentation, the participant made a two-alternative forced-choice response to classify the test stimulus as happy or sad; participants who had actively attended to the stream also reported whether the total number of streamed digits was even or odd.

In Experiment 2a, haptic and visual face stimuli were presented in an upright orientation, and four adaptation conditions were defined: (a) adaptation to a happy face mask with visual attentional load, (b) adaptation to a sad face mask with visual attentional load, (c) adaptation to a happy face mask with passive viewing, and (d) adaptation to a sad face mask with passive viewing. In Experiment 2b, haptic and visual face stimuli were presented in an inverted orientation, and two adaptation conditions were defined: (a) adaptation to a happy face mask with visual attentional load and (b) adaptation to a sad face mask with visual attentional load. Each participant performed the experiments in two sessions consisting of 48 trials for each adaptation condition, with the order counterbalanced across participants. The timing of the character stream is sketched below.
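A minimal sketch of the stream's timing; uniform sampling from the combined digit-and-letter pool is an assumption, since the exact sampling scheme is not specified:

```python
import random
import string

ITEM_DURATION_MS = 30   # each character is visible for 30 ms
SOA_MS = 200            # stimulus onset asynchrony between successive characters

def build_stream(adaptation_s: float) -> list[tuple[int, str]]:
    """Schedule of (onset_ms, character) pairs filling the adaptation period.
    Characters are drawn from digits 0-9 and the 26 capital letters."""
    pool = string.digits + string.ascii_uppercase
    n_items = int(adaptation_s * 1000) // SOA_MS
    return [(i * SOA_MS, random.choice(pool)) for i in range(n_items)]

stream = build_stream(5.0)            # 5-s adaptation -> 25 items
n_digits = sum(ch.isdigit() for _, ch in stream)
print(n_digits % 2 == 0)              # participant reports whether the count is even
```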

For data analysis, I performed a repeated measures ANOVA with condition (visual attentional load vs. passive viewing) as a factor to determine whether FAE magnitude differed between these conditions in Experiment 2a. In addition, for the visual-attentional-load/upright-face condition (Experiment 2a), the passive-viewing/upright-face condition (Experiment 2a), and the visual-attentional-load/inverted-face condition (Experiment 2b), a t test was performed to confirm whether the shift in PSE values was larger than 0%.

Results and discussion

Fig. 3.  Results of Experiment 1 (N = 21). The graph in (a) shows psychometric functions for visual facial-expression discrimination with and without adaptation to haptic faces for the three adaptation conditions. The graph in (b) shows values for the point of subjective equality as a function of adaptation condition. The graph in (c) shows the mean magnitude of the cross-modal face aftereffect as a function of adaptation condition. Positive values denote bias toward a happy face, and negative values denote bias toward a sad face. Error bars represent standard errors of the mean.

Fig. 4.  Results of Experiments 2a, 2b, and 3 (N = 20). The graph in (a) shows psychometric functions for visual facial-expression discrimination with and without visual attentional load during adaptation to an upright face (Experiment 2a) and with visual attentional load during adaptation to an inverted face (Experiment 2b), for the happy (solid circles) and sad (open squares) face-adaptation conditions. The graph in (b) shows the mean magnitude of the cross-modal face aftereffect (FAE) with and without visual attentional load during adaptation to an upright face (Experiment 2a) and with visual attentional load during adaptation to an inverted face (Experiment 2b). The graph in (c) shows the mean magnitude of the cross-modal FAE as a function of adaptation time (Experiment 3). Error bars represent standard errors of the mean.

Under the visual-attentional-load condition, the participants counted the number of digits presented in the stream. No participant reported explicitly imagining a face during adaptation to a haptic face. Nevertheless, I again found that adaptation to an upright haptic face produced visual FAEs, in both the passive-viewing and visual-attentional-load conditions (Figs. 4a and 4b; for results from the analysis of the PSE, see Figure S1a in the Supplemental Material available online). FAE magnitude was defined as the curve shift between the conditions of adaptation to a happy face mask and adaptation to a sad face mask. It was significantly greater than 0 in both the passive-viewing and visual-attentional-load conditions (Fig. 4b): passive-viewing condition, t(19) = 6.74, p < .0001; visual-attentional-load condition, t(19) = 7.06, p < .0001. FAE magnitude was not significantly different between these two conditions, F(1, 19) = 0.0062, p = .94. These results suggest that haptic-to-visual FAEs do not depend on participants' explicitly imaging visual faces while exploring a face haptically. FAE magnitude was smaller in Experiment 2a than in Experiment 1, which suggests a contribution of explicit visual imagery, and haptic-to-visual FAEs occurred without spatial overlap between vision and haptics, which suggests cross-modal interactions in face-centered coordinates (see the Visual Imagery and Relative Location sections of the Supplemental Material).

I confirmed that the participants could discriminate between happy and sad face masks even when the masks were presented in an inverted orientation (see Preliminary Test for Experiment 2b in the Supplemental Material). However, the haptic-to-visual FAE disappeared with inverted faces (Figs. 4a and 4b; for results from the PSE analysis, see Figure S1a in the Supplemental Material). FAE magnitude did not differ significantly from 0 (Fig. 4b), t(19) = 0.35, p = .73. These results indicate that haptic-to-visual FAEs are not due to adaptation to local features.

Experiment 3

To exclude the possibility that haptic-to-visual FAEs are the result of a response bias rather than perceptual adaptation, I measured haptic-to-visual FAEs as a function of adaptation time. If perceptual adaptation produces a haptic-to-visual FAE, then the magnitude of the aftereffect should depend on the adaptation time.

Method

Twenty new participants (18 males, 2 females; age range = 21–31 years) were recruited for Experiment 3, which was identical to Experiment 2a except for the duration of adaptation. Two adaptation conditions were defined: (a) adaptation to a happy face mask with visual attentional load and (b) adaptation to a sad face mask with visual attentional load. The duration of the adaptation period (2.5 s, 5 s, 10 s, or 20 s) varied randomly from trial to trial. Each participant therefore completed eight sessions of 48 trials for each adaptation condition, with the order of conditions counterbalanced. For data analysis, a repeated measures ANOVA was performed with adaptation duration as a factor.

Results and discussion

FAE magnitude increased with adaptation time (Fig. 4c), F(3, 19) = 21.88, p < .0001 (for results from the PSE analysis, see Figure S1b in the Supplemental Material). Its dependence on adaptation time was similar to that evidenced by traditional perceptual aftereffects and standard FAEs (Leopold, Rhodes, Muller, & Jeffery, 2005). These results suggest that perceptual adaptation, rather than a response bias, produces haptic-to-visual FAEs.

Experiment 4

Method

I examined whether adaptation to a visual face induces a haptic FAE. If the processing of visual and haptic faces depends on shared representations, FAEs should transfer not only from haptics to vision but also from vision to haptics. Participants adapted visually to a happy or sad face for 20 s, after which they haptically explored a neutral face mask for 5 s (Fig. 5a). Each participant explored the test face mask once for each adaptation condition. I then calculated the percentage of "sad" responses for each adaptation condition. (For full methods, see the Experiment 4 section in the Supplemental Material.)

Results and discussion

A larger percentage of “sad” responses occurred after visual adaptation to the happy face than after visual adaptation to the sad face even though both of these conditions used the same neutral face mask as the test stimulus (Fig. 5b), χ2(1, N = 16) = 4.57, p < .05. (For a comparison between the adaptation and nonadaptation conditions, see the Experiment 4 section in the Supplemental Material.) This finding indicates that cross-modal FAEs operate not only from haptics to vision but also from vision to haptics, which suggests the existence of shared facial representations between vision and haptics.
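In outline, this comparison is a chi-square test on the 2 × 2 table of classifications by adaptation condition. The counts below are hypothetical stand-ins, since the article reports only the test statistic:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts of "sad" vs. "happy" classifications of the neutral test
# mask (16 participants, one judgment per adaptation condition).
#                      "sad"  "happy"
table = [[12, 4],    # after visual adaptation to the happy face
         [ 6, 10]]   # after visual adaptation to the sad face

# correction=False gives the standard Pearson chi-square (no Yates correction).
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```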

General Discussion

The present results demonstrate that visual face processing is susceptible to adaptation in the haptic modality. Using a face-adaptation paradigm, I found that adaptation in the haptic modality to a face belonging to one facial category causes a visual illusion of faces belonging to the opposite facial category. Importantly, this haptic-to-visual FAE was not due to explicit visual imagery generated while exploring a face haptically, to adaptation to low-level features of a haptic face stimulus, or to response bias. Furthermore, visual face adaptation caused a haptic FAE.

A critical issue is whether perceptual adaptation causes haptic-to-visual FAEs. A characteristic of perceptual adaptation is that the magnitude of an aftereffect increases logarithmically with longer exposure to the adaptation stimulus (Leopold et al., 2005). Consistent with this account, the present results showed that the magnitude of the haptic-to-visual FAE increased linearly with adaptation time in semi-log coordinates (Fig. 4c), revealing the expected logarithmic relationship between adaptation time and FAE magnitude. This suggests that perceptual adaptation produces haptic-to-visual FAEs: The haptic-to-visual FAE is a genuine cross-modal aftereffect, in the same class as cross-modal motion aftereffects (Konkle & Moore, 2009).
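A sketch of that semi-log check: a logarithmic dependence is a straight line against ln(time). The magnitudes below are illustrative placeholders standing in for the Figure 4c means:

```python
import numpy as np

# Placeholder mean FAE magnitudes (%) at the four adaptation durations used
# in Experiment 3; the real values are plotted in Figure 4c.
t = np.array([2.5, 5.0, 10.0, 20.0])          # adaptation time (s)
magnitude = np.array([0.9, 1.6, 2.4, 3.1])    # cross-modal FAE magnitude (%)

# Logarithmic growth, magnitude = a + b * ln(t), fitted here by least squares.
b, a = np.polyfit(np.log(t), magnitude, deg=1)
print(f"magnitude ~ {a:.2f} + {b:.2f} * ln(t)")
```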

What is the underlying mechanism for haptic-to-visual FAEs? FAEs are generally assumed to occur because of changes in the response properties of face-selective neurons in higher-level areas of the visual cortex. However, haptic-to-visual FAEs demonstrate that the processing of visual faces depends on substrates adapted by haptic faces. Furthermore, visual-to-haptic FAEs demonstrate that the processing of haptic faces depends on substrates adapted by visual faces. These findings suggest that facial information may be coded in a shared representation between vision and haptics in the brain. Indeed, a recent neuroimaging study using functional MRI (fMRI) showed that a cortical network involving the inferior frontal gyrus, inferior parietal lobe, and superior temporal sulcus contains partially overlapping neural substrates for visual and haptic face processing (Kitada, Johnsrude, Kochiyama, & Lederman, 2010).

Although this fMRI finding is consistent with the view that facial information is coded in a shared neural representation between vision and haptics, questions remain. The spatial resolution of current fMRI technology is coarse, and a typical voxel may include a few million neurons (Grill-Spector & Sayres, 2008; Logothetis, 2008). For that reason, it is possible that what appear in fMRI to be the same regions activated in visual and haptic tasks are in fact neighboring but distinct neural populations for visual and haptic face processing, which raises the possibility that haptic face processing and visual face processing may be isolated and not show cross-modal interactions. However, the present results provide strong behavioral evidence that haptic facial signals can interact with visual facial representations. This clearly supports the fMRI finding that facial signals from the visual and haptic modalities are represented by partially overlapping neural populations.

Fig. 5.  Stimuli (a) and results (b) of Experiment 4 (N = 16). Visual-adaptation stimuli were the same as those used in Experiment 1, but this experiment included a haptic test stimulus with a neutral expression. The graph (b) shows the percentage of "sad" responses to the neutral face mask for each adaptation condition.

Many everyday objects and events provide us with cues that are available to different sensory systems. It is implicitly assumed that we engage in multisensory processing when encountering information from multiple sensory modalities (e.g., in speech perception or object recognition; Ernst & Bülthoff, 2004; Kamachi, Hill, Lander, & Vatikiotis-Bateson, 2003; Roseboom, Nishida, Fujisaki, & Arnold, 2011; Spence & Driver, 2004; Stein & Stanford, 2008), but not in cases in which information from a single sensory modality is predominantly used (e.g., face recognition; Calder et al., 2011). However, previous studies have shown that the visual and haptic systems are both capable of face processing (Kilgour & Lederman, 2002, 2006; Lederman et al., 2007; Matsumiya, 2012), even though we usually recognize faces through sight and almost never explore them haptically in daily life. Although recent studies have demonstrated that vision influences haptic facial judgments (Dopjans, Wallraven, & Bülthoff, 2009; Klatzky et al., 2011), current views on face processing hold that vision dominates facial judgments in the haptic modality. However, the present results demonstrate that haptics can influence the perception of visual faces, and also that vision can influence the perception of haptic faces. These findings suggest that face processing supports bidirectional cross-modal interactions. Together with the view that face processing depends on neural processing shared between visual and haptic faces, they suggest that even face processing is essentially multisensory, which is consistent with the emerging view that much of neocortex is multisensory (Ghazanfar & Schroeder, 2006). Furthermore, because these findings indicate that haptic facial signals interact with visual face perception, they point to possible applications in visual telecommunication and in the development of aids for people with visual impairments.

The present results show that haptic-to-visual FAEs occur with upright faces but not with inverted faces, which suggests that configural, not featural, facial information is shared across vision and haptics. In this study, however, the magnitude of the haptic-to-visual FAE was well below that of the visual-to-visual FAE reported in other studies (Afraz & Cavanagh, 2008; Fox & Barton, 2007). One possible reason for this difference is that visual face adaptation might tap not only configural information but also vision-specific information, such as information about facial color; if so, the magnitude of the visual-to-visual FAE should be larger than that of the haptic-to-visual FAE. Indeed, recent studies of face-recognition memory have shown that facial information is largely modality specific but that configural facial information can be shared across vision and haptics (Casey & Newell, 2007; Dopjans et al., 2009). Taken together, these results imply that face processing might not be based on a single visuo-haptic representation alone: Separate unimodal representations might be integrated into a single bimodal representation. In fact, such a model has been proposed for object processing (Lacey, Pappas, Kreps, Lee, & Sathian, 2009).

It has been shown that visual FAEs occur not only for facial expression (Adams et al., 2010; Fox & Barton, 2007; Skinner & Benton, 2010; Webster et al., 2004) but also for facial identity (Afraz & Cavanagh, 2008; Leopold et al., 2001; Rhodes et al., 2009). However, the present study revealed cross-modal FAEs only for facial expression. Future research is needed to examine whether cross-modal FAEs also occur for facial identity.

Previous studies of cross-modal interactions in face processing have examined the effect of nonvisual signals, such as auditory or haptic signals, on face perception through the simultaneous presentation of visual and nonvisual stimuli (de Gelder & Vroomen, 2000; Klatzky et al., 2011; Smith, Grabowecky, & Suzuki, 2007). In contrast, the present results, obtained with an adaptation paradigm, demonstrate that the presentation of a haptic face alone can influence subsequent visual face perception, revealing that visual face processing depends on substrates adapted by haptic faces. Moreover, they demonstrate that the presentation of a visual face alone can influence subsequent haptic face perception, revealing that haptic face processing depends on substrates adapted by visual faces. These findings suggest that face processing relies on shared neural representations underlying cross-modal interactions.

Author Contributions

K. Matsumiya is the sole author of this article and is responsible for its content.

Acknowledgments

I thank Hanae Ishi for teaching me how to morph visual face stimuli.

Declaration of Conflicting Interests

The author declared that he had no conflicts of interest with respect to his authorship or the publication of this article.

Funding

This work was supported by a Grant-in-Aid for Scientific Research on Innovative Areas, "Face Perception and Recognition," from MEXT KAKENHI (23119704) and by the Research Institute of Electrical Communication, Tohoku University, Original Research Support Program to K. Matsumiya.

Supplemental Material

Additional supporting information may be found at http://pss.sagepub.com/content/by/supplemental-data


References

Adams, W. J., Gray, K. L., Garner, M., & Graf, E. W. (2010). High-level face adaptation without awareness. Psychological Science, 21, 205–210.

Afraz, S. R., & Cavanagh, P. (2008). Retinotopy of the face aftereffect. Vision Research, 48, 42–54.

Anderson, N. D., & Wilson, H. R. (2005). The nature of synthetic face adaptation. Vision Research, 45, 1815–1828.

Calder, A. J., Rhodes, G., Johnson, M. H., & Haxby, J. V. (2011). The Oxford handbook of face perception. New York, NY: Oxford University Press.

Casey, S. J., & Newell, F. N. (2007). Are representations of unfamiliar faces independent of encoding modality? Neuropsychologia, 45, 506–513.

de Gelder, B., & Vroomen, J. (2000). The perception of emotions by ear and by eye. Cognition & Emotion, 14, 289–311.

Dopjans, L., Bülthoff, H. H., & Wallraven, C. (2012). Serial exploration of faces: Comparing vision and touch. Journal of Vision, 12(1), Article 6. Retrieved from http://www.journalofvision.org/content/12/1/6.full

Dopjans, L., Wallraven, C., & Bülthoff, H. H. (2009). Cross-modal transfer in visual and haptic face recognition. IEEE Transactions on Haptics, 2, 236–240.

Ernst, M. O., & Bülthoff, H. H. (2004). Merging the senses into a robust percept. Trends in Cognitive Sciences, 8, 162–169.

Fox, C. J., & Barton, J. J. (2007). What is adapted in face adaptation? The neural representations of expression in the human visual system. Brain Research, 1127, 80–89.

Ghazanfar, A. A., & Schroeder, C. E. (2006). Is neocortex essentially multisensory? Trends in Cognitive Sciences, 10, 278–285.

Grill-Spector, K., & Sayres, R. (2008). Object recognition: Insights from advances in fMRI methods. Current Directions in Psychological Science, 17, 73–79.

Jiang, F., Blanz, V., & O'Toole, A. J. (2006). Probing the visual representation of faces with adaptation: A view from the other side of the mean. Psychological Science, 17, 493–500.

Kamachi, M., Hill, H., Lander, K., & Vatikiotis-Bateson, E. (2003). “Putting the face to the voice”: Matching identity across modality. Current Biology, 13, 1709–1714.

Kanwisher, N., & Yovel, G. (2006). The fusiform face area: A cortical region specialized for the perception of faces. Philosophical Transactions of the Royal Society B: Biological Sciences, 361, 2109–2128.

Kilgour, A. R., & Lederman, S. J. (2002). Face recognition by hand. Perception & Psychophysics, 64, 339–352.

Kilgour, A. R., & Lederman, S. J. (2006). A haptic face-inversion effect. Perception, 35, 921–931.

Kitada, R., Johnsrude, I. S., Kochiyama, T., & Lederman, S. J. (2010). Brain networks involved in haptic and visual identification of facial expressions of emotion: An fMRI study. NeuroImage, 49, 1677–1689.

Kitagawa, N., & Ichihara, S. (2002). Hearing visual motion in depth. Nature, 416, 172–174.

Klatzky, R. L., Abramowicz, A., Hamilton, C., & Lederman, S. J. (2011). Irrelevant visual faces influence haptic identification of facial expressions of emotion. Attention, Perception, & Psychophysics, 73, 521–530.

Konkle, T., & Moore, C. I. (2009). What can crossmodal aftereffects reveal about neural representation and dynamics? Communicative & Integrative Biology, 2, 479–481.

Konkle, T., Wang, Q., Hayward, V., & Moore, C. I. (2009). Motion aftereffects transfer between touch and vision. Current Biology, 19, 745–750.

Lacey, S., Pappas, M., Kreps, A., Lee, K., & Sathian, K. (2009). Perceptual learning of view-independence in visuo-haptic object representations. Experimental Brain Research, 198, 329–337.

Lederman, S. J., Klatzky, R. L., Abramowicz, A., Salsman, K., Kitada, R., & Hamilton, C. (2007). Haptic recognition of static and dynamic expressions of emotion in the live face. Psychological Science, 18, 158–164.

Leopold, D. A., O’Toole, A. J., Vetter, T., & Blanz, V. (2001). Prototype-referenced shape encoding revealed by high-level aftereffects. Nature Neuroscience, 4, 89–94.

Leopold, D. A., Rhodes, G., Muller, K. M., & Jeffery, L. (2005). The dynamics of visual adaptation to faces. Proceedings of the Royal Society B: Biological Sciences, 272, 897–904.

Logothetis, N. K. (2008). What we can do and what we cannot do with fMRI. Nature, 453, 869–878.

Matsumiya, K. (2012). Haptic face aftereffect. i-Perception, 3, 97–100.

Rhodes, G., Evangelista, E., & Jeffery, L. (2009). Orientation-sensitivity of face identity aftereffects. Vision Research, 49, 2379–2385.

Rhodes, G., Jeffery, L., Watson, T. L., Jaquet, E., Winkler, C., & Clifford, C. W. (2004). Orientation-contingent face aftereffects and implications for face-coding mechanisms. Current Biology, 14, 2119–2123.

Roseboom, W., Nishida, S., Fujisaki, W., & Arnold, D. H. (2011). Audio-visual speech timing sensitivity is enhanced in cluttered conditions. PLoS ONE, 6(4), e18309. Retrieved from http://www.plosone.org/article/info:doi/10.1371/journal.pone.0018309

Ryu, J. J., Borrmann, K., & Chaudhuri, A. (2008). Imagine Jane and identify John: Face identity aftereffects induced by imagined faces. PLoS ONE, 3(5), e2195. Retrieved from http://www.plosone.org/article/info:doi/10.1371/journal.pone.0002195

Serino, A., Pizzoferrato, F., & Ladavas, E. (2008). Viewing a face (especially one’s own face) being touched enhances tactile perception on the face. Psychological Science, 19, 434–438.

Skinner, A. L., & Benton, C. P. (2010). Anti-expression aftereffects reveal prototype-referenced coding of facial expressions. Psychological Science, 21, 1248–1253.

Smith, E. L., Grabowecky, M., & Suzuki, S. (2007). Auditory-visual crossmodal integration in perception of face gender. Current Biology, 17, 1680–1685.

Spence, C., & Driver, J. (2004). Crossmodal space and crossmodal attention. New York, NY: Oxford University Press.

Stein, B. E., & Stanford, T. R. (2008). Multisensory integration: Current issues from the perspective of the single neuron. Nature Reviews Neuroscience, 9, 255–266.

Webster, M. A., Kaping, D., Mizokami, Y., & Duhamel, P. (2004). Adaptation to natural facial categories. Nature, 428, 557–561.

Webster, M. A., & MacLin, O. H. (1999). Figural aftereffects in the perception of faces. Psychonomic Bulletin & Review, 6, 647–653.
