
Supporting Information
Carlos F. Benitez-Quiroz a,b, Ramprakash Srinivasan a,b, and Aleix M. Martinez a,b,1

a Department of Electrical and Computer Engineering; b Center for Cognitive and Brain Sciences, The Ohio State University, Columbus, OH 43210

SI Text

Results Are Not Dependent on Skin Color

To demonstrate that the results of Experiments 1-4 do not depend on the expresser's skin color, we divided the images of our database into four subsets. The first subset includes the images of the individuals with the darkest skin tones. Subsequent subsets include individuals with lighter and lighter skin tones. The fourth subset includes the individuals with the lightest skin tones. These subsets were obtained by clustering the skin luminance of each individual's neutral face using k-means. This yielded four clusters, from darkest to lightest skin tone, with 41, 41, 53 and 48 images, respectively.
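The split into skin-tone subsets reduces to a one-dimensional k-means clustering. The following is a minimal sketch of that step, assuming a hypothetical array `neutral_luminance` holding each subject's mean skin luminance measured from the neutral image (the paper's exact luminance extraction is not reproduced here):

```python
# Sketch of the skin-tone split: k-means on per-subject neutral-face luminance.
# `neutral_luminance` is a hypothetical stand-in for the measured values.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
neutral_luminance = rng.uniform(20, 80, size=184)          # placeholder luminance values

km = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = km.fit_predict(neutral_luminance.reshape(-1, 1))  # cluster on the single feature

# Relabel clusters so that subset 1 is darkest and subset 4 is lightest.
order = np.argsort(km.cluster_centers_.ravel())            # cluster ids sorted dark -> light
dark_to_light = np.argsort(order)[labels] + 1              # subset index (1..4) per subject

for k in range(1, 5):
    print(f"subset {k}: {np.count_nonzero(dark_to_light == k)} subjects")
```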

Figure S3 shows the classification accuracies over all emotion categories for each of these skin-tone subsets. Figures S3a-b show the results of the computational analyses of Experiments 1 and 2. Figures S3c-h plot the results of the behavioral experiments, Experiments 3 and 4. Note that there are two experiments within Experiment 3: the first is a 2-alternative forced choice (2AFC) experiment, the second a 6-alternative forced choice (6AFC) experiment. As can be appreciated in these plots, there is no bias toward any skin tone. That is, the diagnostic color features are equally visible in people of different skin colors.

Dimensionality of the Color Space

To further explore the form and dimensionality of the color space, we performed a Principal Component Analysis (PCA). Specifically, we wish to assess whether there is an underlying color space of fewer than 18 (orthogonal or nearly orthogonal) dimensions. If the 18 dimensions of our color space are indeed (nearly) orthogonal to one another, this would suggest that the space is defined by the 18 emotion categories used in Experiments 1 and 2. If, however, some of these emotion categories are highly correlated, then the dimensions of the color space will be embedded in a space of lower dimensionality. PCA allows us to test this: if the dimensions of our space mostly represent uncorrelated information, then the 18 eigenvalues of the eigenvectors of our space will be similar to one another, rather than decreasing exponentially.
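A minimal sketch of this eigenvalue check, assuming a hypothetical array `Xbar` that stacks the 18 category color vectors x̄_j as rows (the feature dimensionality below is only illustrative):

```python
# Sketch of the dimensionality check: a flat PCA spectrum means no
# lower-dimensional subspace captures the 18 emotion categories.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
Xbar = rng.standard_normal((18, 720))   # stand-in for the 18 category vectors x̄_j

pca = PCA().fit(Xbar)
print(np.round(pca.explained_variance_ratio_, 3))
# Similar values across components (no sharp exponential decay) would indicate
# nearly uncorrelated dimensions; a fast decay would indicate redundancy.
```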

Figure S5 plots the eigenvalues of the color space of the 18 emotions of Experiments 1 and 2. As can be appreciated in the figure, all dimensions have similar variances and are thus necessary to represent the 18 emotion categories; i.e., there is no space of lower dimensionality that can represent these 18 emotions.

Another way to assess the correlation of these dimensions is to compute the principal angle between the dimensions defined by the vectors x̄_j. Recall that these vectors define the dimensions of the 18 emotion categories. The average principal angle between pairs of dimensions (emotion categories) is 88.3° (max: 89.98°, min: 83.97°). Again, this shows that the dimensions defining distinct emotion categories are nearly orthogonal to one another and, thus, uncorrelated.
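The principal angles can be computed directly from the category vectors. A sketch, again using a hypothetical `Xbar` as a stand-in for the vectors x̄_j:

```python
# Sketch of the pairwise principal angles between the 1-D subspaces spanned
# by the category vectors x̄_j (angles near 90 degrees = nearly orthogonal).
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(1)
Xbar = rng.standard_normal((18, 720))   # stand-in for the 18 category vectors x̄_j

angles = []
for j in range(len(Xbar)):
    for k in range(j + 1, len(Xbar)):
        theta = subspace_angles(Xbar[j][:, None], Xbar[k][:, None])[0]  # radians
        angles.append(np.degrees(theta))

angles = np.array(angles)
print(f"mean {angles.mean():.2f} deg, min {angles.min():.2f} deg, max {angles.max():.2f} deg")
```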

Asymmetries

We note that the results in Figure S4 show some minor asymmetries between the left and right side of the face. For example, the expression of happiness shows stronger discrimination on the left-hand side of the mouth in the yellow-blue channel but a preference for the right-hand side of the mouth in the red-green channel. Asymmetries like this one are common in the production of AUs, e.g., under stress and in posed emotions (3). This may be the reason why these asymmetries are also observable in Figure S4. However, one may wonder whether these asymmetries are due to noise in our data. If so, one might expect to obtain higher recognition accuracies by averaging the left and right tessellations. Thus, we repeated Experiments 1 and 2 using the average of the tessellations on the left and right of the face, i.e., making the discriminant maps symmetric. This yielded the following classification accuracies: 48.36% (chance=5.5%) for the k-way classification in Experiment 1 and 74.53% (chance=50%) for the 2-way classifier in Experiment 2. These results are equivalent to those reported in the main paper, 50.15% and 76.77%, suggesting these asymmetries are not due to noise.

Given the above result, we wondered whether using only the tessellations on one side of the face would yield the same results as using both sides. To test this, we ran Experiments 1 and 2 with only the left or only the right tessellations. The results of the k-way classifier are 43.67% (right) and 41.23% (left). The results of the 2-way classifier are 72.61% (right) and 73.38% (left). As can be appreciated in these results, there is some loss in recognition accuracy when using the tessellations of only one side of the face. This suggests that both sides contribute to recognition. However, the contribution of one side of the face given the information of the other side is small.
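A minimal sketch of how the symmetric and single-side feature variants used above could be built from the per-triangle features. The names `mirror`, `feat_cols` and `side_triangles` are hypothetical; the paper's actual indexing of the 120 tessellations is not shown here.

```python
# Sketch of the two feature variants: (a) symmetric maps obtained by averaging
# each triangle with its mirrored counterpart, (b) features from one side only.
import numpy as np

def symmetrize(X, mirror, feat_cols):
    """Average each left-side triangle's features with its mirrored right-side triangle.

    X: (n_images, n_features) array; mirror: dict {left_triangle: right_triangle};
    feat_cols(t): column indices of triangle t within a feature vector.
    """
    Xs = X.copy()
    for left_t, right_t in mirror.items():
        li, ri = feat_cols(left_t), feat_cols(right_t)
        avg = (X[:, li] + X[:, ri]) / 2.0
        Xs[:, li] = avg
        Xs[:, ri] = avg
    return Xs

def one_side(X, side_triangles, feat_cols):
    """Keep only the features of the triangles on one side of the face."""
    cols = np.concatenate([feat_cols(t) for t in side_triangles])
    return X[:, cols]

# Either variant is then fed to the same classifier used in Experiments 1 and 2.
```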

Extended Methods

Databases. We used images of 184 individuals expressing 18 emotion categories, plus neutral. These images are from (5). We also used the images of spontaneous expressions of 27 individuals given in (21). To avoid any bias of our results due to gender, race, ethnicity or skin color, we selected individuals of both genders as well as multiple skin colors, races and ethnicities. A few example images are shown in Figure S1a.

Compound Facial Expressions of Emotion (CFEE) database (5). This database includes images of 21 facial expressions. These images (Figure S1a) have been extensively validated for consistency of production and recognition (7). The emotion categories are: happy, sad, angry, disgusted, surprised, fearful, happily surprised, happily disgusted, sadly fearful, sadly angry, sadly surprised, sadly disgusted, fearfully angry, fearfully surprised, fearfully disgusted, angrily surprised, angrily disgusted, disgustedly surprised, appalled, hatred and awed. The dataset also includes the neutral face. A neutral face is defined as an expression without AU activation, i.e., no muscle movement. A sample facial expression of each emotion category is shown in Figure S1b. Appalled, hatred and awed are not included in the present study because the first two are variations of angrily disgusted and the third is a variant of fearfully surprised.

Data collection in CFEE was carefully executed to make sure the subjects being filmed experienced the expressed emotions as much as possible. To achieve this, subjects were given an example situation in which each specific emotion would be felt. Subjects were asked to place themselves in that situation and think about how they would react in it. There was an emphasis on feeling the emotion, i.e., subjects were asked to experience the emotion as if in the actual situation. Hence, we expect people to respond similarly to how they would in the real world, including vascular changes. This is important in the present study. Nonetheless, we also include images of spontaneous expressions in our analysis. This is described next.

Denver Intensity of Spontaneous Facial Action (DISFA) database (21). 27 subjects (12 women; 2 Hispanic, 3 Asian, 1 African American) were filmed while they viewed videos intended to elicit spontaneous emotion expressions. Sample facial expressions of the emotion categories in DISFA are shown in Figure S1c. Action units were manually coded by expert coders. This allowed us to identify the emotion categories with a sufficient number of sample images. We identified four emotion categories: happy, surprised, sad, and angrily surprised. DISFA also includes a large number of neutral expressions. The images of these spontaneous expressions allowed us to test whether our results extend to non-posed expressions. We find this to be the case: our results on DISFA are equivalent to those of CFEE for the four common expressions.

Emotion Category        CI          Power
Happy                   [.8, .93]   1
Sad                     [.43, .62]  1
Angry                   [.43, .57]  1
Surprised               [.6, .69]   1
Disgusted               [.49, .63]  1
Fearful                 [.27, .45]  1
Happily Surprised       [.7, .8]    1
Happily Disgusted       [.59, .79]  1
Sadly Fearful           [.14, .25]  1
Sadly Angry             [.36, .5]   1
Sadly Surprised         [.34, .55]  1
Sadly Disgusted         [.2, .33]   1
Fearfully Angry         [.26, .41]  1
Fearfully Surprised     [.38, .47]  1
Fearfully Disgusted     [.31, .49]  1
Angrily Surprised       [.35, .49]  1
Angrily Disgusted       [.35, .54]  1
Disgustedly Surprised   [.44, .57]  1

Table S1. Experiment 1 uses a large number of images of facial expressions, including faces of both genders and many races and skin colors. This experiment evaluates a total of 3,312 images. The large number of images is used to increase power, as illustrated by the results of the power analysis given in this table. CI = confidence interval. Chance=5.5%.

Power analysis. We performed a power analysis on the results of Experiments 1, 2, 3 and 4. The results are reported in Tables S1-S6 and demonstrate the robustness of our findings: power is at or near 1 in all experiments and for nearly all expressions. These results are stronger than statistical significance reported as p-values.
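For a single classification accuracy tested against chance, power can be computed with an exact binomial test. The sketch below only illustrates the idea; the paper does not state the exact test it used, and the numbers in the example call are illustrative (one category of Experiment 1, with chance roughly 1/18 and an observed accuracy around 0.5).

```python
# Sketch of a power computation for one accuracy against chance, assuming
# a one-sided exact binomial test (illustrative; not necessarily the paper's test).
from scipy.stats import binom

def binomial_power(n_trials, p_chance, p_true, alpha=0.05):
    # Smallest number of correct responses that rejects H0: p = p_chance.
    k_crit = binom.isf(alpha, n_trials, p_chance) + 1
    # Power = probability of reaching k_crit when the true accuracy is p_true.
    return binom.sf(k_crit - 1, n_trials, p_true)

# e.g. 184 images in one category of Experiment 1, chance = 1/18, accuracy ~0.5
print(binomial_power(n_trials=184, p_chance=1/18, p_true=0.50))
```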


Emotion Category        CI          Power
Happy                   [.89, .9]   1
Sad                     [.75, .76]  1
Angry                   [.81, .81]  1
Surprised               [.84, .85]  1
Disgusted               [.69, .7]   1
Fearful                 [.84, .85]  1
Happily Surprised       [.84, .84]  1
Happily Disgusted       [.76, .77]  1
Sadly Fearful           [.81, .82]  1
Sadly Angry             [.76, .77]  1
Sadly Surprised         [.81, .82]  1
Sadly Disgusted         [.71, .72]  1
Fearfully Angry         [.78, .79]  1
Fearfully Surprised     [.74, .75]  1
Fearfully Disgusted     [.63, .65]  1
Angrily Surprised       [.7, .71]   1
Angrily Disgusted       [.69, .7]   1
Disgustedly Surprised   [.68, .69]  1

Table S2. Experiment 2 uses the same large number of images of facial expressions as Experiment 1. This large dataset provides excellent power. Chance=50%.

Emotion Category    CI            Power
Neutral             [.956, .957]  0.999
Happy               [.984, .985]  0.999
Sad                 [.964, .968]  1
Surprise            [.979, .984]  1
Angrily surprised   [.961, .964]  1

Table S3. Power analysis of the k-way classification of the four emotion categories in DISFA plus the neutral face. Chance=20%.

Emotion Category      CI          Power
Happy                 [.61, .82]  1
Sad                   [.62, .89]  1
Angry                 [.55, .76]  1
Disgust               [.58, .80]  1
Happy Disgusted       [.52, .76]  1
Fearfully surprised   [.56, .83]  1

Table S4. Power analysis of the 2AFC experiment in Experiment 3. Chance=50%.

Emotion Category      CI          Power
Happy                 [.40, .60]  1
Sad                   [.22, .39]  0.95
Angry                 [.25, .46]  1
Disgust               [.18, .30]  1
Happy Disgusted       [.07, .16]  0.59
Fearfully surprised   [.26, .43]  1

Table S5. Power analysis of the 6AFC experiment in Experiment 3. Chance=16%.

Emotion Category      CI          Power
Happy                 [.59, .78]  1
Sad                   [.60, .96]  1
Angry                 [.62, .79]  1
Disgust               [.54, .71]  1
Happy Disgusted       [.52, .73]  0.99
Fearfully surprised   [.53, .69]  1

Table S6. Power analysis of Experiment 4. Chance=50%.


Fig. S1. a. Our database of face images includes 184 people expressing 18 distinct emotion categories plus neutral. Shown here are sample images of the neutral face of 24 individuals as well as their facial expression of happiness. Our database includes images of men and women; Caucasians, Asians, Hispanics and African Americans are well represented, including people with many skin tones, as shown in the sample images. Reprinted with permission from ref. 5. b. Sample images of the facial expressions of emotion and the neutral face used in this paper. From left to right and top to bottom: neutral, happy, sad, angry, surprised, disgusted, fearful, happily surprised, happily disgusted, sadly fearful, sadly angry, sadly surprised, sadly disgusted, fearfully angry, fearfully surprised, fearfully disgusted, angrily surprised, angrily disgusted, disgustedly surprised. c. Sample images of some of the facial expressions in DISFA. Reprinted with permission from ref. 21.


Fig. S2. We define the internal and external components of the face using the 87 anatomical landmark points shown in the left image. These landmark points define the triangular areas shown in the right image. These local regions are obtained using Delaunay triangulation, which yields a total of 142 local areas. Note, however, that six of these triangular areas define the interior of the mouth and sixteen the inside of the eyes. Since blood changes in these areas are not visible, these triangular areas are removed from further consideration. This means the color representation of each facial expression is given by the remaining 120 triangular local areas. This color representation is given as the mean and standard deviation of each of the three channels (L, M, S) in each of these 120 triangular areas. This yields the feature vector x_ij defined in the paper. The final color feature representation is computed as the deviation of x_ij from the color model of the neutral face, x_n. Formally, x̄_ij = x_ij − x_n, where j specifies the emotion category and i the subject's identity.
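A sketch of this per-triangle color representation, under simplifying assumptions: `landmarks` holds the 87 (x, y) points, `img_lms` is the image already converted to LMS color space, and `keep` lists the indices of the 120 retained triangles (all hypothetical names). The Delaunay call below will not necessarily reproduce the paper's exact 142-triangle layout; it only illustrates the computation.

```python
# Sketch of the per-triangle color features: mean and std of L, M, S within
# each retained triangle, followed by the deviation from the neutral face.
import numpy as np
from scipy.spatial import Delaunay
from matplotlib.path import Path

def triangle_color_features(img_lms, landmarks, keep):
    tri = Delaunay(landmarks)                       # triangulate the 87 landmarks
    h, w = img_lms.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    pixels = np.column_stack([xs.ravel(), ys.ravel()])
    feats = []
    for t in keep:                                  # the 120 retained triangles
        poly = Path(landmarks[tri.simplices[t]])
        mask = poly.contains_points(pixels).reshape(h, w)
        region = img_lms[mask]                      # pixels inside this triangle
        feats.extend(region.mean(axis=0))           # mean of L, M, S
        feats.extend(region.std(axis=0))            # std of L, M, S
    return np.asarray(feats)

# Deviation from the subject's neutral face: x̄_ij = x_ij − x_n
# x_expr = triangle_color_features(img_expr, lm_expr, keep)
# x_neut = triangle_color_features(img_neut, lm_neut, keep)
# xbar   = x_expr - x_neut
```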


Fig. S3. Classification accuracies for the four subsets of skin tones of the faces in Experiments 1-4. The first subset (labeled "1" in the plots) includes the individuals with the darkest skin tones. Subsequent subsets include individuals with lighter and lighter skin tones. The fourth subset (labeled "4" in the plots) includes the individuals with the lightest skin tones. a. Average classification accuracy over all emotion categories for each of the four skin-tone subsets in Experiment 1. b. Average classification accuracy over all emotion categories for each of the four skin-tone subsets in Experiment 2. c. Classification accuracy of the facial expression specified on the x-axis for each of the four skin-tone subsets in Experiment 3 (2AFC). d. Average classification accuracy over all emotion categories for each of the four skin-tone subsets in Experiment 3 (2AFC). e. Classification accuracy of the facial expression specified on the x-axis for each of the four skin-tone subsets in Experiment 3 (6AFC). f. Average classification accuracy over all emotion categories for each of the four skin-tone subsets in Experiment 3 (6AFC). g. Classification accuracy of the facial expression specified on the x-axis for each of the four skin-tone subsets in Experiment 4. h. Average classification accuracy over all emotion categories for each of the four skin-tone subsets in Experiment 4.


[Fig. S4a panels, one per emotion category: Happy, Sad, Angry, Surprised, Disgusted, Fearful, Happily Surprised, Happily Disgusted, Sadly Fearful, Sadly Angry, Sadly Surprised, Sadly Disgusted, Fearfully Angry, Fearfully Surprised, Fearfully Disgusted, Angrily Surprised, Angrily Disgusted, Disgustedly Surprised.]


[Fig. S4b panels, three faces per emotion category (two color-channel maps labeled "Color" and one AU/shape map labeled "Shape"): Happy, Sad, Angry, Surprise, Disgust, Fearful, Happily surprised, Happily disgusted, Fearfully sad, Sadly angry, Sadly surprised, Sadly disgusted, Fearfully angry, Fearfully surprised, Fearfully disgusted, Angrily surprised, Angrily disgusted, Disgustedly surprised.]

Fig. S4. a. Shown here are the contributions of the two color channels in opponent color space (yellow-blue, red-green) to the discrimination of each emotion category. Dark blue indicates the area is less relevant for discriminating the target emotion from the rest. Yellows and reds indicate that the color features in that area are more important for discriminating the target emotion. It is important to note that the color features and areas most relevant to discriminate emotion vary across categories. That is, the facial color features diagnostic of an emotion category are different from those defining other emotions. For example, some of the cheek and chin local areas of the yellow-blue channel are very discriminative of happiness but not of other emotions. The above images are given by the eigenvector of LDA associated with the non-zero eigenvalue. Formally, let Σ_X^{-1} S_B V = V Λ be the solution of the LDA classifier, with V = (v_1, v_2, ..., v_b), Λ = diag(λ_1, λ_2, ..., λ_b), and λ_1 > λ_2 = ··· = λ_b = 0. The elements of v_1 = (v_{1,1}, ..., v_{1,502})^T specify the contribution of each of the facial color features in each of the 120 local areas. b. Shown here are the two faces in a plus a face identifying the most discriminant areas of shape changes given by the movements of the facial muscles (AUs). This was done by using the areas and angles of the triangles of the local areas (tessellations) of the face instead of the isoluminant color features and applying the same machine learning method described in the paper. This yielded the results shown on the third (right-most) face image of each panel. Thus, we show the discriminability of each local area of the face based on color alone (left two face images) and based on shape alone (right face image). This is also indicated with the words "color" and "shape" on top of the faces. Note that there are 54 faces, 3 for each of the 18 emotion categories.
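A sketch of the generalized eigenvalue problem Σ_X^{-1} S_B V = V Λ stated in the caption, using standard scatter-matrix definitions and a small ridge term for invertibility (these details are assumptions; the paper's exact estimator may differ).

```python
# Sketch of the LDA discriminant map: solve S_B v = λ Σ_X v and read off v_1.
import numpy as np
from scipy.linalg import eigh

def lda_discriminant_map(X, y):
    """X: (n_images, n_features) color features; y: class labels (e.g. target vs rest)."""
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sx = np.cov(X, rowvar=False) + 1e-6 * np.eye(d)   # Σ_X, lightly regularized
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        diff = (Xc.mean(axis=0) - mu)[:, None]
        Sb += len(Xc) * (diff @ diff.T)               # between-class scatter S_B
    # Generalized symmetric eigenproblem S_B v = λ Σ_X v (ascending eigenvalues).
    evals, evecs = eigh(Sb, Sx)
    v1 = evecs[:, -1]                                 # eigenvector of the largest λ
    return evals[::-1], v1                            # |v1| weights each color feature

# For a two-class (target vs rest) problem only one eigenvalue is non-zero,
# matching λ_1 > λ_2 = ··· = 0 in the caption (up to the ridge term).
```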


Fig. S5. Plot of eigenvalues: x-axis specifies the eigenvalue number, y-axis the percentage of variance.

Fig. S6. Experiment 3 consists of 6 blocks, as shown in the top left image. Each block tests the visual perception of a target emotion j. The target emotion is indicated to subjects at the beginning of the block in text form and by showing a sample image of a facial expression of that emotion, I_ij. Each block includes 20 trials (top right image). Each trial is a two-alternative forced choice experiment. A sample trial is illustrated in the bottom image. The trial starts with a 500 ms blank screen, followed by a 500 ms fixation cross and, then, the image pair I_ij, I_ik, j ≠ k. Participants indicate whether the left or right image expresses emotion j more clearly. Selection is done by keypress.
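The trial structure (blank screen, fixation, image pair, keypress) could be implemented as in the sketch below, written here with PsychoPy as an assumed stimulus-presentation library; the paper does not state which software was used, and the window geometry and image paths are placeholders.

```python
# Sketch of one 2AFC trial: 500 ms blank, 500 ms fixation, image pair, keypress.
# PsychoPy is an assumed choice of presentation software, not the paper's.
from psychopy import visual, core, event

win = visual.Window(size=(1280, 720), color='grey', units='pix')
fixation = visual.TextStim(win, text='+', height=40)
left_stim = visual.ImageStim(win, pos=(-300, 0))
right_stim = visual.ImageStim(win, pos=(300, 0))

def run_trial(img_left, img_right):
    win.flip()                                    # blank screen
    core.wait(0.5)                                # 500 ms
    fixation.draw()
    win.flip()                                    # fixation cross
    core.wait(0.5)                                # 500 ms
    left_stim.image, right_stim.image = img_left, img_right
    left_stim.draw()
    right_stim.draw()
    win.flip()                                    # image pair I_ij, I_ik
    keys = event.waitKeys(keyList=['left', 'right'])
    return keys[0]                                # which image expressed emotion j more clearly

# choice = run_trial('pair_left.png', 'pair_right.png')   # hypothetical image files
```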


[Fig. S7 image grid. Row labels (AU category of the far-left image): Happy, Sad, Angry, Disgusted, Happily disgusted, Fearfully surprised. Within each row, the remaining images carry the color models of the other five categories, as labeled under each image.]

Fig. S7. The first column shows the images with the facial movements (AUs) and color model of the emotion specified in the text under the image, I_ij^+. Each row includes images with the facial movements (AUs) of the emotion category specified by the text under the far-left image and the color model given in the text under each image, I_ij^k. Colors in these images have been enhanced to more clearly indicate differences between emotions. Images used in Experiment 4 (with α = 1) were shown in Figure 3a. Images are available from the authors.


Fig. S8. Experiment 4 has 6 blocks and 60 trials per block. Each block tests the visual perception of a target emotion j. The target emotion is indicated to subjects at the beginning of the block in text form and by showing a sample image of a facial expression of that emotion, I_ij. Each trial is a two-alternative forced choice experiment. A sample trial is illustrated in the bottom image. The trial starts with a 500 ms blank screen, followed by a 500 ms fixation cross and, then, either the image pair I_ij, I_ij^+ or I_ij, I_ij^k. Participants indicate whether the left or right image expresses emotion j more clearly. Selection is done by keypress.
