

Ladder Networks for Emotion Recognition

Srinivas Parthasarathy, Carlos Busso

Erik Jonsson School of Engineering & Computer Science, The University of Texas at Dallas, Richardson, Texas 75080, USA

This work was funded by NSF CAREER award IIS-1453781.

Motivation

Background
- Conventionally, emotion attributes are modeled individually
- Jointly learning multiple attributes regularizes models better, improving performance
- Requires labeled data, which is expensive to collect

Our Work
- Regularization through unsupervised auxiliary tasks
- Reconstruction of the input and intermediate features
- Ladder networks learn invariant features that are useful for the discriminative task

Encoder
- Fully connected network
- Gaussian noise added to each layer, which provides further regularization
- Representation from the final layer of the noisy encoder, z̃^(L), used for the supervised task
- A clean copy of the encoder, with activations z^(l), used as the reference for denoising (see the sketch below)
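A minimal sketch of the corrupted/clean encoder pair described above, assuming TensorFlow; `noisy_encoder` and all names are illustrative, not the authors' code:

```python
import tensorflow as tf
from tensorflow.keras import layers

def noisy_encoder(x, dense_layers, sigma):
    """Corrupted encoder path: Gaussian noise N(0, sigma^2) is added to the
    input and after every layer. Running the same (shared) layers with
    sigma = 0 gives the clean copy whose activations z^(l) serve as the
    denoising references."""
    zs = []
    h = x + tf.random.normal(tf.shape(x), stddev=sigma)
    for dense in dense_layers:
        pre = dense(h)
        z = pre + tf.random.normal(tf.shape(pre), stddev=sigma)  # noisy z~(l)
        zs.append(z)
        h = tf.nn.relu(z)
    return h, zs  # top-layer representation and per-layer activations

shared = [layers.Dense(256), layers.Dense(256)]           # weights shared by both paths
x = tf.random.normal([4, 6373])                           # dummy batch of feature vectors
_, noisy_zs = noisy_encoder(x, shared, sigma=0.3 ** 0.5)  # noise variance 0.3
_, clean_zs = noisy_encoder(x, shared, sigma=0.0)         # clean references z^(l)
```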

Decoder
- Denoising: reconstruct the latent representation, ẑ^(l), from the noisy z̃^(l) using the denoising function g(·,·)
- The denoising function g(·,·) combines top-down information from the decoder with a lateral connection from the corresponding encoder layer
- Lower layers can be used to reconstruct the data
- Higher layers learn abstract features for the discriminative task
- g(·,·) is modeled by an MLP with inputs [u, z̃, u ⊙ z̃], where u is the projection of the layer above (see the sketch below)
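A sketch of one such denoising function, assuming TensorFlow; the combinator is a small MLP over the inputs listed above, and the class name is illustrative:

```python
import tensorflow as tf
from tensorflow.keras import layers

class DenoisingMLP(layers.Layer):
    """g(.,.): combines the top-down projection u from the decoder with the
    lateral noisy activation z~ from the matching encoder layer, producing
    the reconstruction z-hat."""
    def __init__(self, units, hidden_units=256):
        super().__init__()
        self.hidden = layers.Dense(hidden_units, activation='relu')
        self.out = layers.Dense(units, activation='linear')

    def call(self, u, z_tilde):
        feats = tf.concat([u, z_tilde, u * z_tilde], axis=-1)  # [u, z~, u*z~]
        return self.out(self.hidden(feats))

g = DenoisingMLP(units=256)
z_hat = g(tf.random.normal([4, 256]), tf.random.normal([4, 256]))
```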

Ladder Network with Multi-task Learning
- Use both unsupervised and supervised auxiliary tasks
- Jointly predict activation (arousal), valence, and dominance (a cost sketch follows below)
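A one-line sketch of the joint cost (Equation 2, reconstructed later in the document); names are illustrative:

```python
def mtl_cost(c_aro, c_val, c_dom, alpha, beta):
    # C_MTL = alpha*C_aro + beta*C_val + (1 - alpha - beta)*C_dom, with
    # alpha and beta tuned on the validation set for each target attribute.
    return alpha * c_aro + beta * c_val + (1.0 - alpha - beta) * c_dom
```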

Implementation
- Cost and metric: concordance correlation coefficient (CCC; defined below)
- Encoder and decoder: 256 nodes per layer
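For reference, CCC between predictions and labels follows Lin's standard definition (a known formula, not restated on the poster):

```latex
\mathrm{CCC} = \frac{2\,\rho\,\sigma_x \sigma_y}{\sigma_x^{2} + \sigma_y^{2} + (\mu_x - \mu_y)^{2}}
```

where ρ is the Pearson correlation between the two sequences, and μ and σ² are their means and variances.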

Evaluation and Conclusions

Table 1: CCC values for the validation and test sets. The evaluations include 256 nodes per layer for arousal (Aro), valence (Val), and dominance (Dom). Bold values indicate the model(s) with the best performance per attribute (multiple cases are highlighted when differences are statistically significant). * indicates significant improvements of the ladder networks compared to all the baselines.

Task        |            Validation                         |               Test
            | Aro            Val             Dom            | Aro            Val             Dom
------------|-----------------------------------------------|-----------------------------------------------
Autoencoder | 0.358 ± 0.069  0.136 ± 0.141   0.305 ± 0.139  | 0.272 ± 0.136  -0.006 ± 0.012  0.284 ± 0.148
STL         | 0.778 ± 0.004  0.443 ± 0.008   0.722 ± 0.004  | 0.737 ± 0.008   0.292 ± 0.007  0.670 ± 0.007
MTL         | 0.791 ± 0.003  0.469 ± 0.010   0.735 ± 0.003  | 0.745 ± 0.008   0.285 ± 0.007  0.676 ± 0.006
Ladder+STL  | 0.801 ± 0.002* 0.443 ± 0.007   0.742 ± 0.002* | 0.765 ± 0.002*  0.294 ± 0.007  0.687 ± 0.003*
Ladder+MTL  | 0.803 ± 0.002* 0.458 ± 0.004   0.746 ± 0.001* | 0.761 ± 0.002*  0.289 ± 0.008  0.689 ± 0.002*

(i.e., STL). Figure 1(a) shows the second baseline, which learns feature representations using an autoencoder in an unsupervised fashion. The feature representations learned are then used as input for the supervised task. Unlike the ladder networks, the feature representation is independently learned from the supervised task. The objective of the autoencoder is to learn hidden representations that are useful for denoising the noise added to the features. All weights and activations of the encoder are frozen, and an output layer is then added on the top layer of the encoder for the prediction task. Figure 1(c) shows the third baseline, which uses MTL to jointly predict the three emotional attributes. Following the work of Parthasarathy and Busso [10], we train three MTL networks, one for each target emotional attribute, optimizing α and β in Equation 2 to maximize the performance for each attribute. By learning all three tasks, we obtain feature representations that generalize well across different conditions.

4.3. Implementation Details

All the deep neural networks in this study have two hidden layers with 256 nodes per layer. We use rectified linear unit (ReLU) activation at the hidden layers and a linear activation for the output layer. We optimize the networks using NADAM with a learning rate of 5e-5. The networks are implemented with dropout p = 0.5 at the input and first hidden layer. Following Trigeorgis et al. [28], we use the concordance correlation coefficient (CCC) as the loss function for training the models. We also use CCC to evaluate the models. All the hyper-parameters are set maximizing performance on the validation set, including the parameters for MTL (α, β). We train all the networks for 50 epochs with early stopping based on the results observed in the validation set. The best model on the validation set is then evaluated on the test set. All the models are trained 10 times with different random initializations, reporting the mean CCC.
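A minimal sketch of this baseline setup in Keras, assuming the hyper-parameters above; the loss implements 1 − CCC from the definition given earlier, and none of this is the authors' released code:

```python
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

def ccc_loss(y_true, y_pred):
    """1 - CCC, so that minimizing the loss maximizes concordance."""
    mu_t = tf.reduce_mean(y_true)
    mu_p = tf.reduce_mean(y_pred)
    var_t = tf.reduce_mean(tf.square(y_true - mu_t))
    var_p = tf.reduce_mean(tf.square(y_pred - mu_p))
    cov = tf.reduce_mean((y_true - mu_t) * (y_pred - mu_p))
    ccc = 2.0 * cov / (var_t + var_p + tf.square(mu_t - mu_p) + 1e-8)
    return 1.0 - ccc

# Two hidden layers of 256 ReLU nodes, linear output, dropout 0.5 at the
# input and first hidden layer, NADAM with learning rate 5e-5 (STL version).
model = models.Sequential([
    layers.Input(shape=(6373,)),
    layers.Dropout(0.5),
    layers.Dense(256, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(256, activation='relu'),
    layers.Dense(1, activation='linear'),
])
model.compile(optimizer=optimizers.Nadam(learning_rate=5e-5), loss=ccc_loss)
```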

For the ladder network, we add noise with variance σ² = 0.3 to each layer of the encoder. We conducted a grid search for the weights of the reconstruction loss with values λ_l ∈ {0.1, 1, 10, 100}. A value of λ_l = 1 gives the best result on the validation set. We use the mean squared error as the reconstruction cost and a dropout with probability p = 0.1 at the input layer and the first hidden layer. Notice that the original paper implementing the ladder network did not use dropout. However, the high dimensionality of our feature vector and the violation of the independence assumption in our features motivate us to further regularize the network by adding dropout.
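Under these choices (MSE reconstruction, λ_l = 1), the total cost C = C_c + Σ_l λ_l C_d^(l) could be assembled as in this illustrative sketch:

```python
import tensorflow as tf

def ladder_total_cost(c_supervised, clean_zs, denoised_zs, lambdas):
    """C = C_c + sum_l lambda_l * C_d^(l), where C_d^(l) is the mean squared
    error between the clean activation z^(l) and its reconstruction z-hat^(l)."""
    cost = c_supervised
    for z, z_hat, lam in zip(clean_zs, denoised_zs, lambdas):
        cost += lam * tf.reduce_mean(tf.square(z - z_hat))
    return cost
```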

5. Results

Table 1 illustrates the mean CCC and standard deviation of the proposed architectures and the baselines for the validation and test sets. We analyze the performance of the various models using the one-tailed t-test over the 10 trials, asserting significance if the p-value < 0.05. We highlight with an asterisk when the ladder models perform better than the baselines. First, we note that the results on the validation set are significantly higher than the performance on the test set for all emotional attributes and all models. Comparing the performance of the different architectures on the test set, we note that all the networks perform better than the autoencoder baseline. This result shows that the feature representation learned by the autoencoder does not fit well for the primary regression task. Next, amongst the baselines, we see that the MTL models perform the best in all cases, except for valence on the test set. This result further confirms the benefits of supervised auxiliary tasks through the joint learning of multiple emotional attributes, shown in our previous study [10]. Observing the proposed architectures, we note that in almost all cases the ladder networks perform significantly better than the baselines and give the best performance. For valence, while MTL performs better than all other methods on the validation set, this does not translate to the test set, where the Ladder+STL architecture has the best performance. The role of regularization for valence is an interesting topic that requires further study. Amongst the proposed architectures, Ladder+MTL performs better than Ladder+STL in many cases. The results show the benefits of combining both unsupervised and supervised auxiliary tasks for predicting emotional attributes. Overall, the unsupervised auxiliary tasks greatly help to regularize our network, improving the predictions and achieving state-of-the-art performance on the MSP-Podcast corpus.
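The significance test can be reproduced along these lines, assuming SciPy and an independent two-sample test (the text does not specify the exact variant); `ccc_model` and `ccc_baseline` would hold the 10 per-trial CCC scores:

```python
from scipy import stats

def significantly_better(ccc_model, ccc_baseline, alpha=0.05):
    """One-tailed t-test over the trials: is the model's mean CCC
    significantly higher than the baseline's (p < 0.05)?"""
    t, p_two_sided = stats.ttest_ind(ccc_model, ccc_baseline)
    return t > 0 and p_two_sided / 2.0 < alpha
```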

6. Conclusions

This work proposed ladder networks with multi-task learning to predict emotional attributes, achieving state-of-the-art performance on the MSP-Podcast corpus. We illustrated the benefits of using auxiliary tasks to regularize the network by combining unsupervised auxiliary tasks (ladder network) and supervised auxiliary tasks (multi-task learning). The emotional models generalize better across various conditions, providing significantly better performance than the baseline systems.

There are many future directions for this work. First, the true potential of using unsupervised auxiliary tasks is in harnessing the almost unlimited amount of unlabeled data. We can extend the framework to work in a semi-supervised manner, where we combine a large number of unlabeled samples with fewer emotionally labeled samples, providing a more powerful feature representation. Second, the feature representations produced by the ladder network can be further studied and used as general input features for other emotion recognition problems, such as the classification of emotional categories. Finally, the auxiliary tasks could be extended to cover multiple modalities, generalizing the models even more. The promising results in this study suggest that these extensions can lead to important improvements in emotion prediction performance.


Conclusions
- All models perform better than the conventional autoencoder
- MTL > STL: benefits of the supervised auxiliary tasks
- Ladder networks > supervised-only tasks
- Ladder+MTL: best performance for several conditions
- Demonstrates the power of combining unsupervised and supervised auxiliary tasks

[Figure 1: Architecture diagrams for (a) Autoencoder, (b) STL, (c) MTL, (d) Ladder-STL, and (e) Ladder-MTL. Each network maps the input x through hidden representations z^(1), z^(2), z^(3) (encoder layers f^(1), f^(2), f^(3)) to the output(s) y. The ladder variants add Gaussian noise N(0, σ²) at every encoder layer and use denoising functions g^(l)(·,·), mapping [u, z̃] to ẑ, with layer-wise reconstruction costs C_d^(0), ..., C_d^(3).]

The total cost is the supervised cost plus the weighted layer-wise reconstruction costs:

    C = C_c + Σ_l λ_l C_d^(l)

The multi-task cost (Equation 2) is a weighted combination of the per-attribute costs:

    C_MTL = α C_aro + β C_val + (1 − α − β) C_dom

Database and Features

The MSP-Podcast Corpus
- Multiple sentences from speakers appearing in various podcasts (2.75s – 11s)
- Annotated on Amazon Mechanical Turk (arousal, valence, dominance)
- V1.1: 22,630 utterances with emotional labels (38h56m)
- Test set: 7,181 segments from 50 speakers
- Dev set: 2,614 segments from 20 speakers
- Train set: 12,835 segments

Acoustic Features
- Interspeech 2013 Computational Paralinguistics Challenge feature set (6,373 features)
