Human Cognition: Decoding Perceived, Attended, Imagined Acoustic Events and Human-Robot Interfaces

Uploaded by mizell, 23-Feb-2016

Page 1: Human Cognition: Decoding Perceived, Attended, Imagined Acoustic Events and Human-Robot Interfaces

Human Cognition: Decoding Perceived, Attended, Imagined Acoustic Events and Human-Robot Interfaces

Page 2:

The Team

• Adriano Claro Monteiro
• Alain de Cheveigné
• Anahita Mehta
• Byron Galbraith
• Dimitra Emmanouilidou
• Edmund Lalor
• Deniz Erdogmus
• Jim O’Sullivan
• Mehmet Ozdas
• Lakshmi Krishnan
• Malcolm Slaney
• Mike Crosse
• Nima Mesgarani
• Jose L “Pepe” Contreras-Vidal
• Shihab Shamma
• Thusitha Chandrapala

Page 3:

The Goal

• To determine a reliable measure of imagined audition using electroencephalography (EEG).

• To use this measure to communicate.

Page 4:

What types of imagined audition?

• Speech: short (~3-4 s) sentences
– “The whole maritime population of Europe and America.”
– “Twinkle, twinkle, little star.”
– “London Bridge is falling down, falling down, falling down.”

• Music: short (~3-4 s) phrases
– Imperial March from Star Wars.
– Simple sequence of tones.

• Steady-State Auditory Stimulation: 20 s trials
– Broadband signal amplitude modulated at 4 or 6 Hz.

Page 5:

The Experiment

• 64-channel EEG system (Brain Vision LLC – thanks!)

• 500 samples/s

• Each “trial” consisted of the presentation of the actual auditory stimulus (“perceived” condition), followed 2 s later by the subject imagining hearing that stimulus again (“imagined” condition).

Page 6:

The Experiment

• Careful control of experimental timing.

• Sequence: perceived ... 2 s ... imagined ... 2 s, repeated 5 times ... break ... next stimulus.

• Countdown cue: 4, 3, 2, 1, +

Page 7:

Data Analysis - Preprocessing

• Filtering

• Independent Component Analysis (ICA)

• Time-Shift Denoising Source Separation (DSS) – looks for reproducibility over stimulus repetitions
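DSS is the least familiar of these steps, so here is a minimal numpy sketch of the idea (an assumed, time-shift-free variant, not the team's implementation): whiten by the total covariance, then diagonalize the covariance of the trial average, so the leading components are the ones most reproducible across stimulus repetitions.

```python
import numpy as np

def dss(trials):
    """Denoising source separation sketch (assumed variant, no time
    shifts): find spatial filters that maximize trial-to-trial
    reproducibility via a whitened eigendecomposition of the
    trial-averaged covariance.

    trials : array (n_trials, n_channels, n_samples)
    returns: filter matrix, most reproducible component first."""
    X = trials - trials.mean(axis=-1, keepdims=True)       # de-mean
    C0 = sum(x @ x.T for x in X) / len(X)                  # total covariance
    avg = X.mean(axis=0)                                   # trial average
    C1 = avg @ avg.T                                       # reproducible part
    d, V = np.linalg.eigh(C0)
    keep = d > 1e-10 * d.max()
    W = V[:, keep] / np.sqrt(d[keep])                      # whitening matrix
    d2, U = np.linalg.eigh(W.T @ C1 @ W)
    order = np.argsort(d2)[::-1]                           # best first
    return (W @ U[:, order]).T

# Toy check: 20 trials of a fixed 5 Hz source mixed into 8 noisy channels.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
source = np.sin(2 * np.pi * 5 * t)
mix = rng.standard_normal((8, 1))
trials = np.stack([mix * source + 0.5 * rng.standard_normal((8, 500))
                   for _ in range(20)])
F = dss(trials)
comp0 = F[0] @ trials.mean(axis=0)        # top component tracks the source
print(abs(np.corrcoef(comp0, source)[0, 1]) > 0.8)
```

The key design point is the contrast between the two covariances: anything that averages away over repetitions contributes to C0 but not C1, so it is down-weighted.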

Page 8:

Data Analysis: Hypothesis-driven.

• The hypothesis:
– EEG recorded while people listen to (actual) speech varies in a way that relates to the amplitude envelope of the presented (actual) speech.
– EEG recorded while people IMAGINE speech will vary in a way that relates to the amplitude envelope of the IMAGINED speech.

Page 9:

Data Analysis: Hypothesis-driven.

• Phase consistency over trials...

• EEG from the same sentence imagined over several trials should show consistent phase variations.

• EEG from different imagined sentences should not show consistent phase variations.

Page 10:

Data Analysis: Hypothesis-driven.

[Figure: inter-trial phase consistency. Actual speech: consistency in the theta (4-8 Hz) band. Imagined speech: consistency in the alpha (8-14 Hz) band.]

Page 11:

Data Analysis: Hypothesis-driven.

Page 12:

Data Analysis: Hypothesis-driven.

• Red line – perceived music
• Green line – imagined music

Page 13:

Data Analysis - Decoding

Page 14:

Data Analysis - Decoding

[Figure: original and reconstructed stimulus envelopes for two sentences. “London Bridge”: r = 0.30, p = 3e-5. “Twinkle Twinkle”: r = 0.19, p = 0.01.]
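One standard way to attempt such envelope reconstruction is a backward model: ridge regression from time-lagged EEG channels onto the stimulus envelope. The sketch below is an assumed implementation on toy data (the `lagged` and `ridge_decode` helpers are hypothetical), in the spirit of mTRF-style decoders rather than the team's exact pipeline:

```python
import numpy as np

def lagged(eeg, lags):
    """Stack time-lagged copies of every channel:
    (channels, T) -> (T, channels * n_lags)."""
    n_ch, T = eeg.shape
    X = np.zeros((T, n_ch * len(lags)))
    for j, L in enumerate(lags):
        shifted = np.roll(eeg, L, axis=1)
        shifted[:, :L] = 0                  # zero the wrapped-around samples
        X[:, j * n_ch:(j + 1) * n_ch] = shifted.T
    return X

def ridge_decode(eeg, envelope, lags, lam=1.0):
    """Backward-model sketch: reconstruct the stimulus envelope from
    multichannel EEG with ridge regression over time-lagged channels."""
    X = lagged(eeg, lags)
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)
    return X @ w

# Toy data: each "EEG channel" is a delayed, noisy copy of the envelope.
rng = np.random.default_rng(2)
T, n_ch = 2000, 8
envelope = np.convolve(rng.standard_normal(T), np.ones(25) / 5.0, mode="same")
eeg = np.stack([np.roll(envelope, -d) + 0.3 * rng.standard_normal(T)
                for d in range(n_ch)])
recon = ridge_decode(eeg, envelope, lags=range(8))
r = np.corrcoef(recon, envelope)[0, 1]
print(f"reconstruction r = {r:.2f}")
```

The reported r values are then correlations between the reconstructed and original envelopes, with the ridge penalty `lam` guarding against overfitting the many lagged-channel regressors.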

Page 15:

Data Analysis - SSAEP

Page 16:

Data Analysis - SSAEP

[Figure: response spectra at the 4 Hz and 6 Hz stimulation rates, perceived vs. imagined conditions.]
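Deciding which modulation rate (4 or 6 Hz) drove a steady-state response can be reduced to comparing spectral power at the two candidate frequencies; `ssaep_rate` below is a minimal assumed detector on synthetic data, not the team's analysis code:

```python
import numpy as np

def ssaep_rate(sig, fs, rates=(4.0, 6.0)):
    """Pick the candidate stimulation rate whose FFT bin carries the
    most amplitude — a minimal SSAEP detector sketch for 20 s trials."""
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
    amp = [spec[np.argmin(np.abs(freqs - r))] for r in rates]
    return rates[int(np.argmax(amp))]

# Synthetic 20 s trial: a 4 Hz response buried in noise.
fs = 500
t = np.arange(20 * fs) / fs
rng = np.random.default_rng(3)
trial = np.sin(2 * np.pi * 4 * t) + 2.0 * rng.standard_normal(t.size)
print(ssaep_rate(trial, fs))   # prints 4.0
```

The 20 s trial length matters: it gives 0.05 Hz frequency resolution, so the 4 and 6 Hz bins are well separated and the steady-state peak accumulates over many cycles.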

Page 17:

Data Analysis

• Data Mining/Machine Learning Approaches:

Page 18:

Data Analysis

• Data Mining/Machine Learning Approaches:

Page 19:

SVM Classifier Input

EEG data (channels × time): $\mathrm{EEG} = \begin{bmatrix} e(t)_1 \\ \vdots \\ e(t)_{64} \end{bmatrix}$

Concatenate channels: $\mathrm{EEG} = \begin{bmatrix} e(t)_1 & \dots & e(t)_{64} \end{bmatrix}$

Group N trials: $X = \begin{bmatrix} \mathrm{EEG}_1 \\ \vdots \\ \mathrm{EEG}_N \end{bmatrix}$

Input covariance matrix: $C_X = X X^{T}$

[Figure: example class labels vs. predicted labels for the SVM.]
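A minimal sketch of this pipeline, assuming per-trial covariance features and synthetic data in place of the recordings (the `fake_trial` helper is hypothetical):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)

def fake_trial(cls, n_ch=8, T=500):
    """Synthetic stand-in for one EEG trial (hypothetical data, not the
    workshop recordings): the two classes differ in channel covariance."""
    x = rng.standard_normal((n_ch, T))
    if cls == 1:
        x[0] += x[1]                     # class 1 couples channels 0 and 1
    return x

def cov_features(x):
    """As on the slide: form the covariance C_X = X X^T, then flatten
    its upper triangle into the SVM input vector."""
    C = (x @ x.T) / x.shape[1]
    return C[np.triu_indices_from(C)]

labels = np.array([0, 1] * 40)
X = np.array([cov_features(fake_trial(c)) for c in labels])
clf = SVC(kernel="linear").fit(X[:60], labels[:60])
print(f"held-out accuracy: {clf.score(X[60:], labels[60:]):.2f}")
```

Covariance features are a common choice here because class-dependent coupling between channels shows up directly in the off-diagonal entries, which a linear SVM can weight.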

Page 20:

SVM Classifier Results

Decoding imagined speech and music:

• Mean DA = 90%
• Mean DA = 87%

Page 21:

DCT Processing Chain

• Raw EEG signal (500 Hz data)
• DSS output (look for repeatability)
• DCT output (reduce dimensionality)

[Figure: mean inputs 1 and 2 (64 channels × time), mean DSS results for outputs 1 and 2, and 10-coefficient DCT models for classes 1 and 2.]
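The dimensionality-reduction step of the chain can be sketched as follows: for a smooth, repeatable waveform (like a DSS output), the first few DCT coefficients capture nearly all the energy, which is why truncating to ~10 coefficients is a reasonable feature set. A toy example under those assumptions:

```python
import numpy as np
from scipy.fft import dct

def dct_features(component, k=10):
    """Keep only the first k DCT-II coefficients of a (DSS-denoised)
    component — the dimensionality-reduction step of the chain above.
    k = 10 matches the 10-coefficient class models on the slide."""
    return dct(component, norm="ortho")[:k]

# A slow, trial-repeatable waveform, standing in for a DSS component.
fs = 500
t = np.arange(2 * fs) / fs
component = np.cos(2 * np.pi * 1.5 * t)
coeffs = dct_features(component)
full = dct(component, norm="ortho")
energy_kept = np.sum(coeffs ** 2) / np.sum(full ** 2)
print(f"energy in first 10 coefficients: {energy_kept:.3f}")
```

With `norm="ortho"` the DCT is energy-preserving (Parseval), so the ratio above directly measures how much of the signal survives truncation.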

Page 22:

DCT Classification Performance

[Figure: classification accuracy (percentage accuracy on the y-axis).]

Page 23:

Data Analysis

• Data Mining/Machine Learning Approaches:
– Linear Discriminant Analysis on different frequency bands
• Music vs Speech
• Speech 1 vs Speech 2
• Music 1 vs Music 2
• Speech vs Rest
• Music vs Rest
– Results: ~50-66%
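A sketch of this approach with scikit-learn's LDA on band-power features, using synthetic trials in place of the recordings (the class-specific frequencies and the `fake_trial` helper are hypothetical):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(5)
fs = 500
bands = [(4, 8), (8, 14), (14, 30)]          # theta, alpha, beta (Hz)

def band_power(x, band):
    """Mean FFT power of one channel inside `band` (Hz)."""
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    spec = np.abs(np.fft.rfft(x)) ** 2
    sel = (freqs >= band[0]) & (freqs < band[1])
    return spec[sel].mean()

def fake_trial(cls, T=1000):
    """Toy stand-in for the recordings: class 0 carries extra 6 Hz
    (theta) activity, class 1 extra 10 Hz (alpha) activity."""
    t = np.arange(T) / fs
    f = 6.0 if cls == 0 else 10.0
    return np.sin(2 * np.pi * f * t) + rng.standard_normal(T)

labels = np.array([0, 1] * 40)
X = []
for c in labels:                  # one trial -> one row of band powers
    x = fake_trial(c)
    X.append([band_power(x, b) for b in bands])
X = np.array(X)
lda = LinearDiscriminantAnalysis().fit(X[:60], labels[:60])
print(f"held-out accuracy: {lda.score(X[60:], labels[60:]):.2f}")
```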

Page 24:

Summary

• Both hypothesis-driven and machine-learning approaches indicate that it is possible to decode/classify imagined audition.

• Many very encouraging results that align with our original hypothesis.

• More data needed!!

• In a controlled environment!!

• To be continued...