
Proposal of a Brain Computer Interface to Command an Autonomous Car

Javier Castillo∗‡, Sandra Müller∗, Eduardo Caicedo‡, Alberto Ferreira De Souza†, and Teodiano Bastos∗

∗Post-Graduate Program of Electrical Engineering, Federal University of Espirito Santo, Av. Fernando Ferrari, 514, Vitoria, Brazil
†Post-Graduate Program of Informatics, Federal University of Espirito Santo, Av. Fernando Ferrari, 514, Vitoria, Brazil
‡Post-Graduate Program of Electrical Engineering, University of Valle, Av. Paso Ancho 13-00, Cali, Valle, Colombia

Email: [email protected]

Abstract—This paper presents a proposal of a Brain Computer Interface (BCI) to command an autonomous car. The BCI is based on the paradigms of visual evoked potentials (VEP) and event-related desynchronization (ERD). A menu interface is presented to the user with disabilities so that he/she can choose a destination for the autonomous car. The final destination is selected using visual stimuli flickering at different frequencies, detected by analyzing the brain signals present at the occipital region of the user's scalp. The power spectrum of these signals is computed in order to identify the frequency of the visual stimulation. Preliminary tests performed with healthy users and people with disabilities reached an average success rate above 90%. The proposed system is also capable of turning the stimuli on/off, thus reducing the fatigue associated with sustained visual stimulation.

Index Terms—Brain Computer Interface, SSVEP, Autonomous Car, ERD

I. INTRODUCTION

A Brain-computer interface (BCI) is a non-muscular communication channel for the transmission of signals from the brain to the outside world. A BCI extracts mental decision control signals from brain bioelectric activity, allowing the user to interact with the interface itself [1]. A BCI operates based on different paradigms of electroencephalographic (EEG) signals, such as motor imagery, mu rhythm variation, or evoked potentials. Among the evoked potentials are Steady-State Visual Evoked Potentials (SSVEP), which are associated with the frequencies of visual stimuli observed by the user. A BCI based on these potentials is called an SSVEP-BCI [2].

Commanding an autonomous car requires the generation of relevant information about the destination or route to follow. Brain signals provide a new communication channel through which it is possible to generate commands for such an autonomous car. The car used in this work is an unmanned autonomous car with a high level of autonomy, shown in Fig. 1. Its main features are the abilities to localize itself, to orient, and to plan paths using laser, radar, GPS and computer vision. Advanced control systems interpret this information to identify the appropriate path, obstacles, and relevant signals. However, this navigation capability requires a reactive behavior, with the ability to interpret information about the environment and to recover from errors [3].

Fig. 1: Autonomous car of the Federal University of Espirito Santo - UFES.

II. METHODS

This work adapted and optimized the SSVEP-BCI previously used by people with disabilities for navigating a robotic wheelchair [2]. As a way of validating the system developed here, our proposal is to take a car tour of the campus of the Federal University of Espirito Santo (UFES), with the vehicle completely commanded by a person with disabilities, using his/her brain signals.

In this study, an EEG signal database of 19 people aged 28 ± 7 years was used, 6 of whom had motor disabilities [2]. An FPGA was used for stimuli generation, with an accuracy of ±0.01 Hz [2]. The signal-processing algorithms were implemented in MATLAB R2012b on a PC with a 2.4 GHz processor and 2 GB of RAM. The signal recording protocol consisted of presenting the visual stimuli for 2 minutes, with intervals of 30 s, at frequencies of 5.6, 6.4, 6.9 and 8.0 Hz. The stimulus was presented to the user at a distance of 70 cm. The following methods were evaluated in this work:

A. Traditional Power Spectral Density Analysis (Traditional-PSDA)

The PSDA method is often used for SSVEP detection [4] and is related to signal processing in the frequency domain. The power spectral density analysis is implemented by searching the power densities around the stimulus frequencies, whose signal-to-noise ratio (SNR) is given by Equation 1:

S_k = 10\log_{10}\left(\frac{n\,P(f_k)}{\sum_{m=1}^{n/2}\left[P(f_k + m f_{res}) + P(f_k - m f_{res})\right]}\right), \quad (1)

where n is the number of samples near the stimulus frequency, P(f_k) is the power density at the stimulus frequency, and f_{res} is the frequency resolution, which depends on the number of samples used in the Fourier transform. P(f_k + m f_{res}) and P(f_k - m f_{res}) are the power densities around the target frequency.
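The paper's algorithms were implemented in MATLAB; as a rough illustration of Eq. (1), the following numpy sketch computes the SNR score at each candidate stimulus frequency (the sampling rate, window length and the 8-bin neighbourhood are arbitrary choices for the example, not taken from the paper):

```python
import numpy as np

def psda_snr(eeg, fs, f_target, n_side=8):
    """SNR (dB) of the PSD at f_target against its n_side neighbouring
    bins on each side, as in Eq. (1)."""
    n_fft = len(eeg)
    psd = np.abs(np.fft.rfft(eeg)) ** 2 / n_fft
    f_res = fs / n_fft                        # frequency resolution
    k = int(round(f_target / f_res))          # bin of the stimulus frequency
    neigh = np.r_[psd[k - n_side:k], psd[k + 1:k + n_side + 1]]
    return 10 * np.log10(2 * n_side * psd[k] / neigh.sum())

# A simulated 5.6 Hz "SSVEP" in noise: the SNR peaks at the true stimulus.
fs = 256
t = np.arange(5 * fs) / fs                    # 5 s window -> 0.2 Hz resolution
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 5.6 * t) + 0.5 * rng.standard_normal(t.size)
scores = {f: psda_snr(sig, fs, f) for f in (5.6, 6.4, 6.9, 8.0)}
print(max(scores, key=scores.get))            # -> 5.6
```

The detected stimulus is simply the candidate frequency with the largest SNR.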

B. Ratio Power Spectral Density Analysis (Ratio-PSDA)

The second method implemented is defined in Equation 2:

\mathrm{Ratio} = \frac{\mathrm{error}}{\mathrm{ampl}} = \frac{|f_k - f_h|}{P(f_h)}, \quad (2)

where k = 1, 2, \dots, N, N is the number of stimuli, and f_k is the frequency of the k-th stimulus. f_h is the frequency showing a peak in the power spectrum, and P(f_h) is the magnitude of the spectral peak closest to the target stimulus frequency (or one of its harmonics). When there is a peak exactly at the target frequency or at one of its harmonics, the error is zero. If the error is not zero but the amplitude is very large, the value of the ratio in Equation 2 is small compared to that of the other stimuli, indicating that the detected peak is very close to that stimulus.
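Equation 2 leaves some implementation freedom in how the peak f_h is located. One possible reading, sketched below in numpy, takes f_h as the dominant spectral peak and scores each candidate stimulus by its distance to that peak (over the fundamental and one harmonic), normalized by the peak magnitude; the low-frequency cutoff is our own assumption to ignore drift, not a detail from the paper:

```python
import numpy as np

def ratio_psda(eeg, fs, stim_freqs, harmonics=2):
    """Score each stimulus fk by |fk - fh| / P(fh), Eq. (2), where fh is
    the dominant spectral peak; the smallest ratio wins."""
    n = len(eeg)
    psd = np.abs(np.fft.rfft(eeg)) ** 2 / n
    freqs = np.fft.rfftfreq(n, 1 / fs)
    psd[freqs < 3] = 0                        # ignore DC/drift (assumption)
    h_idx = np.argmax(psd)                    # dominant peak f_h
    fh, p_fh = freqs[h_idx], psd[h_idx]
    ratios = {fk: min(abs(h * fk - fh) for h in range(1, harmonics + 1)) / p_fh
              for fk in stim_freqs}
    return min(ratios, key=ratios.get)

fs = 256
t = np.arange(5 * fs) / fs
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 6.4 * t) + 0.5 * rng.standard_normal(t.size)
print(ratio_psda(sig, fs, [5.6, 6.4, 6.9, 8.0]))   # -> 6.4
```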

C. Canonical Correlation Analysis (CCA)

CCA is a statistical technique that obtains the maximum similarity between two data sets. Fig. 2 shows a CCA representation in the frequency domain [4], [5]. The EEG channels are represented by the vector X, and the reference signals are represented by the vector Y. These reference signals are sines and cosines of the fundamental frequencies and their harmonics, as shown in Equation 3:

Y = \begin{bmatrix} \sin(2\pi f_k t) \\ \cos(2\pi f_k t) \\ \vdots \\ \sin(2\pi h f_k t) \\ \cos(2\pi h f_k t) \end{bmatrix}, \quad (3)

where k = 1, 2, \dots, N, N is the number of stimuli, and h is the number of harmonics of the target frequency.

Considering W_x and W_y the matrices that represent linear combinations of the vectors X and Y, the CCA method seeks the linear combinations W_x and W_y that present the highest correlation, subject to the constraints expressed by Equations 4 and 5:

E\left[xx^T\right] = E\left[x^T x\right] = E\left[W_x^T X X^T W_x\right] = 1, \quad (4)

E\left[yy^T\right] = E\left[y^T y\right] = E\left[W_y^T Y Y^T W_y\right] = 1. \quad (5)

Fig. 2: Schematic representation of canonical correlation analysis. (EEG trials X and generated reference signals Y_1, ..., Y_p are fed to CCA blocks, whose outputs are compared by Euclidean distance.)

The total correlation is calculated as the ratio between the cross-correlation and the autocorrelations of the input and output vectors, as shown in Equation 6:

\rho_k = \rho_{W_x, W_y}(x, y) = \frac{E\left[x^T y\right]}{\sqrt{E\left[x^T x\right]\,E\left[y^T y\right]}} = \frac{E\left[W_x^T X Y^T W_y\right]}{\sqrt{E\left[W_x^T X X^T W_x\right]\,E\left[W_y^T Y Y^T W_y\right]}}. \quad (6)
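Numerically, the maximal canonical correlation of Eq. (6) can be obtained from orthonormal bases of the two column spaces: the singular values of Q_x^T Q_y are the canonical correlations. A numpy sketch, classifying a synthetic two-channel trial against the four stimulus frequencies used in the paper (the sampling rate, noise level and harmonic count here are illustrative choices, not the paper's):

```python
import numpy as np

def max_canon_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y,
    via QR + SVD (singular values of Qx.T @ Qy are the correlations)."""
    Qx, _ = np.linalg.qr(X - X.mean(0))
    Qy, _ = np.linalg.qr(Y - Y.mean(0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def reference(fk, t, harmonics=2):
    """Reference matrix from Eq. (3): sines/cosines of fk and harmonics."""
    return np.column_stack([f(2 * np.pi * h * fk * t)
                            for h in range(1, harmonics + 1)
                            for f in (np.sin, np.cos)])

fs = 256
t = np.arange(3 * fs) / fs
rng = np.random.default_rng(1)
# two "EEG channels" driven by a 6.9 Hz stimulus, plus noise
X = np.column_stack([np.sin(2 * np.pi * 6.9 * t + ph) for ph in (0.0, 0.7)])
X = X + 0.8 * rng.standard_normal(X.shape)
rho = {fk: max_canon_corr(X, reference(fk, t)) for fk in (5.6, 6.4, 6.9, 8.0)}
print(max(rho, key=rho.get))                  # -> 6.9
```

The classifier picks the stimulus frequency whose reference set yields the largest ρ_k.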

D. Least Absolute Shrinkage and Selection Operator (LASSO)

Suppose data (x_i, y_i), i = 1, 2, \dots, N, where x_i = (x_{i1}, \dots, x_{ip})' are the predictor variables and y_i are the responses [6]. Assume that the x_{ij} are standardized so that \sum_i x_{ij}/N = 0 and \sum_i x_{ij}^2/N = 1. As in the usual regression set-up, assume either that the observations are independent or that the y_i are conditionally independent given the x_{ij}. The LASSO estimate is

(\hat{\alpha}, \hat{\beta}) = \arg\min \sum_{i=1}^{N} \left( y_i - \alpha - \sum_j \beta_j x_{ij} \right)^2 \quad \text{subject to} \quad \sum_j |\beta_j| \le t, \quad (7)

where t \ge 0 is a tuning parameter that controls the amount of shrinkage [6]. The solution to Equation 7 is a quadratic programming problem with linear inequality constraints [6].
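In practice the LASSO is usually computed via cyclic coordinate descent on the equivalent Lagrangian form of Eq. (7). A toy numpy sketch on synthetic data (the penalty λ, data sizes and coefficients below are hypothetical, chosen only to show the sparsity-inducing behavior):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """LASSO by cyclic coordinate descent on the Lagrangian form
    min ||y - X b||^2 / (2N) + lam * ||b||_1, with standardized columns
    (so each coordinate update is a simple soft-threshold)."""
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]            # partial residual
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0)  # soft threshold
    return b

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 6))
X = (X - X.mean(0)) / X.std(0)            # standardize, as the text assumes
beta_true = np.array([3.0, 0.0, 0.0, -2.0, 0.0, 0.0])
y = X @ beta_true + 0.1 * rng.standard_normal(200)
b = lasso_cd(X, y, lam=0.2)
print(np.round(b, 1))                     # sparse: only features 0 and 3 survive
```

The ℓ1 penalty drives the irrelevant coefficients exactly to zero, which is why the paper groups LASSO with the time-domain classifiers.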

E. Detecting Event-Related Desynchronization (ERD)

ERD detection aims to identify in the signal the signature of a known event [7]. When the user closes his/her eyes for a while, a variation of the energy in the alpha band can be detected. This variation is easier to detect using electrodes located at the occipital region, specifically electrodes Oz, O1 and O2. The signal is first filtered in the range of 8 Hz to 13 Hz. The resulting signal is then differentiated and squared, to emphasize the high frequencies, and integrated using a sliding window. Event detection is performed through a kurtosis threshold criterion [8]: kurtosis values higher than 3 are considered valid events. This feature does not depend on amplitude values, since it operates only on the waveform shape. Fig. 3 shows the schematic representation of the synchronization detector.
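The detection pipeline described above can be sketched with numpy alone. The paper does not specify the filter type, window length or exact kurtosis estimator, so the FFT-masking band-pass, 1 s window and standardized fourth moment below are illustrative assumptions:

```python
import numpy as np

def alpha_event_kurtosis(eeg, fs, win=1.0):
    """Pipeline from the text: band-pass 8-13 Hz (FFT masking here),
    differentiate, square, integrate over a sliding window, and return
    the kurtosis of the resulting trace (values above 3 flag an event)."""
    spec = np.fft.rfft(eeg)
    f = np.fft.rfftfreq(len(eeg), 1 / fs)
    spec[(f < 8) | (f > 13)] = 0              # keep only the alpha band
    alpha = np.fft.irfft(spec, len(eeg))
    feat = np.diff(alpha) ** 2                # derivative, then squaring
    w = int(win * fs)
    energy = np.convolve(feat, np.ones(w) / w, mode="valid")
    z = (energy - energy.mean()) / energy.std()
    return np.mean(z ** 4)                    # (non-excess) kurtosis

fs = 256
t = np.arange(10 * fs) / fs
rng = np.random.default_rng(3)
noise = 0.3 * rng.standard_normal(t.size)
# a 1 s burst of 10 Hz "alpha" simulates the eyes-closed interval
burst = np.where((t > 4) & (t < 5), np.sin(2 * np.pi * 10 * t), 0.0)
k_closed = alpha_event_kurtosis(noise + burst, fs)
k_open = alpha_event_kurtosis(noise, fs)
print(k_closed > 3, k_closed > k_open)
```

A short, strong alpha burst produces a heavy-tailed energy trace, so its kurtosis clears the threshold of 3 used in the paper.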

Fig. 3: Detection of synchronization events to trigger stimuli. (The diagram shows the raw signal, band-pass filtering from 8 to 13 Hz, differentiation (d/dt) and squaring, and the resulting detection trace.)

F. Autonomous car

The autonomous car used in this work was developed to emulate the behavior and decision-making of a common driver, using the sensors necessary for this task. From this perspective, an overview of a system for commanding an autonomous vehicle is shown in Fig. 4.

Fig. 4: Levels representing the autonomous vehicle navigation (sense, perception, decision/planning, steering, control, actions, environment) [9].

In general, the command of an autonomous vehicle has the following structure:

1) Control level: the lowest level of control is the one that arises from direct communication with the sensors: GPS, radar, etc. This level is based on skill-based monitoring of behavior, such as maintaining a direction and a speed.

2) Steering level: the middle level of command is performed with sensor information, called perception, from which micro-commands are issued in order to drive the car. These commands are related to rule-based monitoring of behavior. Thus, one can define a reference speed or distance, with rules such as stopping at red lights or changing lanes on a highway.

3) Planning level: the highest level concerns an abstract structure; when decisions are made and executed it is called planning, and it corresponds to knowledge-based behavior. This analysis requires conscious decisions and information from the environment.

III. BRAIN COMPUTER INTERFACES AND NAVIGATION DEVICES

In the process of commanding robotic wheelchairs or mobile robots, several interfaces have been implemented for different types of functions. Table I shows some of these proposals.

The proposal of this work, outlined in Fig. 5, is to develop a hybrid brain computer interface combining an asynchronous BCI with an SSVEP-BCI. The motivation is that, although the SSVEP-BCI has a high performance with respect to its success rate, the presence of stimuli when no decision is required can lead to fatigue. For this reason, an asynchronous BCI is used here to enable/disable the stimuli, which can reduce fatigue.

TABLE I: Some proposals for the command of autonomous vehicles.

Planning Level: Bell et al. (2008); Valbuena et al. (2007); Ferreira et al. (2009)
Steering Level: Millán et al. (2004); Leeb et al. (2006); Pfurtscheller et al. (2006); Trejo et al. (2006); Ma et al. (2007); Martinez et al. (2007); Pires et al. (2008); McFarland et al. (2008); Galán et al. (2008); Brouwer et al. (2008); Ron-Angevin et al. (2009); Müller et al. (2013)
Control Level: None

Fig. 5: Proposed system to command an autonomous vehicle using a brain computer interface. (The user interacts with an asynchronous BCI (ERD) that turns the stimuli on/off and a synchronous SSVEP-BCI that selects menu options, which are sent to the planning, steering and control levels of the autonomous car, with feedback from the environment.)

The brain commands are sent to the classification module, and the autonomous car plans and executes the corresponding action. A menu was developed (Fig. 6) which presents the possible destinations, georeferenced using GPS. Each selection has a command validation. When a decision has been validated, the system disables the stimuli. When the user wants to request a new activity, he/she should close the eyes for a while; the asynchronous BCI detects this condition and reactivates the stimuli.

IV. RESULTS

A. Synchronous BCI

Table II presents the results of data classification for the different users. The first 6 users have some type of disability. For each one of the classification techniques, the accuracy and the Information Transfer Rate (ITR) were calculated. ITR is a standard measure for communication systems, which quantifies the amount of information communicated per unit of time. ITR depends on both speed and accuracy and is defined by Equation 8 [10].

Fig. 6: Menu for destination selection and on/off stimuli control of the autonomous car. (Flowchart: turn on stimuli; select a destination through yes/no validations; send the command and turn off the stimuli; closed eyes, detected by the asynchronous BCI, turn the stimuli back on.)

B = (1 - P_u)\left(\log_2 N + P\log_2 P + (1 - P)\log_2\left(\frac{1 - P}{N - 1}\right)\right), \quad (8)

where N is the number of classes, P is the rate of correct classifications, and P_u is the rate of undefined classifications. The unit of ITR is [bits/s], but it can be expressed in [bits/min] by multiplying the result by the selection speed, i.e., the number of selections performed by the system in one minute. In this paper, the developed SSVEP-BCI sends one command per second, which leads to a rate of 60 selections per minute.
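Eq. (8) is easy to verify numerically. The paper's implementation was in MATLAB; the following Python sketch reproduces one of the entries of Table II (user 11 with CCA) from its reported accuracy and undefined-classification rates:

```python
import math

def itr_bits_per_min(n_classes, p, pu, selections_per_min=60):
    """Eq. (8): bits per selection, scaled to bits/min by the selection
    rate (the paper's BCI makes one selection per second -> 60/min).
    Degenerate accuracies are clamped for numerical safety."""
    if p >= 1.0:
        b = math.log2(n_classes)              # perfect accuracy
    elif p <= 0.0:
        b = 0.0
    else:
        b = (math.log2(n_classes) + p * math.log2(p)
             + (1 - p) * math.log2((1 - p) / (n_classes - 1)))
    return (1 - pu) * b * selections_per_min

# User 11 with CCA in Table II: 99.38 % accuracy, 1.25 % undefined
print(round(itr_bits_per_min(4, 0.9938, 0.0125), 2))   # ~114.7 bits/min
```

The result matches the 114.69 bits/min reported in Table II up to input rounding.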

Fig. 7 shows the distribution of the average accuracy percentages for each classification technique, presenting the maximum, the minimum, and the concentration of values around the average.

B. Asynchronous BCI

Tests based on the detection of events (closing and opening the eyes) through evaluation of the alpha rhythm provided a success rate of 90%. This high success rate shows the applicability of the proposed system.

V. DISCUSSION

With regard to the implemented algorithms, the following considerations can be highlighted:

1) Although the results are good in the controlled laboratory environment, it is necessary to investigate how these algorithms behave when implemented in real-world applications.

2) The classification algorithms were developed in the time domain (CCA and LASSO) and in the frequency domain (Traditional-PSDA and Ratio-PSDA), in order to have tools that can adapt to the situations that may arise in the autonomous car. Fig. 7(a) shows that the classifiers using signals in the time domain (CCA and LASSO) have better responses with respect to the minimum and average values. On the other hand, Ratio-PSDA performs better than the SNR-based Traditional-PSDA regarding the mean values and the concentration of the data in terms of success rate. Fig. 7(b) shows the success rates of the classifiers as a histogram.

Fig. 7: Success rate for classifiers based on SSVEP. (a) Diagrams showing the maximum, minimum and average accuracy values for the different classifiers (Trad-PSDA, Ratio-PSDA, CCA, LASSO). (b) Success rates of the different classifiers, shown as a histogram.

VI. CONCLUSIONS

Using a BCI outside the laboratory environment is a mandatory step for practical applications. Although the ERD detector was not extensively explored alongside SSVEP detection, the use of both paradigms is important in a practical BCI system. The tests with SSVEP obtained average success rates between 70% and 90%; however, the highest concentration of data was found in the range of 90% to 95%, suggesting suitability for the application in autonomous cars. The next step of this work is to design a system for data fusion of the classifiers, to be implemented in the autonomous car driven by a person with disabilities.

TABLE II: Results for 19 users, out of which 6 (*) have motor disabilities. Acc and Pu in [%]; ITR in [bit/min].

          Traditional-PSDA          Ratio-PSDA                CCA                       LASSO
Subject   Acc     Pu     ITR        Acc     Pu     ITR        Acc     Pu     ITR        Acc     Pu     ITR
1*        90.16   4.38   79.19      85.06   6.88   64.55      96.81   3.13   101.48     96.81   3.13   101.48
2*        52.61   27.50  10.92      74.24   40.63  27.39      75.97   26.25  36.45      79.38   27.50  40.86
3*        91.47   5.00   82.33      92.59   6.25   84.47      96.22   3.13   99.28      96.22   3.13   99.28
4*        86.89   7.50   68.39      95.19   15.00  83.92      95.39   4.38   95.11      95.39   4.38   95.11
5*        76.25   13.13  43.42      97.08   28.13  76.07      87.14   15.00  63.39      87.03   15.00  63.15
6*        84.25   8.13   61.86      46.16   40.63  5.39       93.63   50.63  46.14      94.37   16.25  80.32
7         93.05   15.63  77.26      90.69   28.75  60.12      91.58   11.88  76.67      93.03   12.50  80.07
8         76.19   14.38  42.69      88.18   31.88  52.68      88.47   16.25  65.41      88.47   16.25  65.41
9         90.21   7.50   76.75      97.43   2.50   104.57     96.22   0.63   101.84     96.22   0.63   101.84
10        73.77   6.25   42.44      96.71   5.00   99.13      95.59   1.25   98.93      95.59   1.25   98.93
11        99.38   3.75   111.78     98.06   3.75   105.76     99.38   1.25   114.69     99.38   1.25   114.69
12        70.15   30.63  26.96      91.70   20.00  69.90      95.73   11.25  106.18     95.73   11.25  106.18
13        90.74   5.00   80.29      94.86   2.50   95.17      96.18   2.50   99.80      96.18   2.50   99.80
14        79.83   10.00  51.58      93.31   26.88  67.57      96.81   10.00  94.26      96.81   10.00  94.26
15        59.72   26.88  17.08      91.25   21.88  67.19      92.65   11.25  80.13      92.65   11.25  80.13
16        96.22   0.63   101.84     96.18   1.25   101.08     97.50   0.63   106.85     97.50   0.63   106.85
17        96.84   1.25   103.57     96.84   3.13   101.60     96.25   0.63   101.97     96.25   0.63   101.97
18        93.49   2.50   90.65      96.25   1.25   101.33     96.11   3.75   98.26      96.11   3.75   98.26
19        97.36   2.50   104.26     96.11   3.13   98.90      98.13   0.63   109.48     98.13   0.63   109.48
Average   84.13   10.13  67.01      90.42   15.23  77.20      93.99   9.18   89.28      94.28   7.47   91.48

ACKNOWLEDGMENT

The authors would like to thank CAPES and CNPq for funding (process 133707/2013-0), and the Graduate Program of the School of Electrical and Electronic Engineering, University of Valle (Colombia).

REFERENCES

[1] J. Wolpaw, N. Birbaumer, D. McFarland, G. Pfurtscheller, and T. Vaughan, "Brain-computer interfaces for communication and control," Clinical Neurophysiology, vol. 113, no. 6, pp. 767–791, 2002. [Online]. Available: http://dx.doi.org/10.1016/S1388-2457(02)00057-3

[2] S. Müller, T. Bastos, and M. Sarcinelli, "Proposal of a SSVEP-BCI to command a robotic wheelchair," Journal of Control, Automation and Electrical Systems, vol. 24, pp. 97–105, 2013.

[3] F. Lobo, "Sistemas e veículos autónomos – aplicações na defesa," Master's thesis, Faculdade de Engenharia da Universidade do Porto, 2005, revised 4 Sep. 2013. [Online]. Available: http://paginas.fe.up.pt/~flp/papers/SVA-AD_flp_cdn05.pdf

[4] Q. Wei, M. Xiao, and Z. Lu, "A comparative study of canonical correlation analysis and power spectral density analysis for SSVEP detection," in Third International Conference on Intelligent Human-Machine Systems and Cybernetics, 2011, pp. 7–10.

[5] T. Tanaka, C. Zhang, and H. Higashi, "SSVEP frequency detection methods considering background EEG," in Proc. IEEE SCIS-ISIS, 2012, pp. 20–24.

[6] R. Tibshirani, "Regression shrinkage and selection via the LASSO," Journal of the Royal Statistical Society, Series B (Methodological), vol. 58, pp. 267–288, 1996.

[7] A. Ferreira, T. Bastos, M. Sarcinelli, J. Martín, J. García, and M. Mazo, "Improvements of a brain-computer interface applied to a robotic wheelchair," in Biomedical Engineering Systems and Technologies, A. Fred, J. Filipe, and H. Gamboa, Eds. Berlin: Springer Berlin Heidelberg, vol. 52, pp. 64–73, 2009.

[8] L. DeCarlo, "On the meaning and use of kurtosis," Psychological Methods, vol. 2, pp. 292–307, 1997.

[9] M. Thurlings, J. van Erp, A. Brouwer, and P. Werkhoven, "EEG-based navigation from a human factors perspective," in Brain-Computer Interfaces: Applying our Minds to Human-Computer Interaction. Springer, 2010, pp. 99–105.

[10] J. Millán, F. Renkens, J. Mouriño, and W. Gerstner, "Noninvasive brain-actuated control of a mobile robot by human EEG," IEEE Transactions on Biomedical Engineering, vol. 51, no. 6, pp. 1026–1033, 2004. [Online]. Available: http://dx.doi.org/10.1109/TBME.2004.827086

[11] C. Bell, P. Shenoy, R. Chalodhorn, and R. Rao, "Control of a humanoid robot by a noninvasive brain-computer interface in humans," Journal of Neural Engineering, vol. 2, pp. 214–220, 2008.

[12] A. Brouwer and J. van Erp, "A tactile P300 BCI and the optimal number of tactors: effects of target probability and discriminability," in Proceedings of the 4th International Brain-Computer Interface Workshop and Training Course. Graz: Verlag der Technischen Universität Graz, 2008, pp. 280–285.

[13] F. Galán, M. Nuttin, E. Lew, P. Ferrez, G. Vanacker, J. Philips, and J. Millán, "A brain-actuated wheelchair: asynchronous and non-invasive brain-computer interfaces for continuous control of robots," Clinical Neurophysiology, vol. 119, no. 9, pp. 2159–2169, 2008.

[14] J. Höhne, M. Schreuder, B. Blankertz, and M. Tangermann, "Two-dimensional auditory P300 speller with predictive text system," in Conf. Proc. IEEE Eng. Med. Biol. Soc., 2010, pp. 4185–4188.

[15] R. Leeb, C. Keinrath, D. Friedman, C. Guger, R. Scherer, C. Neuper, M. Garau, A. Antley, A. Steed, M. Slater, and G. Pfurtscheller, "Walking by thinking: the brainwaves are crucial, not muscles!" Presence: Teleoperators and Virtual Environments, vol. 5, pp. 500–514, 2006.

[16] Z. Ma, X. Gao, and S. Gao, "Enhanced P300-based cursor movement control," Lecture Notes in Computer Science (LNAI), vol. 4565, pp. 120–126, 2007.

[17] J. Markoff, "Google lobbies Nevada to allow self-driving cars," The New York Times, May 2011. [Online]. Available: http://www.nytimes.com/2011/05/11/science/11drive.html?_r=0

[18] P. Martinez, H. Bakardjian, and A. Cichocki, "Fully online multicommand brain-computer interface with visual neurofeedback using SSVEP paradigm," Computational Intelligence and Neuroscience, 2007.

[19] D. McFarland, D. Krusienski, W. Sarnacki, and J. Wolpaw, "Emulation of computer mouse control with a noninvasive brain-computer interface," Journal of Neural Engineering, vol. 2, pp. 101–110, 2008.

[20] G. Pfurtscheller, R. Leeb, C. Keinrath, D. Friedman, C. Neuper, C. Guger, and M. Slater, "Walking from thought," Cognitive Brain Research, vol. 1, pp. 145–152, 2006.

[21] G. Pires, M. Castelo-Branco, and U. Nunes, "Visual P300-based BCI to steer a wheelchair: a Bayesian approach," in Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC'08), 2008, pp. 658–661.

[22] R. Ron-Angevin, A. Díaz, and F. Velasco, "A two-class brain computer interface to freely navigate through virtual worlds (Ein Zwei-Klassen-Brain-Computer-Interface zur freien Navigation durch virtuelle Welten)," Biomedizinische Technik, vol. 54, no. 3, pp. 126–133, 2009.

[23] G. Schalk, J. Wolpaw, D. McFarland, and G. Pfurtscheller, "EEG-based communication: presence of an error potential," Clinical Neurophysiology, vol. 111, no. 12, pp. 2138–2144, 2000.

[24] E. Sutter, "The brain response interface: communication through visually-induced electrical brain responses," Journal of Microcomputer Applications, 1992.

[25] L. Trejo, R. Rosipal, and B. Matthews, "Brain-computer interfaces for 1-D and 2-D cursor control: designs using volitional control of the EEG spectrum or steady-state visual evoked potentials," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 14, no. 2, pp. 225–229, 2006. [Online]. Available: http://dx.doi.org/10.1109/TNSRE.2006.875578

[26] D. Valbuena, M. Cyriacks, O. Friman, I. Volosyak, and A. Gräser, "Brain-computer interface for high-level control of rehabilitation robotic systems," in IEEE 10th International Conference on Rehabilitation Robotics (ICORR'07), 2007, pp. 619–625.

[27] J. Wolpaw and D. McFarland, "Control of a two-dimensional movement signal by a noninvasive brain-computer interface in humans," Proceedings of the National Academy of Sciences of the United States of America, vol. 101, no. 51, pp. 17849–17854, 2004. [Online]. Available: http://dx.doi.org/10.1073/pnas.0403504101