XXII.4 July - August 2015
Page: 44

Brains, computers, and drones: Think and control!

Nataliya Kosmyna, Franck Tarpin-Bernard, Bertrand Rivet

Imagine you could control the world with your thoughts. Sounds appealing, doesn’t it? There is a technology that can capture your brain activity and issue commands to computer systems, such as robots, prosthetics, and games. Indeed, brain-computer interfaces (BCIs) have been around since the 1970s, and have improved with each passing decade. You might wonder: “Wait! If this technology has been around all this time, how come we’re not all using it? I mean, we hear about great applications sometimes in the press—controlling a drone, for instance—but then nothing seems to come of it. Why is that?”

BCIs capture brain activity via several different techniques, including magnetic resonance imaging (MRI), spectroscopy, and the most accessible method—electroencephalography (EEG)—which uses sensors to measure the electrical currents produced by the brain. The best signal quality would be obtained by drilling the skull open and placing electrodes directly on the brain tissue to measure the desired activity. However, we can all see why that isn't really worth the risk. Fortunately, the brain's electrical activity can also be measured and recorded by electrodes placed on the scalp (Figure 1). However, the skull and the fluids between it and the brain dampen the electrical signals considerably. Moreover, other parts of the body produce even more electricity than the brain: muscles. Muscular electrical activity can create strong interference in the scalp EEG reading, especially from muscles close to the head, such as those that operate our eyes. In fact, a major requirement for employing BCIs today is for the participant to remain so still as to not even blink.
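To give a concrete sense of how such interference is handled, most BCI pipelines begin by band-pass filtering the raw signal: slow eye-blink drifts sit below roughly 4 Hz and muscle bursts largely above 30 Hz, while the mu and beta rhythms used for motor imagery lie in between. A minimal sketch with SciPy—the synthetic signal and cutoff values are illustrative assumptions, not a prescription:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(eeg, fs, lo=8.0, hi=30.0, order=4):
    """Band-pass one EEG channel to the 8-30 Hz mu/beta range, attenuating
    slow eye-blink drifts (<4 Hz) and fast muscle bursts (>30 Hz)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg)  # zero-phase filtering, no time shift

# Synthetic 2-second recording at 250 Hz: a 12 Hz "brain" rhythm buried
# under a large 1 Hz blink-like drift and 60 Hz muscle-like noise.
fs = 250
t = np.arange(0, 2, 1 / fs)
brain = np.sin(2 * np.pi * 12 * t)
noisy = brain + 3 * np.sin(2 * np.pi * 1 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
clean = bandpass(noisy, fs)
```

Filtering only removes activity outside the band of interest; artifacts that overlap the band still require the stillness mentioned above or more elaborate source-separation methods.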



Besides the interference problems that may arise with scalp sensors, EEG records a mixture of all the activity that is taking place in the vicinity of the electrodes. This means that if we want to detect or measure one specific process in the brain, the signals it produces will drown in a sea of noise. Separating the signals we actually want from all the noise of other brain activity is complex, because the signals vary and have an intricate structure. Finally, for one particular process, say motor control (which allows you to move and direct your muscles), the signals produced by various individuals may be very different. Thus, it is difficult to produce a one-size-fits-all BCI. Each method must be carefully and meticulously adapted to each user, and each user needs to learn how to operate the BCI. The phase where the BCI is adapted and the user learns to use it is called the training phase and is obligatory before the use of any BCI [1].

It’s All About Training

The first BCIs relied on what is called operant conditioning (OC) training. The principle behind OC is to display continuous feedback based on brain-activity signals in an area of interest, so users can learn to consciously modulate their EEG and make the feedback match a detection target provided to them. Once the user learns how to do this, the signal that generated the feedback can be used as a continuous control signal, for instance, to steer a wheelchair or to control the lateral movement of a robotic limb. The main issue with OC is that it takes a prohibitive amount of time to achieve good control performance. Typically, hour-long training sessions, daily or weekly, are required to achieve even the most rudimentary semblance of control. After about one to two months, you can expect an acceptable degree of control, with, say, around 80 to 90 percent accuracy. This means that 10 to 20 percent of the time, the BCI will detect incorrect control commands.
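In OC protocols the continuous feedback is typically derived from band power in a frequency range of interest, for example the mu rhythm (8-12 Hz) over the motor cortex, which weakens when movement is imagined. A toy sketch of such a feedback signal—the window length, band, and baseline scheme are simplifying assumptions:

```python
import numpy as np

def band_power(window, fs, lo=8.0, hi=12.0):
    """Mean spectral power of one EEG window in the mu band (8-12 Hz)."""
    freqs = np.fft.rfftfreq(len(window), 1 / fs)
    psd = np.abs(np.fft.rfft(window)) ** 2 / len(window)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

def feedback_level(power, baseline):
    """Map current mu power onto a 0-1 feedback bar: suppressing the
    rhythm below the resting baseline drives the bar toward 1."""
    return float(np.clip(1.0 - power / baseline, 0.0, 1.0))

# Synthetic 1-second windows at 250 Hz: a strong resting mu rhythm
# versus a suppressed one, as during imagined movement.
fs = 250
t = np.arange(0, 1, 1 / fs)
resting = np.sin(2 * np.pi * 10 * t)
imagining = 0.2 * np.sin(2 * np.pi * 10 * t)
baseline = band_power(resting, fs)
bar_rest = feedback_level(band_power(resting, fs), baseline)
bar_imagery = feedback_level(band_power(imagining, fs), baseline)
```

In a real OC system this loop runs continuously: the user watches the bar and, over many sessions, learns which mental strategies move it.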

In recent years, with the advent of machine learning and of increasingly powerful computers and cloud clusters, new developments have become possible. On the one hand, advanced signal-processing algorithms can now be trained to separate noise from useful EEG signals. On the other hand, we can teach computers to better recognize EEG signals, reducing how much users must learn to modulate their own EEG. To do so, we train so-called classifiers: essentially black boxes that, given brain-signal inputs, learn the underlying patterns and identify different types of brain activity. A classifier must be fed many examples of what it is expected to recognize in order to build a classification model. Subsequently, for each new piece of data it is fed, the classifier outputs the category to which that data belongs. In the case of EEG, the examples are signals corresponding to each type of activity we want the classifier to recognize, for example, the set of signals recorded while the user was imagining moving their left or right hand. Once trained on these examples, the classifier can categorize new pieces of data as either imagined left-hand or right-hand activity.
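As a hedged illustration of this train-then-predict cycle, the sketch below fits a linear discriminant to synthetic two-channel band-power features; real systems extract richer features from many electrodes, but the overall shape is the same:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Hypothetical training examples: mu-band power over left- and right-
# hemisphere motor electrodes. Imagined left-hand movement suppresses
# power over the right (contralateral) hemisphere, and vice versa.
left_hand = rng.normal([1.0, 0.3], 0.15, (100, 2))   # label 0
right_hand = rng.normal([0.3, 1.0], 0.15, (100, 2))  # label 1
X = np.vstack([left_hand, right_hand])
y = np.array([0] * 100 + [1] * 100)

clf = LinearDiscriminantAnalysis().fit(X, y)

# A new window resembling the right-hand pattern is classified as such.
prediction = clf.predict([[0.35, 0.95]])[0]
```

The hard part in practice is not this classification step but obtaining features this well separated from noisy scalp signals.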

This type of training alleviates the difficulty of using BCIs for users. Indeed, training sessions with this technique tend to be no longer than 20 minutes. However, a training session is very intensive and requires much focus for the whole duration of the session. Users typically lose focus or tire quickly. The examples are recorded following a precisely timed protocol where signals are captured at fixed intervals and where the user must follow a sometimes intense rhythm.

Recent improvements over such protocols allow the training to be incremental: Only a few examples are captured to train the BCI initially. Then, the BCI adapts to the current state of the user by either asking for more examples if performance is bad or by automatically estimating appropriate parameters.
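Sketched with scikit-learn's `partial_fit`, incremental training might look like this; the synthetic data, batch sizes, and stopping rule are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)

def record_batch(n):
    """Stand-in for capturing n fresh labelled examples per class
    from the user during calibration."""
    left = rng.normal([1.0, 0.3], 0.2, (n, 2))
    right = rng.normal([0.3, 1.0], 0.2, (n, 2))
    return np.vstack([left, right]), np.array([0] * n + [1] * n)

clf = SGDClassifier(random_state=0)

# Train initially on only a handful of examples.
X, y = record_batch(5)
clf.partial_fit(X, y, classes=np.array([0, 1]))

# Whenever validation performance is poor, ask the user for a few more
# examples and update the model in place, rather than rerunning a long
# calibration session from scratch.
X_val, y_val = record_batch(50)
for _ in range(20):
    if clf.score(X_val, y_val) >= 0.9:
        break
    X, y = record_batch(5)
    clf.partial_fit(X, y)
```

The user only pays for additional calibration when the system actually needs it, which is the point of the incremental protocols described above.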

To Be or Not to Be BCI-Illiterate?

The problem with all these training protocols is that they rely on the ability of users to modulate their EEG activity, which remains a very difficult task. Not everyone is able to use BCIs: About 20 percent of BCI users suffer from what is known as BCI illiteracy, the inability to modulate their signals. Among the remaining 80 percent, most users achieve only passable performance (approximately 75 percent accuracy). Compared with other interaction modalities, the error rates are high and the number of people who can use BCIs is limited. One approach to reducing these limitations is to use BCIs in multimodal systems, where they control non-critical elements of the interaction and another, more robust modality takes over for the rest. Another alternative is to record signals from hundreds of users and build a signal dictionary that can then be used to obtain a training-free BCI [2]. The more users' recordings are used to build the database, the lower the likelihood that a particular user will be unable to use the BCI. However, building a signal database is a costly venture: It takes years to acquire the necessary amount of signals, and the many participants involved must be compensated.
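The dictionary idea can be caricatured as a nearest-neighbor lookup: a new user's brief calibration recording is compared against stored templates from previous users, and the closest match supplies ready-made classifier parameters. A minimal sketch in which the template format, distance metric, and stored parameters are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical dictionary built from many previous users: each entry
# pairs a per-user feature template with that user's tuned classifier
# parameters (here just a single decision threshold, for illustration).
dictionary = [
    {"template": rng.normal(0.0, 1.0, 8), "params": {"threshold": 0.1 * i}}
    for i in range(100)
]

def match_user(calibration_features, dictionary):
    """Return the stored parameters of the most similar previous user,
    by Euclidean distance between feature templates."""
    dists = [np.linalg.norm(calibration_features - e["template"])
             for e in dictionary]
    return dictionary[int(np.argmin(dists))]["params"]

# A new user whose calibration features resemble user 42's template
# inherits user 42's parameters, skipping a full training phase.
new_user = dictionary[42]["template"] + rng.normal(0.0, 0.05, 8)
params = match_user(new_user, dictionary)
```

The larger and more diverse the dictionary, the more likely some stored template is close enough to the new user, which is exactly the scaling argument made above.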

Training for Dummies

Feedback is an essential part of BCI training and one of the main factors behind the perceived tediousness of such protocols. Modifying the nature of the feedback can have a large influence on both user perception and the resulting control performance. For example, to ensure that beginners learn faster without being discouraged, biasing the feedback positively (reporting that the user is doing better than is actually the case) yields very good results. For experienced users, matters are different: Providing slightly negative feedback is more beneficial for performance, since positively biased feedback motivates novices but no longer teaches experienced users anything. Beyond this, most practices used in formative feedback and for educational purposes can be transposed to BCI training [3].
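The biasing policy itself fits in a few lines of code; the bias magnitudes below are placeholders, not the values used in the cited studies:

```python
def displayed_accuracy(true_accuracy, experience, bias=0.10):
    """Return the accuracy value to show the user. Novices see a
    positively biased value so they stay motivated; experienced users
    see a slightly negative bias so they keep pushing. The bias
    magnitudes here are illustrative placeholders."""
    if experience == "novice":
        return min(1.0, true_accuracy + bias)
    return max(0.0, true_accuracy - bias / 2)

novice_view = displayed_accuracy(0.70, "novice")  # inflated for motivation
expert_view = displayed_accuracy(0.70, "expert")  # slightly deflated
```

A real system would of course keep logging the true accuracy internally and only bias what is rendered on screen.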

Additionally, current BCI feedback interfaces and visualizations are very limited and technical. While this is not a big deal for laboratory experiments, for systems intended for consumers and the general public, intuitively understandable feedback is of paramount importance. For instance, recent research is looking into using virtual reality and video games to integrate training and make it more engaging [4].

Adopting Your Very Own BCI

If you would like to use a BCI today, what options do you have? Professional-grade EEG amplifiers and recording equipment are notoriously expensive, costing up to $30,000 and sometimes even more. Aside from the price, typical electrodes use conductive gel to establish contact with the scalp. This means that after using such an amplifier, your hair will be covered with a gooey substance that must be washed off—certainly not the most enjoyable experience. Alternative electrodes that do not require gel (called active dry electrodes) exist. They have an electronic circuit on each electrode that attempts to compensate for the signal loss due to lower conductivity (no conductive gel, remember!). However, such electrodes are typically made of gold and can cost up to $4,000, which is not affordable for most consumers.

Fortunately, some alternatives exist. A few private companies such as Emotiv (http://emotiv.com) and NeuroSky (http://neurosky.com) have been producing consumer-grade headsets at a low cost (less than $500—compare that to $30,000). The Emotiv EPOC, the first of its kind, has 14 fixed electrodes with saline pads, making contact with the scalp through small sponges soaked in salt water. The standard edition bundles a software suite that allows elementary control of some applications. The EPOC recognizes a limited range of brain activity (this does not include motor activity; however, it is compatible with virtual keyboard applications that allow users to type with the BCI). NeuroSky's solution has a single electrode and concentrates on measuring emotions and other elementary cognitive states. A recent Kickstarter campaign gave rise to OpenBCI (http://www.openbci.com), an open hardware initiative that aims to provide cheap, accessible, do-it-yourself EEG acquisition equipment (electrodes, skull caps, acquisition hardware). The project sells electronic boards compatible with the Arduino platform that support eight or 16 electrodes and allow the capture of myriad physiological signals. Although limited to gel electrodes at the moment and sold with minimal packaging, it is a very flexible solution that allows detecting activity over the whole scalp and building virtually any type of BCI system.

As for software, there are several open source software toolkits that allow building BCIs. The most notable toolkits are BCI2000 (research oriented) and OpenViBE (research oriented, but provided with current BCI system implementations that are easy to use). OpenViBE offers a simple user interface and extensive online documentation and support [2].

Command and Control

Now, the most interesting question of them all: What can I actually do with a BCI? BCIs have been applied to many areas of control: robots, prosthetics, wheelchairs, speller interfaces [1]. Most recently, BCIs have been applied to controlling video games, including World of Warcraft and various custom-built games. Here we focus on one particularly fun use of BCIs: controlling a robot or drone. In 2013, a team at the University of Minnesota presented a highly publicized BCI system that achieved 3D control of a Parrot AR.Drone quadcopter. That work, however, aimed at high-accuracy telepresence (so was not intended for home use) and employed operant conditioning with training sessions spanning two months. Such a long training process is certainly not something you'd want to undertake just to give BCIs a try at home on a weekend. We recently presented a BCI system for drone control at major HCI conferences (UbiComp and CHI) that allowed visitors to try BCIs in a few minutes (Figure 2) [5,6]. Certainly it is not possible to achieve full 3D control with high accuracy within a few minutes; however, participants can learn just enough to perform some limited 2D control and have fun. We proposed that visitors use imagined hand and feet movements (imagining and intending the movement, stopping just short of actually performing it) to toggle between the landed and flying states and to make the drone fly forward until it reached a destination target. The task was very simple and the control somewhat imperfect, but most visitors were thrilled by the experience and eager for more. We even had a visit from the award-winning science fiction author Margaret Atwood, who enjoyed the experience. This goes to show that despite their imperfections, BCIs can already be used in HCI applications, including those for entertainment and art.
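The toggle-and-fly scheme described above can be caricatured as a small state machine mapping classifier decisions onto drone commands. The decision labels and command strings below are illustrative placeholders, not our actual implementation or the AR.Drone API:

```python
# Minimal state machine turning BCI classifier output into drone commands.
# Labels and command strings are hypothetical, for illustration only.
class DroneBCI:
    def __init__(self):
        self.flying = False

    def on_decision(self, decision):
        """'hands' (imagined hand movement) toggles takeoff/land;
        'feet' (imagined feet movement) pushes the drone forward while
        airborne; anything else leaves the drone hovering or idle."""
        if decision == "hands":
            self.flying = not self.flying
            return "takeoff" if self.flying else "land"
        if decision == "feet" and self.flying:
            return "forward"
        return "hover" if self.flying else "idle"

bci = DroneBCI()
commands = [bci.on_decision(d)
            for d in ["hands", "feet", "feet", "hands", "feet"]]
# commands == ["takeoff", "forward", "forward", "land", "idle"]
```

Keeping the command vocabulary this small is what makes the demo learnable in minutes: the BCI only has to distinguish a couple of coarse mental states, not a full 3D control signal.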
We hope that you will give BCIs a go and develop new interaction techniques that leverage their full recreational potential.


1. Wolpaw, J.R., Birbaumer, N., McFarland, D.J., Pfurtscheller, G., and Vaughan, T.M. Brain-computer interfaces for communication and control. Clin. Neurophysiol. 113, 6 (Jun. 2002), 767–791.

2. Congedo, M., Goyat, M., Tarrin, N., Varnet, L., Rivet, B., Ionescu, G., Jrad, N., Phlypo, R., Acquadro, M., and Jutten, C. ‘Brain Invaders’: A prototype of an open-source P300-based video game working with the OpenViBE platform. Proc. of 5th International BCI Conference. 2011, 280–283.

3. Lotte, F., Larrue, F., and Mühl, C. Flaws in current human training protocols for spontaneous Brain-Computer Interfaces: Lessons learned from instructional design. Front. Hum. Neurosci. 7 (2013), 568.

4. Lécuyer, A., Lotte, F., Reilly, R., and Leeb, R. Brain-computer interfaces, virtual reality, and videogames. Computer 42, 10 (2008), 66–72.

5. Kosmyna, N., Tarpin-Bernard, F., and Rivet, B. Adding human learning in brain computer interfaces (BCIs): Towards a practical control modality. ACM Trans. Comput.-Hum. Interact. (2015), to appear.

6. Kosmyna, N., Tarpin-Bernard, F., and Rivet, B. Bidirectional feedback in motor imagery BCIs: Learn to control a drone within 5 minutes. CHI '14 Extended Abstracts on Human Factors in Computing Systems. 2014, 479–482.


Nataliya Kosmyna is a third-year Ph.D. student in HCI and brain-computer interfaces under the supervision of Franck Tarpin-Bernard in the Engineering Human-Computer Interaction Research Group (EHCI, or LIGIIHM; http://iihm.imag.fr/en/) in Grenoble, France. Her interests include brain-computer interfaces, multimodal interaction, and machine learning. natalie@kosmina.eu

Franck Tarpin-Bernard is a professor at the University of Grenoble, France. He specializes in brain-training software (stimulation, remediation, and rehabilitation) for both mass-market and professional applications. He also works on brain-computer interfaces (BCIs), experimenting with non-invasive BCIs from an interactional point of view (usability, cognitive overload, etc.). franck.tarpin-bernard@ujf-grenoble.fr

Bertrand Rivet is an assistant professor in GIPSA-Lab at the University of Grenoble, France. His research topics include biomedical signal processing (EEG, MEG, ECG), modeling of event-related potentials (ERPs), brain-computer interfaces, and multimodal speech processing. bertrand.rivet@gipsa-lab.grenoble-inp.fr


Figure 1. An EEG acquisition device.

Figure 2. The demonstration stand for BCI drone control.

©2015 ACM  1072-5220/15/07  $15.00


