One of the most prominent examples of technology facilitating speech is the computer system used by the famous scientist Stephen Hawking. Hawking has ALS and has used a wheelchair for the majority of his adult life. To communicate, he uses a program developed by Intel®: characters are presented on an on-screen keyboard, and a cursor scans across the options. Dr. Hawking controls the cursor using a sensor that detects his cheek muscle movements. More recently, a word prediction algorithm, built on a library generated from Hawking’s many books and lectures, was introduced to speed up the process. The system has been continually modified and improved as Dr. Hawking’s symptoms have changed.

While Dr. Hawking relies on a cheek muscle sensor, eye movements and brain waves have also been used to control speech devices. Researchers have made significant efforts to develop products that facilitate communication for patients with impaired speech. In this post, we will discuss some of the exciting technologies available.

Technology From The ’80s Is The Basis For Current Speech BMI Devices.

In 1988, Farwell and Donchin described a brain-machine interface that could produce speech using EEG signals. Specifically, the system recognizes the P300, a large, positive electrical potential generated 300–400 ms after a stimulus. The system works as follows:

  1. Letters of the alphabet and numbers are arranged in a matrix on a display screen. The rows and columns of the matrix are lit up one at a time.
  2. The subject is told to focus on the character he or she wants to input.
  3. A P300 brain wave is generated when the row or column containing the character of interest is illuminated.
  4. The input character is inferred from the P300 brain wave and the corresponding row and column that were illuminated.
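
The steps above can be sketched in a few lines of code. This is a minimal illustration, not the original implementation: the 6×6 matrix layout and the scoring inputs are invented, and real systems score each post-flash EEG epoch with a trained classifier rather than receiving ready-made scores.

```python
# Minimal sketch of P300 speller decoding (matrix layout and scores are
# illustrative). Rows and columns flash one at a time; each flash's EEG
# epoch is scored for a P300-like response, and the target character sits
# at the intersection of the best-scoring row and column.
MATRIX = [
    "ABCDEF",
    "GHIJKL",
    "MNOPQR",
    "STUVWX",
    "YZ1234",
    "567890",
]

def decode_character(row_scores, col_scores):
    """Return the character at the intersection of the row and column
    whose flashes evoked the strongest P300-like response."""
    best_row = row_scores.index(max(row_scores))
    best_col = col_scores.index(max(col_scores))
    return MATRIX[best_row][best_col]

# Toy example: the subject attends to 'P' (row 2, column 3).
row_scores = [0.1, 0.2, 0.9, 0.1, 0.2, 0.1]  # row 2 flash evokes a P300
col_scores = [0.2, 0.1, 0.1, 0.8, 0.1, 0.2]  # column 3 flash evokes a P300
print(decode_character(row_scores, col_scores))  # -> P
```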

According to Farwell and Donchin, subjects could input 2.3 characters per minute on average. Although this speed is too slow for practical use, the research was an important breakthrough. Many products on the market are based on the P300 Speller model; one example is the intendiX® SPELLER by g.tec.

Since 1988, further research has sought to improve the speed and accuracy of the P300 Speller. The model’s major drawback is its slow input speed: to achieve acceptable accuracy, the P300 response must be recorded repeatedly and averaged, and the rows and columns are illuminated one at a time, causing further delay. Thankfully, numerous companies and researchers are seeking to address these concerns.
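
Why does repetition help? Averaging N epochs time-locked to the flash shrinks the random noise by roughly 1/√N while the evoked P300 component survives, at the cost of N-fold slower input. The simulation below illustrates this trade-off; all amplitudes and noise levels are invented for illustration.

```python
import random

random.seed(0)

def noisy_epoch(p300_amplitude=1.0, noise_sd=2.0, n_samples=50):
    # Each sample = fixed evoked component + zero-mean background noise.
    return [p300_amplitude + random.gauss(0, noise_sd) for _ in range(n_samples)]

def average_epochs(n_trials):
    # Average n_trials time-locked epochs sample by sample.
    epochs = [noisy_epoch() for _ in range(n_trials)]
    n_samples = len(epochs[0])
    return [sum(e[i] for e in epochs) / n_trials for i in range(n_samples)]

for n in (1, 10, 100):
    avg = average_epochs(n)
    mean = sum(avg) / len(avg)
    print(f"{n:3d} trials -> mean amplitude {mean:5.2f}")
```

With one trial the evoked component (amplitude 1.0) is buried in noise twice its size; by 100 trials the averaged waveform sits close to 1.0 — which is exactly why the classic speller trades speed for accuracy.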

MindAffect Is Working To Pick Up Speed.

Amsterdam-based MindAffect has developed an algorithm that helps ALS patients input characters more rapidly using a variation on the P300 Speller design.

Similar to the P300 Speller, characters are arranged in a matrix and subjects are instructed to focus on the character of interest. In the MindAffect version, each character is illuminated according to its own pseudo-random “gold code,” and activity in the visual cortex is measured using EEG. The character of interest is determined by matching the recorded brain waves against the response predicted for each character’s illumination sequence. The advantage of the MindAffect technology is that its predictive ability is not hindered by poor resolution, whether caused by weak electrical signals or a limited number of electrodes.
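
The matching step can be sketched as template correlation. This is a toy in the spirit of code-modulated VEP decoding, not MindAffect’s actual algorithm: the codes here are plain random bits rather than true gold codes, and a real system convolves each code with a learned brain-response model before comparing.

```python
import random

random.seed(1)

CODE_LEN = 63
CHARS = "ABCD"
# Each character flickers with its own pseudo-random binary code.
codes = {c: [random.randint(0, 1) for _ in range(CODE_LEN)] for c in CHARS}

def correlation(x, y):
    # Pearson correlation between two equal-length sequences.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def decode(measured):
    # Pick the character whose code best predicts the measured signal.
    return max(CHARS, key=lambda c: correlation(measured, codes[c]))

# Toy measurement: the subject attends to 'C', so the visual cortex
# response follows C's flicker code plus noise.
measured = [bit + random.gauss(0, 0.8) for bit in codes["C"]]
print(decode(measured))  # -> C
```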

MindAffect was founded by Dr. Peter Desain at Radboud University. In their published work, Desain and his team demonstrate 86% input accuracy at a rate of 8.99 characters per minute. Dr. Desain has been conducting BMI research for more than 15 years, and the company spun out of his work at the university. Recently, MindAffect attended CES 2018 with support from the Dutch government.

State-Of-The-Art Research Achieves An Input Rate Of 60 Characters Per Minute.

A novel approach to BMI speech generation also uses illuminated characters but applies a different process to increase input speed. In this iteration, each character is given its own illumination pattern, with variations in the frequency and phase of the flickering. By measuring brain waves, single-trial steady-state visual evoked potentials (SSVEPs) can be detected and attributed to the different frequency and phase patterns. SSVEPs are strong, distinct signals that can be identified within 0.5 seconds. In theory, with only a 0.5-second delay per selection, significantly more characters can be input compared to other models.
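
The idea of frequency tagging can be sketched as follows. The frequencies, window length, and signal model here are illustrative (the cited work uses joint frequency–phase coding and canonical correlation analysis across many electrodes); this toy scores a single 0.5 s window against a sine/cosine reference at each candidate frequency.

```python
import math

FS = 250          # sampling rate, Hz (assumed)
WINDOW = 0.5      # seconds of data per decision
FREQS = {"A": 8.0, "B": 10.0, "C": 12.0, "D": 16.0}  # flicker rates, Hz

def ssvep_power(signal, freq):
    # Project the window onto sine and cosine at `freq` (a one-bin DFT),
    # so phase does not matter, only the power at that frequency.
    s = sum(x * math.sin(2 * math.pi * freq * i / FS) for i, x in enumerate(signal))
    c = sum(x * math.cos(2 * math.pi * freq * i / FS) for i, x in enumerate(signal))
    return (s * s + c * c) / len(signal)

def decode(signal):
    # The attended character is the one whose flicker frequency
    # dominates the measured window.
    return max(FREQS, key=lambda ch: ssvep_power(signal, FREQS[ch]))

# Toy window: the subject looks at 'B', whose position flickers at 10 Hz.
n = int(FS * WINDOW)
signal = [math.sin(2 * math.pi * 10.0 * i / FS) for i in range(n)]
print(decode(signal))  # -> B
```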

According to the researchers, the SSVEP system has an accuracy of over 90%. Furthermore, their latest work, published in 2018, shows an input rate of 325.33 ± 38.17 bits per minute, equivalent to roughly 65 characters per minute.
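
The conversion from bits to characters is a back-of-envelope calculation: if every selection is correct, each pick from an M-target matrix carries log₂(M) bits. The matrix size below is an assumption made to match the quoted figure, and perfect accuracy is a simplification of the full information-transfer-rate formula.

```python
import math

def chars_per_minute(bits_per_minute, n_targets):
    # Assuming perfect accuracy, each selection from n_targets
    # characters carries log2(n_targets) bits.
    return bits_per_minute / math.log2(n_targets)

# 325.33 bits/min over an assumed 32-character matrix -> ~65 chars/min.
print(round(chars_per_minute(325.33, 32), 1))  # -> 65.1
```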

Let Your Brain Do The Talking.

The latest research suggests BMI technology can produce 60 characters per minute; for comparison, typical typing speed on a keyboard is 40 words per minute (approximately 200 characters). As these devices continue to improve, we could see a rate of 100 characters per minute in the near future. This would be a significantly enhanced form of communication for the many individuals who struggle with speech production.


  • Farwell, L. A., & Donchin, E., 1988. Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials. Electroencephalography and Clinical Neurophysiology, 70(6), 510–523.
  • Thielen, J., van den Broek, P., Farquhar, J., & Desain, P., 2015. Broad-Band Visually Evoked Potentials: Re(con)volution in Brain-Computer Interfacing. PLoS ONE, 10(7): e0133797.
  • Chen, X., et al., 2015. High-speed spelling with a noninvasive brain–computer interface. Proceedings of the National Academy of Sciences, 112(44), E6058–E6067.