A groundbreaking study by Meta using advanced mind-reading AI technology is offering new hope for people with communication impairments caused by brain injury or neurological disorders.
Unlike previous brain-computer interfaces, which required invasive implants that carried risks of infection, brain hemorrhage, or brain damage and that deteriorated over time, the new technology reads brain signals from outside the skull. The system, known as Brain2Qwerty, then translates those signals into written language.
Brain2Qwerty uses magnetoencephalography (MEG), which measures the magnetic fields produced by the electrical currents in the brain. This produces clearer signals than electroencephalography (EEG), which measures electrical brain activity from the scalp.
For the study, 35 healthy volunteers were briefly shown letters forming sentences. They were asked to memorize the sentences and type them on a keyboard while their brain activity was recorded using MEG. The researchers then trained the Brain2Qwerty AI system to match the brain patterns with the typed letters.
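The core idea of this training step can be sketched in miniature. The study's actual model is a deep neural network trained on real MEG recordings; the toy version below uses synthetic data and a simple nearest-centroid classifier purely to illustrate the mapping from signal windows to typed characters. Every name and number here is an assumption for illustration, not part of the study.

```python
import math
import random

random.seed(0)
CHARS = "abc"        # toy alphabet; the real system covers a full keyboard
N_FEATURES = 16      # stand-in for per-window MEG sensor features

# Synthetic "brain patterns": each typed character gets its own template.
templates = {c: [random.gauss(0, 1) for _ in range(N_FEATURES)] for c in CHARS}

def simulate_window(c):
    """One noisy feature window 'recorded' while typing character c."""
    return [t + random.gauss(0, 0.3) for t in templates[c]]

def centroid(windows):
    """Average the training windows for one character."""
    return [sum(col) / len(col) for col in zip(*windows)]

# "Training": pair recorded windows with the letters typed at the time.
centroids = {c: centroid([simulate_window(c) for _ in range(50)]) for c in CHARS}

def decode(window):
    """Predict the character whose learned centroid is nearest the window."""
    return min(CHARS, key=lambda c: math.dist(window, centroids[c]))

decoded = "".join(decode(simulate_window(c)) for c in "abcabc")
```

With clean synthetic signals this toy decoder recovers the typed sequence exactly; the hard part of the real system is that MEG windows are far noisier and the model must also exploit linguistic context.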
Although the results were not perfect, Brain2Qwerty decoded an average of 68% of characters correctly from the participants’ brain signals. This is more than double the accuracy of EEG-based methods, which averaged about 33%.
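The figures above are character-level accuracies. As a rough, simplified stand-in for the metrics reported in the study (which uses character error rate over aligned sequences), a position-by-position comparison looks like this; the function name and example strings are illustrative only.

```python
def char_accuracy(predicted, target):
    """Fraction of character positions decoded correctly.

    A simplified illustration: it compares strings position by position,
    whereas published error rates typically align sequences first.
    """
    matches = sum(p == t for p, t in zip(predicted, target))
    return matches / max(len(target), 1)

# Two wrong characters out of eleven -> 9/11, about 82% accuracy.
score = char_accuracy("helko worle", "hello world")
```

At 68%, roughly one character in three still comes out wrong, which is why the decoded text is legible but not yet reliable.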
The study also revealed some interesting insights into how the brain processes language. Using MEG, researchers captured 1,000 “snapshots” of brain activity per second, allowing them to observe the precise moment that thoughts became letters and words. The study revealed that the brain keeps letters and words distinct with a dynamic neural code.
“The neural activity preceding the production of each word is marked by the sequential rise and fall of context-, word-, syllable-, and letter-level representations,” one researcher on the project explained.
This means that the brain first considers the word’s context, then its meaning, and finally, its syllables and letters.
The research also indicated that Brain2Qwerty was picking up both abstract language signals and motor commands linked to the typing movements. For example, the AI system’s mistakes often involved mixing up letters that sit close to each other on the keyboard, much as a typist might intend to press “k” and accidentally strike “l” instead. The system even corrected typographical errors on the fly, showing that it could capture both motor and cognitive intent.
The system has not yet been refined and simplified enough to make it practical for everyday use. Still, it offers hope for future developments that may be helpful to those with communication impairments. More research is needed, particularly with subjects who have actual communication impairments.