A Speech Synthesizer Plugged Directly Into the Brain

"Recordings" from the surface of the brain to produce scientists unprecedented ideas on how paralyzed people with the help of the brain to control it.

Could a paralyzed person who is unable to speak, such as the physicist Stephen Hawking, use a brain implant to hold a conversation?

That is now the goal of ongoing research at universities in the US, which over more than five years has shown that recording devices placed under the human skull can detect the brain activity associated with speech.

While the results are preliminary, Edward Chang, a neurosurgeon at the University of California, San Francisco, says he is working on building a wireless brain-computer interface that could translate brain signals directly into audible speech using a voice synthesizer.

Work on a speech prosthesis builds on the success of earlier experiments in which paralyzed volunteers used brain implants to manipulate robotic limbs with their thoughts (see "The Thought Experiment"). That technology works because scientists can roughly interpret the firing of neurons inside the motor cortex and match it to movements of the arms or legs.

Now Chang's team is trying to do the same for speech. The task is much harder, partly because full human language is unique to our species, so the technology cannot easily be tested in animals.

At his university, Chang carries out speech experiments in conjunction with brain surgery he performs on patients with epilepsy. A plate of electrodes placed under the patient's skull records electrical activity from the surface of the brain. Patients wear the device, known as an "electrocorticography array," for several days so that doctors can locate the exact source of their seizures.

Chang studies brain activity in these patients while they are still able to speak. In a paper published in the journal Nature last year, he and his colleagues described how they used the electrode arrays to map the pattern of electrical activity in a brain region called the ventral sensorimotor cortex as patients uttered simple syllables such as "bah" and "goo."

The idea is to record the electrical activity in the motor cortex that drives the lips, tongue, and vocal cords while a person is talking. Using mathematical analysis, Chang's team showed that these data can reveal "many key phonetic features."
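Chang's actual analysis is not spelled out here, but the general recipe for this kind of decoding can be sketched. The Python sketch below rests entirely on assumptions (a 1 kHz sampling rate, 64 electrodes, the 70-150 Hz high-gamma band as the informative signal, a plain logistic-regression classifier, and synthetic data): extract a power feature per electrode, then train a classifier to tell two syllables apart.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression

FS = 1000  # sampling rate in Hz (assumed)

def high_gamma_power(ecog, fs=FS, lo=70.0, hi=150.0):
    """Band-pass each channel in the high-gamma range and return mean power.

    ecog: array of shape (n_trials, n_channels, n_samples).
    """
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecog, axis=-1)
    return (filtered ** 2).mean(axis=-1)  # -> (n_trials, n_channels)

# Synthetic stand-in for recorded trials: 200 utterances, 64 electrodes, 0.5 s each.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((200, 64, 500))
y = rng.integers(0, 2, size=200)  # e.g. 0 = "bah", 1 = "goo"

X = high_gamma_power(X_raw)
clf = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```

On real recordings the features would carry phonetic information; on the random data above the classifier can only perform at chance, which is why the held-out test split matters.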

One of the cruelest effects of amyotrophic lateral sclerosis (ALS) is that, as the paralysis spreads, people lose not only the ability to move but also the ability to speak. Some ALS patients use devices that exploit whatever residual movement they retain to communicate. Hawking, for example, uses software that lets him spell out words very slowly, syllable by syllable, by twitching his cheek muscles. Other patients use eye-trackers to control a computer mouse.

The idea of using a brain-computer interface to restore speech has been raised before. One company has, since the 1980s, been testing technology that uses a single electrode implanted directly inside the human brain to record from people with locked-in syndrome. In 2009 the company described work on decoding speech from a 25-year-old paralyzed man who was unable to move or speak.

In another study, published this year by Mark Slutsky of Northwestern University, researchers attempted to decode motor-cortex signals as patients read aloud words containing all 39 phonemes of English (the consonant and vowel sounds that make up speech). The team identified phonemes with an average accuracy of 36 percent. The study used the same type of surface electrodes as Chang's.

Slutsky said that although this accuracy may seem very low, it was achieved with a relatively small sample of words spoken in a limited amount of time. "We expect to achieve much better decoding results in the future," he says. Speech-recognition software could also help work out which word a person is trying to say, the scientists note.
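For context, random guessing among 39 phonemes would score under 3 percent, so 36 percent reflects real signal. The point about speech-recognition software can be illustrated with a toy sketch: even when each phoneme is decoded with low confidence, restricting the answer to real words in a lexicon can recover what was said. The phoneme inventory, lexicon, and probabilities below are invented for illustration, not taken from the studies.

```python
import numpy as np

# Hypothetical phoneme inventory and lexicon (toy subset of English).
PHONEMES = ["b", "g", "uw", "aa", "t", "k"]
LEXICON = {"bah": ["b", "aa"], "goo": ["g", "uw"], "boot": ["b", "uw", "t"]}

def most_likely_word(phoneme_probs, lexicon=LEXICON):
    """Pick the lexicon word best matching a sequence of decoded phoneme
    probability vectors (one vector per time step, summing to 1)."""
    idx = {p: i for i, p in enumerate(PHONEMES)}
    best_word, best_score = None, -np.inf
    for word, phones in lexicon.items():
        if len(phones) != len(phoneme_probs):
            continue  # this toy model only compares words of matching length
        # Log-probability that the decoder's outputs spell this word.
        score = sum(np.log(probs[idx[p]]) for probs, p in zip(phoneme_probs, phones))
        if score > best_score:
            best_word, best_score = word, score
    return best_word

# A noisy decoder output: "b", then "uw", then "t", each only ~50% confident.
decoded = [
    np.array([0.5, 0.2, 0.1, 0.1, 0.05, 0.05]),
    np.array([0.1, 0.2, 0.5, 0.1, 0.05, 0.05]),
    np.array([0.1, 0.1, 0.1, 0.1, 0.5, 0.1]),
]
print(most_likely_word(decoded))  # -> "boot"
```

Real speech recognizers scale the same idea up, using a pronunciation lexicon and a language model to rescore the decoder's noisy phoneme hypotheses.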

Article prepared by the Telebreeze team.

Source: habrahabr.ru/company/telebreeze/blog/230263/