Turning brain signals into speech using artificial intelligence moves closer to reality

May 2, 2019

In an effort to provide a voice for people who can’t speak, neuroscientists have designed a device that can transform brain signals into speech.

Although the technology can synthesize whole sentences that are mostly intelligible, it is not yet accurate enough for use outside the lab. Its creators described their speech-decoding device in a study published on 24 April in Nature.

Scientists have previously used artificial intelligence to translate single words, mostly consisting of one syllable, from brain activity.


The researchers worked with five people who had electrodes implanted on the surface of their brains as part of epilepsy treatment. First, the team recorded brain activity as the participants read hundreds of sentences aloud. Then, [neurosurgeon Edward] Chang and his colleagues combined these recordings with data from previous experiments that determined how movements of the tongue, lips, jaw and larynx created sound.


The team trained a deep-learning algorithm on these data, and then incorporated the program into their decoder. The device transforms brain signals into estimated movements of the vocal tract, and turns these movements into synthetic speech. People who listened to 101 synthesized sentences could understand 70% of the words on average, Chang says.
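The two-stage pipeline described above can be sketched in code. This is a minimal illustration, not the authors' model: the study used recurrent neural networks trained on each participant's recordings, while here the two stages are stand-in linear maps, and all dimensions and names (`N_NEURAL`, `N_KINEMATIC`, `N_ACOUSTIC`, `decode`) are hypothetical.

```python
import numpy as np

# Hypothetical dimensions -- placeholders, not figures from the study.
N_NEURAL = 256      # neural features recorded per time step
N_KINEMATIC = 33    # estimated vocal-tract movements (tongue, lips, jaw, larynx)
N_ACOUSTIC = 32     # acoustic features handed to a speech synthesizer

rng = np.random.default_rng(0)

# Stand-in linear "decoders"; the real system used trained deep networks.
W_stage1 = rng.standard_normal((N_NEURAL, N_KINEMATIC)) * 0.01
W_stage2 = rng.standard_normal((N_KINEMATIC, N_ACOUSTIC)) * 0.01

def decode(neural_signals: np.ndarray) -> np.ndarray:
    """Two-stage decoding: brain activity -> vocal-tract movements -> acoustics."""
    kinematics = neural_signals @ W_stage1   # stage 1: estimated articulator movements
    acoustics = kinematics @ W_stage2        # stage 2: acoustic features for synthesis
    return acoustics

# Simulated recording: 100 time steps of neural activity.
ecog = rng.standard_normal((100, N_NEURAL))
features = decode(ecog)
print(features.shape)  # (100, 32)
```

The key design point the sketch preserves is the intermediate articulatory representation: rather than mapping brain signals directly to sound, the decoder first estimates vocal-tract movements and only then converts those movements into speech.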

Read full, original post: Brain signals translated into speech using artificial intelligence
