
Turning brain signals into speech using artificial intelligence moves closer to reality

May 2, 2019

In an effort to provide a voice for people who can’t speak, neuroscientists have designed a device that can transform brain signals into speech.

Although the device can synthesize whole sentences that are mostly intelligible, the technology isn’t yet accurate enough for use outside the lab. Its creators described their speech-decoding device in a study published on 24 April in Nature.

Scientists have previously used artificial intelligence to translate single, mostly monosyllabic words from brain activity.

…

The researchers worked with five people who had electrodes implanted on the surface of their brains as part of epilepsy treatment. First, the team recorded brain activity as the participants read hundreds of sentences aloud. Then, [neurosurgeon Edward] Chang and his colleagues combined these recordings with data from previous experiments that determined how movements of the tongue, lips, jaw and larynx created sound.


The team trained a deep-learning algorithm on these data, and then incorporated the program into their decoder. The device transforms brain signals into estimated movements of the vocal tract, and turns these movements into synthetic speech. People who listened to 101 synthesized sentences could understand 70% of the words on average, Chang says.
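The two-stage pipeline described above can be sketched in miniature: one stage maps brain activity to estimated vocal-tract movements, and a second stage maps those movements to acoustic features for a synthesizer. This is a hypothetical illustration only; the simple linear maps below stand in for the study's trained deep-learning networks, and all dimensions and names are assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not the study's actual values).
N_NEURAL = 256    # neural feature channels recorded from the electrodes
N_KINEMATIC = 33  # estimated articulator (tongue, lips, jaw, larynx) dimensions
N_ACOUSTIC = 32   # synthesizer parameters per audio frame

# Stand-in "trained" weights for each decoding stage.
W_articulation = rng.standard_normal((N_NEURAL, N_KINEMATIC)) * 0.1
W_acoustics = rng.standard_normal((N_KINEMATIC, N_ACOUSTIC)) * 0.1

def decode(neural_frames: np.ndarray) -> np.ndarray:
    """Map (T, N_NEURAL) brain activity to (T, N_ACOUSTIC) speech features.

    Stage 1: neural activity -> estimated vocal-tract movements.
    Stage 2: vocal-tract movements -> acoustic features for synthesis.
    """
    kinematics = np.tanh(neural_frames @ W_articulation)  # stage 1
    acoustics = kinematics @ W_acoustics                  # stage 2
    return acoustics

# 100 time steps of simulated brain activity in, 100 frames of features out.
frames = rng.standard_normal((100, N_NEURAL))
speech_features = decode(frames)
print(speech_features.shape)  # (100, 32)
```

The key design point reflected here is the intermediate articulatory representation: rather than decoding sound directly from brain signals, the decoder first estimates physical movements of the vocal tract and only then converts those into audio.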

Read full, original post: Brain signals translated into speech using artificial intelligence

The GLP aggregated and excerpted this article to reflect the diversity of news, opinion, and analysis. Click the link above to read the full, original article.