
A decoder reads the thoughts recorded by a brain scanner

2023-05-01


Thanks to artificial intelligence, the system captures the meaning of sentences rather than their literal wording


Three subjects listened to a New York Times podcast and monologues from a popular American radio show while having their brains scanned.

With a decoder of their own design, American scientists managed to convert the brain scans not only into complete sentences, but into texts that reproduced with great fidelity what the subjects had heard.

According to their results, published today in the scientific journal Nature Neuroscience, this so-called "semantic" decoder was also capable of verbalizing what the subjects thought and, even more, what was going through their heads while watching silent movies.

Since the beginning of the century, and especially in the last decade, there have been great advances in the design of brain-computer interfaces (BCIs).

Most aim to let people who cannot speak, or even move any of their muscles, communicate.

But most of these systems require opening up the skull and placing an array of electrodes directly into the brain.

Another, less invasive approach relies on functional magnetic resonance imaging (fMRI).

Here, instead of implanted electrodes, the subject lies inside a scanner.

fMRI does not record neuronal activity directly, but the changes in blood oxygen levels that this activity causes.

This poses resolution problems: on the one hand spatial, because the signal is recorded from outside the skull, and on the other temporal, because changes in blood oxygenation lag several seconds behind the neuronal activity that drives them.
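To picture the temporal limit, here is a minimal sketch (illustrative Python, not the study's code; the response shape, timings, and sampling rate are rough assumptions) of how a quick burst of heard words smears into a slow, delayed fMRI signal:

```python
import numpy as np

dt = 0.5                                  # seconds per sample (assumed)
t = np.arange(0, 20, dt)

# Crude stand-in for the haemodynamic response: a slow bump peaking a few
# seconds after a neural event (real analyses use calibrated functions).
hrf = (t / 5.0) ** 2 * np.exp(-t / 1.5)
hrf /= hrf.sum()

# Neural "events": one word heard every half second for the first 4 seconds.
neural = (t < 4.0).astype(float)

# The measured BOLD signal is roughly the events convolved with the slow
# response, so individual words blur together and arrive seconds late.
bold = np.convolve(neural, hrf)[: len(t)]
print(np.round(bold, 3))
```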

To solve these problems, a group of researchers from the University of Texas (United States) has turned to an artificial intelligence system that will sound familiar to many: GPT, the same one on which the ChatGPT bot is based.

This language model, developed by the OpenAI artificial intelligence lab, uses deep learning to generate text.

In this study, they trained it with fMRI images of the brains of three people who listened to 16 hours of audio from a New York Times podcast and The Moth Radio Hour program.

In this way they were able to match what the subjects heard with its representation in their brains.

The idea is that, when the subjects listen to a new text, the system can anticipate it based on the patterns it has already learned.
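In outline, this training step amounts to fitting an "encoding model" that predicts each voxel's response from features of the heard text. The sketch below is a minimal illustration of that idea using ridge regression; the data are random stand-ins, and the array shapes and penalty are assumptions, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_features, n_voxels = 1000, 64, 200   # made-up sizes

# Features of the heard text over time (the study derived these from GPT;
# random numbers here are placeholders) and the recorded fMRI signal.
X = rng.normal(size=(n_timepoints, n_features))
true_W = rng.normal(size=(n_features, n_voxels))
Y = X @ true_W + rng.normal(scale=2.0, size=(n_timepoints, n_voxels))

# Ridge regression: W = (X'X + alpha*I)^-1 X'Y, one weight map per voxel.
alpha = 10.0
W = np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ Y)

# Given the features of a NEW text, the fitted model predicts the brain
# response that text should evoke; decoding then searches for the text
# whose predicted response best matches the scan actually observed.
Y_pred = X @ W
r = np.corrcoef(Y_pred.ravel(), Y.ravel())[0, 1]
print("fit correlation:", round(float(r), 3))
```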


"This is the original GPT, not like the new one [ChatGPT is based on the latest version of GPT, GPT-4]. We collected a ton of data and then built this model, which predicts brain responses to stories," Alexander Huth, a neuroscientist at the University of Texas, said in a webcast last week.

With this procedure, the decoder proposes sequences of words, "and for each of those words that we think might come next, we can measure how good that new sequence sounds and, in the end, see if it matches the brain activity that we observe," he details.
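A toy version of the loop Huth describes might look like the following: a language model proposes candidate continuations, an encoding model predicts the scan each candidate would evoke, and a small beam of best-matching sequences is kept. Every component here (the vocabulary, the hash-based stand-in encoding model, the beam width) is an invented placeholder, not the study's implementation:

```python
import numpy as np

VOCAB = ["the", "dog", "ran", "home", "quickly", "barked"]

def word_vec(word, size=200):
    """Stand-in for the encoding model's per-word contribution."""
    seed = abs(hash(word)) % (2 ** 32)     # deterministic within one run
    return np.random.default_rng(seed).normal(size=size)

def predict_response(words):
    """Stand-in encoding model: word sequence -> predicted fMRI pattern."""
    return sum((word_vec(w) for w in words), np.zeros(200))

def score(words, observed):
    """How well the predicted response matches the observed scan."""
    return float(np.corrcoef(predict_response(words), observed)[0, 1])

observed = predict_response(["the", "dog", "ran", "home"])  # the "real" scan

beam = [[]]                                # start from an empty sequence
for _ in range(4):
    # A language model would propose likely next words; here, the whole vocab.
    candidates = [seq + [w] for seq in beam for w in VOCAB]
    candidates.sort(key=lambda s: score(s, observed), reverse=True)
    beam = candidates[:3]                  # keep the 3 best-matching sequences
print("decoded:", " ".join(beam[0]))
```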

This decoder has been called semantic, and rightly so.

Previous interfaces recorded brain activity in motor areas that control the mechanical basis of speech, that is, movements of the mouth, larynx, or tongue.

“What they can decode is how the person is trying to move their mouth to say something.

Our system works on a very different level.

Instead of looking at the low-level motor domain, it works at the level of ideas, of semantics, of meaning.

That is why it does not record the exact words that someone heard or pronounced, but rather their meaning," explains Huth.

To this end, although the scans recorded activity in various brain areas, the researchers focused on those related to hearing and language.

Jerry Tang prepares one of the subjects for the experiments. The sheer size of the scanner, its cost, and the need for the subject to remain still and focused complicate the malevolent idea of reading other people's minds. Nolan Zunk/University of Texas at Austin

Once the model was trained, the scientists tested it with half a dozen people who had to listen to texts different from those used to train the system.

The machine decoded the fMRI images into a close approximation of what the stories told.

To confirm that the device operated on the semantic level rather than the motor level, they repeated the experiments, but this time asked the participants to imagine a story themselves and then write it down.

They found a close correspondence between what the machine decoded and what the humans wrote.

More difficult still, in a third batch of tests the subjects had to watch scenes from silent movies.

Although here the semantic decoder erred more often on specific words, it still captured the meaning of the scenes.
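One way to make "captured the meaning" concrete (a minimal illustration, not the paper's metric) is to compare texts in a vector space where synonyms sit close together, so a paraphrase scores high even with no words in common. The tiny hand-made word vectors below are assumptions:

```python
import numpy as np

emb = {  # toy word vectors; real systems would use learned embeddings
    "began":   np.array([1.0, 0.1, 0.0]),
    "started": np.array([0.9, 0.2, 0.1]),
    "scream":  np.array([0.0, 1.0, 0.2]),
    "yell":    np.array([0.1, 0.9, 0.3]),
    "table":   np.array([0.0, 0.0, 1.0]),
}

def text_vec(words):
    """Average-free bag-of-words vector, normalized to unit length."""
    v = sum(emb[w] for w in words)
    return v / np.linalg.norm(v)

reference = ["began", "scream"]   # what the subject actually heard
decoded   = ["started", "yell"]   # a paraphrase the decoder might produce
unrelated = ["table"]

print(float(text_vec(reference) @ text_vec(decoded)))    # high: same meaning
print(float(text_vec(reference) @ text_vec(unrelated)))  # low: different
```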

Neuroscientist Christian Herff leads research into brain-computer interfaces at Maastricht University (Netherlands), and almost a decade ago he created a BCI that converted brain waves into text, letter by letter.

Herff, who was not involved in this new work, highlights the incorporation of the GPT language predictor.

"This is really cool, since the GPT inputs contain the semantics of speech, not the articulatory or acoustic properties, as was done in previous BCIs," he says.

He adds: "They show that a model trained on heard speech can decode the semantics of silent films and also of imagined speech."

This scientist is "absolutely convinced that semantic information will be used in brain-machine interfaces for speech in the future."


Arnau Espinosa, a neurotechnologist at the Wyss Center Foundation (Switzerland), published a paper last year on a BCI with a totally different approach that allowed an ALS patient to communicate.

Regarding the current one, he notes that "its results are not applicable today to a patient; you need magnetic resonance equipment that costs millions and occupies a hospital room, but what they have achieved has not been achieved by anyone before."

The interface Espinosa worked on was different.

"We went for a signal with less spatial resolution, but much higher temporal resolution. We could know, microsecond by microsecond, which neurons were firing, and from there go to phonemes and to how a word is built," he adds.

For Espinosa, in the end it will be necessary to combine several systems that capture different signals. "Theoretically, it would be possible," he says.

Rafael Yuste, a Spanish neurobiologist at Columbia University in New York (United States), has long been warning about the dangers posed by advances in his own discipline.

“This research, and the Facebook study, demonstrate the possibility of decoding speech using non-invasive neurotechnology.

It is no longer science fiction," he says in an email.

“These methods will have huge scientific, clinical and commercial applications, but, at the same time, they herald the possibility of deciphering mental processes, since inner speech is often used to think.

This is one more argument for the urgent protection of mental privacy as a fundamental human right," he adds.

Anticipating these fears, the authors of the experiments wanted to see if they could use their system to read the minds of other subjects.

Fortunately, they found that the model trained with one person was not able to decipher what another person heard or saw.

To be sure, they ran one last series of tests.

This time they asked the participants to count by sevens, think of and name animals, or make up a story in their heads while listening to the stories.

Here, the GPT-based interface, with all the technology packed into an MRI machine and all the data handled by the AI, failed time and again.

For the authors, this would show that reading a mind requires the cooperation of its owner.

But they also warn that their research relied on the patterns of half a dozen people.

With data from tens or hundreds of people, they acknowledge, the danger could be real.


Source: El País
