
Artificial intelligence is getting better at reading minds

2023-05-03T15:51:42.325Z


In a recent experiment, researchers used large language models to translate brain activity into words.


Think about the words that are going through your head: that tasteless joke you wisely kept to yourself at dinner;

your unspoken impression of your best friend's new partner.

Now imagine if someone could hear them.

On Monday, scientists at the University of Texas at Austin took another step in that direction.

An MRI machine. (EFE/MARINA GUILLEN)

In a study published in the journal Nature Neuroscience, researchers describe an artificial intelligence capable of translating people's private thoughts by analyzing functional magnetic resonance imaging (fMRI), which measures blood flow to different regions of the brain.

Researchers have already developed language decoding methods to capture the speech attempts of people who have lost the ability to speak and allow paralyzed people to write just by thinking about writing.

But the new decoder is one of the first that does not rely on implants.

In the study, it was able to convert a person's imagined speech into actual speech and, when subjects were shown silent films, to generate relatively accurate descriptions of what was happening on screen.

"It's not just a linguistic stimulus," explained Alexander Huth, a neuroscientist at the university who led the research.

“We are getting at the meaning, at something about the idea of what is happening.

And the fact that that is possible is very exciting.”

The study focused on three participants, who came to Huth's lab for 16 hours over several days to listen to "The Moth" and other narrative podcasts.

As they listened, an fMRI scanner recorded blood oxygenation levels in parts of their brains.

Next, the researchers used a large language model to match patterns of brain activity to the words and phrases the participants had heard.

Large language models, such as OpenAI's GPT-4 and Google's Bard, are trained on large amounts of written text to predict the next word in a phrase or sentence.

In the process, the models create maps indicating how the words are related to one another.

Years ago, Huth realized that parts of these maps, the so-called context embeddings that capture the semantic features or meanings of sentences, could be used to predict how the brain lights up in response to language.
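To make that "encoding" direction concrete, here is a minimal, hypothetical sketch in Python: a linear (ridge) model that predicts each fMRI voxel's response from the context embeddings of the words a participant heard. The array shapes, variable names, and the ridge-regression choice are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: predict fMRI responses from language-model context embeddings.
# All shapes and the ridge-regression choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-ins: 1,000 scan time points, 768-dim embeddings, 500 voxels.
n_timepoints, emb_dim, n_voxels = 1000, 768, 500
embeddings = rng.standard_normal((n_timepoints, emb_dim))       # embeddings of the words heard, aligned to the scan
brain_activity = rng.standard_normal((n_timepoints, n_voxels))  # measured BOLD responses

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, brain_activity, test_size=0.2, random_state=0
)

# One linear map from embedding space to every voxel's response.
encoding_model = Ridge(alpha=10.0)
encoding_model.fit(X_train, y_train)

# Given embeddings of new language, predict the brain activity it should evoke.
predicted_activity = encoding_model.predict(X_test)
print(predicted_activity.shape)  # (200, 500)
```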

In a basic sense, said Shinji Nishimoto, a neuroscientist at Osaka University who was not involved in the research, "brain activity is a kind of encrypted signal, and language models provide ways to decipher it."

In their study, Huth and his colleagues reversed the process, in effect using another AI model to translate participants' fMRI images into words and phrases.
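Conceptually, that reversal could look something like the sketch below: a language model proposes candidate word sequences, the encoding model predicts the brain activity each candidate should produce, and the candidate whose prediction best matches the measured activity is kept. The propose_continuations helper and the correlation-based scoring are hypothetical stand-ins, not the published method.

```python
# Hypothetical sketch of decoding: rank language-model candidates by how well the
# encoding model's predicted activity matches the activity actually measured.
# propose_continuations() is a made-up helper that, given the text so far, yields
# (longer_text, embeddings_for_that_text) pairs from a language model.
import numpy as np

def score_candidate(candidate_embeddings, measured_activity, encoding_model):
    """Mean correlation, across voxels, between predicted and measured time courses."""
    predicted = encoding_model.predict(candidate_embeddings)
    correlations = [
        np.corrcoef(predicted[:, v], measured_activity[:, v])[0, 1]
        for v in range(measured_activity.shape[1])
    ]
    return float(np.nanmean(correlations))

def decode(measured_activity, encoding_model, propose_continuations,
           n_steps=10, beam_width=5):
    """Beam-style search over word sequences, guided only by brain activity."""
    beam = [("", float("-inf"))]  # (text so far, score)
    for _ in range(n_steps):
        expanded = []
        for text, _ in beam:
            for longer_text, embeddings in propose_continuations(text):
                score = score_candidate(embeddings, measured_activity, encoding_model)
                expanded.append((longer_text, score))
        beam = sorted(expanded, key=lambda pair: pair[1], reverse=True)[:beam_width]
    return beam[0][0]  # the best-scoring paraphrase of what the activity encodes
```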

The researchers tested the decoder by having the participants listen to new recordings and then checking how closely the translation matched the actual transcript.

Almost every word was out of place in the decoded transcript, but the meaning of the passage was usually preserved.

In essence, the decoders were paraphrasing.

Original transcript:

“I got up from the air mattress and pressed my face against the glass of the bedroom window expecting to see eyes looking back at me, but instead I found only darkness.”

Decoded from brain activity:

"I kept walking to the window and I opened the glass, I stood on tiptoe and looked out, I didn't see anything, I looked up again and I didn't see anything."

While in the fMRI scanner, the participants were also asked to silently imagine telling a story;

then they repeated the story out loud, for reference.

Here, too, the decoding model captured the gist of the unspoken version.

The participant's version:

“Look for a message from my wife saying she had changed her mind and would be back.”

Decoded version:

"Seeing her, for some reason I thought she would come to me and tell me that she misses me."

Finally, the subjects watched a short silent animated film, again while being scanned with fMRI.

By analyzing their brain activity, the language model was able to decode a rough synopsis of what they were watching, perhaps reflecting their internal description of it.

The result suggests that the AI decoder was picking up not only words but also meanings.

"Language perception is an external process, while imagination is an active internal process," Nishimoto said.

"And the authors showed that the brain uses

common representations

in these processes."

Greta Tuckute, a neuroscientist at the Massachusetts Institute of Technology who was not involved in the research, said that was "the high-level question."

"Can we decode meaning from the brain?" he continued.

"In a way, they show that we can."

This method of decoding language had limitations, Huth and his colleagues noted.

For one thing, fMRI scanners are bulky and expensive.

Furthermore, training the model is a long and tedious process, and to be effective it must be done individually for each person.

When the researchers tried to use a decoder trained on one person to read another person's brain activity, it failed, suggesting that each brain has unique ways of representing meaning.

The participants were also able to shield their internal monologues, throwing off the decoder by thinking of other things.

The AI may be able to read our minds, but for now it will have to do so one person at a time, and with our permission.

c.2023 The New York Times Company


Source: Clarín
