
Neurotechnology: Machines can now read minds


Human brains will soon be directly connected to computers. A study shows what is already possible: a paralyzed man learned to write again just by thinking about it - and thereby refuted Elon Musk.

Small electrodes pick up signals from the brain

Photo: Matthew Mckee

At the moment, a development is taking place that can be interpreted as part of the "Great Acceleration" that humanity is currently causing and experiencing: Learning machines can now read minds.

Human brains will be directly connected to computers in the lifetimes of many of the people who currently inhabit the earth.

I realize that sounds unbelievable, but a study published this week in the scientific journal Nature shows exactly that.

So-called brain-computer interfaces, i.e. interfaces between brains and computers, have been around for a long time.

They are primarily used to enable people who are paralyzed by accidents or illnesses to interact with their environment.

Such systems are currently reaching a new, qualitatively different level.

This is because, unlike in the past, it is no longer only people who are doing the learning, using their thoughts to control a cursor on a virtual keyboard.

Now the machines are also learning to understand people better and better.

Christian Stöcker


Born in 1973, he is a cognitive psychologist and has been a professor at the Hamburg University of Applied Sciences (HAW) since autumn 2016.

There he is responsible for the "Digital Communication" degree program.

Before that, he headed the Netzwelt department at SPIEGEL ONLINE.

Apple's big flop

The essence of the learning computer systems of the present and future consists primarily in recognizing patterns in complex, varied data sets and reacting to them.

We have long known this from our own everyday life: within a few years, systems based on speech recognition, from Alexa to Siri, have actually become suitable for everyday use.

Handwriting recognition is now also working amazingly well.

That was still different in the 1990s: At that time, Apple produced one of the biggest flops in the company's history with the Newton, a portable minicomputer with handwriting recognition as a central feature.

Partly because it was too difficult to teach the device to read one's own scrawl.

The machines program themselves

That has changed in the meantime, and that is once again due to the gigantic advances in machine learning.

Today's learning computers only need sufficient training data to recognize handwritten digits, for example - a standard exercise used in the training of machine-learning practitioners.

The digits are broken down into pixels, and the color values of these pixels pass through an artificial neural network that is supposed to name the correct digit at the end.

This doesn't work at all at first, but after many training sessions in which the network is corrected and thus changed, such systems can not only correctly recognize digits with which they were trained, but also other, new ones.
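The digit-recognition exercise described above can be reproduced in a few lines. A minimal sketch, using scikit-learn's bundled 8x8 digit images rather than full MNIST so that it stays self-contained:

```python
# Minimal sketch: train a small neural network to recognize handwritten
# digits, the standard teaching example mentioned above.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                      # 1797 images, 8x8 pixels each
X_train, X_test, y_train, y_test = train_test_split(
    digits.data / 16.0,                     # scale pixel values to [0, 1]
    digits.target, test_size=0.25, random_state=0)

# One hidden layer of 64 units; the repeated "training sessions in which
# the network is corrected" are the gradient-descent iterations below.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)

# The trained network also recognizes digits it was never trained on.
accuracy = clf.score(X_test, y_test)
print(f"accuracy on unseen digits: {accuracy:.2f}")
```

The held-out test set is the point of the exercise: the network is scored only on digits it has never seen, which is what "recognizing new ones" means in practice.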

According to the same principle, machines now also recognize objects in pictures, a task that was considered almost impossible to solve 20 years ago.

The machines are no longer programmed, they program themselves, at least in part.

Think of spoken words

There was already a publication in 2019 that went in a similar direction: In a "proof of concept", authors from the University of California created artificial speech output based on brain activity that occurs when speaking.

Here, too, artificial neural networks were used to interpret the signals from human brains, as was the case in several other studies on the subject.

The new work is nevertheless a breakthrough.

This technology, which is still new, is now turning research areas upside down - I've written an entire book about it.

A current example is the mRNA vaccines from Biontech, Moderna and Curevac: All of these companies use machine learning because the relationships between gene codes and the forms of proteins that cells produce are so complex.


Christian Stöcker

We are the experiment: our world is changing so breathtakingly that we stagger from crisis to crisis.

We have to learn to manage this tremendous acceleration.

Publisher: Karl Blessing Verlag

Number of pages: 384



Thinking letters onto paper, or at least into the computer

It was only a matter of time before someone applied this technology to brain-computer interfaces, and it is happening more and more often now.

The system published on Thursday can also learn to recognize handwriting - but in this case the letters are no longer actually written, only thought.

The patient in the study is paralyzed from the neck down by a spinal injury.

Although the accident lay nine years in the past at the time of the study, his brain was still producing motor signals when he imagined writing letters by hand.

These signals were recorded with an implant.

The man was able to train the machine himself: he imagined writing something, and the machine then received feedback on whether the result was correct.

He thus produced the training data set himself, which finally enabled the system to guess what he wanted to write with a very high hit rate.
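The training loop just described is ordinary supervised learning: labeled examples of imagined letters in, a decoder that guesses the letter out. A purely illustrative sketch, not the study's actual decoder (which worked on real electrode recordings with a recurrent network); here the "imagined" signals are simulated as noisy, letter-specific activity patterns, and the channel count is a made-up assumption:

```python
# Illustrative only: simulate imagined-letter brain signals and train a
# decoder on self-produced labeled examples, as in the study's setup.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
letters = list("abcdefghijklmnopqrstuvwxyz")
n_channels = 96          # hypothetical number of electrode channels

# Each letter gets a characteristic activity pattern; every imagined
# repetition is that pattern plus noise -- the self-produced training data.
patterns = rng.normal(size=(len(letters), n_channels))

def imagine(letter_idx, n):
    return patterns[letter_idx] + rng.normal(scale=0.8, size=(n, n_channels))

X = np.vstack([imagine(i, 40) for i in range(len(letters))])
y = np.repeat(np.arange(len(letters)), 40)

decoder = LogisticRegression(max_iter=1000).fit(X, y)

# Score the decoder on freshly "imagined" letters it has not seen before.
X_new = np.vstack([imagine(i, 5) for i in range(len(letters))])
y_new = np.repeat(np.arange(len(letters)), 5)
print(f"hit rate on new imagined letters: {decoder.score(X_new, y_new):.2f}")
```

The design point survives the simplification: once the patient's own attempts supply labeled examples, any standard classifier can turn new brain signals into guessed letters.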

»Almost as fast as writing on a smartphone«

Once the training was completed, this process went pretty quickly: the test subject finally managed 90 letters per minute, and with a subsequent auto-correction, as is known from smartphones, he achieved a hit rate of over 99 percent.

The conventional methods, in which test subjects control a cursor using a virtual keyboard, are at most half as fast.

The new system allows the patient "to write sentences with approximately the same speed that non-disabled adults can reach on a smartphone," says Jamie Henderson of Stanford University, one of the authors.

Of course, there are some absolutely non-trivial hurdles on the way to the brain-computer interface.

The procedure is invasive, which means that a chip the size of a small pill has to be implanted in the brain.

Of course, that carries risks.

And many alphabets have more than just 26 letters.

However, both the chips that can be used to record such signals and the implantation methods, but above all the learning machines that then read the signals, will improve continuously and rapidly over the next few years.

That too is part of the essence of learning machines.

Elon Musk, Iain M. Banks, and the "stupid" researchers

Such interfaces between machines and living people have existed for a long time in science fiction books.

In the cyberpunk novels of the 1980s, for example, in Ian McDonald's acclaimed "Luna" trilogy and in the "Culture" series by the late Scottish author Iain M. Banks.

In his books, the communication systems that people wear in their brains are called "Neural Lace".

There are already non-paralyzed people who feel the need to be able to think directly into a computer.

"Transhumanists" willing to experiment, the military and regimes that are keen on technological advances, such as the Chinese, will presumably drive the development forward without major ethical concerns.


There is even a company that promises a product à la "Neural Lace," with explicit reference to Banks' books.

The company is called Neuralink and one of its founders is Elon Musk.

A man, then, who seems to view science fiction books not as literary visions but as an instruction manual for the human future.

The company appears to be struggling to retain qualified staff, and many neuroscientists have accused Musk, probably rightly, of exaggeration and hype.

Musk returned the favor with a few pointed remarks about the inertia of research and about unworldly academics who think they are very clever "but are actually quite stupid" and cannot hold a candle to "industrial visionaries" who "get things done."

The Stanford team has just shown that it might be the other way around.

Source: spiegel
