
The limits of digital computing and neuromorphic chips

2023-01-02T04:28:01.482Z


A technology that mimics the architecture of the gelatinous mass in our heads may be the key to reinventing computing


Computational neuroscientists suggest that brains evolved as predictive machines to optimize their energy consumption. Yuichiro Chino (Getty Images)

A growing number of neuroscientists think that our brain is a kind of "prediction machine" that anticipates what is happening before it happens; that is, our perceptions are, in part, hypotheses.

Experiments by “computational neuroscientists” with artificial neural networks (the components of AI algorithms) suggest that brains evolved as prediction machines to optimize their energy consumption.
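A minimal sketch of that predictive-coding idea, written in Python for illustration; the update rule, the learning rate and the toy input stream are assumptions of this sketch, not taken from any specific study:

```python
import random

# Toy "prediction machine": a unit keeps an internal guess of its input.
# Each step it transmits only the prediction error and nudges the guess.
# The better the prediction, the smaller the error signal - and, by analogy,
# the less "energy" spent on signalling.

def predictive_unit(signal, learning_rate=0.1):
    prediction = 0.0
    total_error = 0.0
    for observation in signal:
        error = observation - prediction      # only the "surprise" is propagated
        prediction += learning_rate * error   # update the internal model
        total_error += abs(error)             # crude proxy for signalling cost
    return prediction, total_error

# A slowly drifting, mostly predictable input stream
stream = [10 + 0.01 * t + random.gauss(0, 0.1) for t in range(1000)]
final_guess, cost = predictive_unit(stream)
print(f"final prediction: {final_guess:.2f}, accumulated error (cost proxy): {cost:.1f}")
```

The point of the toy model is simply that a good internal model leaves little left to transmit, which is the energy argument in a nutshell.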

Life evolved by creating a perfect balance between what it could “compute” and the energy it could expend.

By interweaving genes and forms, the digital and the analog, hardware and software, reason and emotion, the universe created biological intelligence, which uses far less energy than our digital computers do.

Artificial intelligence progressed in a very different way from biological intelligence, obeying the laws of scientific geopolitics and industrial competition almost as much as those of physics.

Pioneers Alan Turing and John von Neumann, inspired by human biology and by Kurt Gödel's mathematics on the limits of logic and algorithms for understanding reality, created the first digital computers (with funding from the Manhattan Project and from military budgets during the Cold War).

Thanks to semiconductor physics, computers expanded their computing capacity as the components on chips could be made ever smaller.

Between 1980 and 2010 the memory and computing capacity of microprocessors doubled every two years.
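A quick back-of-envelope calculation, sketched here in Python, shows what that compounding implies over those 30 years (the strict factor of two every two years is the idealized Moore's-law figure the paragraph cites):

```python
# Back-of-envelope: doubling every two years between 1980 and 2010
years = 2010 - 1980           # 30 years
doublings = years / 2         # 15 doublings
factor = 2 ** doublings
print(f"{int(doublings)} doublings -> roughly {int(factor):,}x more capacity")
# -> 15 doublings -> roughly 32,768x more capacity
```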

This created a separation between chip manufacturers (the hardware side) and the developers of software and algorithms.

Computer scientists, and scientists in general, got used to thinking only about the algorithm, assuming it would run on machines capable of computing anything we threw at them.

But we are reaching the limits of this model.

On the one hand, chips cannot be miniaturized much further (the 2-nanometer limit has already been reached; there is no room left to keep shrinking).

On the other hand, only Taiwan and South Korea know how to make the most advanced chips, which creates an uncertain geopolitical situation.

But there is another problem, energy consumption, which is beginning to become another insurmountable obstacle for the fragile globalized production chains.

It is estimated that 3% of all the electricity used in the world is consumed in data centres, far more than all the electricity used by the entire UK.

Projections suggest that by 2030 it will rise to 13%.

The supercomputers we use for weather models, drug design, aircraft and car engineering, and so on, also consume a great deal: roughly the same electricity as a city of 10,000 inhabitants.

The Summit supercomputer at Oak Ridge National Laboratory, for example, produces annual CO2 emissions equivalent to those of more than 30,000 round-trip flights between Washington and London.

A single training run of a powerful AI algorithm (for example, a language translator) costs $4 million in electricity bills.
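A rough back-of-envelope conversion of that bill into energy, assuming an illustrative price of $0.10 per kWh (the price is an assumption of this sketch; the article only gives the dollar figure):

```python
# Rough conversion of the $4 million electricity bill into energy,
# ASSUMING an illustrative price of $0.10 per kWh (assumption, not from the article).
bill_usd = 4_000_000
price_per_kwh = 0.10          # assumed, for illustration only
energy_kwh = bill_usd / price_per_kwh
energy_gwh = energy_kwh / 1e6
print(f"~{energy_gwh:.0f} GWh for one training run under these assumptions")
# -> ~40 GWh
```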

A single cryptocurrency transaction uses as much electricity as a typical family consumes in a week.

These exorbitant expenses limit what can/should be computed.

Scientists are trying to improve the situation, but in an uncoordinated way.

Something does unite them, though: they all look to biology for inspiration, to living structures capable of computing with very low energy expenditure.

Algorithm designers are trying to incorporate the brain's predictive ability mentioned at the beginning, exploiting the physics of the process, in order to reduce the number of parameters in AI models.

But the trend among the powerful is not going that way: the race towards "artificial superintelligence" took off in 2020, when OpenAI (co-founded by Elon Musk) unveiled GPT-3, an algorithm with 175 billion parameters.

It was followed in 2021 by Google (1.6 trillion parameters) and the Beijing Academy of Artificial Intelligence (with 1.75 trillion).

Some scientists are clear about it: to make progress, the only solution is to look at biology again.

As in our brain, the hardware and the algorithm/software have to be closely related.

One particularly interesting area that is beginning to gain traction is that of "neuromorphic chips."

Neuromorphic designs mimic the architecture of the gelatinous blob in our heads, with computing units colocated alongside memory.

The researchers use analog computing, which can process continuous signals, just like real neurons.
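To give a feel for the kind of event-driven computation such chips implement, here is a toy leaky integrate-and-fire neuron in Python; it is a generic textbook model, not the design of NeuRRAM, Neurogrid or any specific chip:

```python
# Toy leaky integrate-and-fire neuron: a membrane "voltage" integrates input
# current, leaks over time, and emits a spike only when it crosses a threshold.
# Neuromorphic chips implement this kind of event-driven dynamics in silicon;
# this is a generic textbook model, not any particular chip's circuit.

def lif_neuron(input_current, dt=1.0, tau=20.0, threshold=1.0, v_reset=0.0):
    v = v_reset
    spikes = []
    for t, i in enumerate(input_current):
        v += dt * (-v + i) / tau          # leak toward rest, integrate the input
        if v >= threshold:                # fire only when the threshold is crossed
            spikes.append(t)
            v = v_reset                   # reset after the spike
    return spikes

# Constant drive: the neuron spikes at regular intervals and is silent otherwise
spike_times = lif_neuron([1.5] * 200)
print(f"{len(spike_times)} spikes, first few at steps {spike_times[:5]}")
```

The energy argument is that, like the toy neuron, neuromorphic hardware only "does work" when an event occurs, instead of clocking every circuit on every cycle.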

Several analog neuromorphic computers are already in operation. In the US, two examples are NeuRRAM (which consumes 1,000 times less energy than a digital chip) and Neurogrid, from the "Brains in Silicon" lab at Stanford.

In Europe, IMEC built the world's first self-learning neuromorphic chip and demonstrated its ability to learn to compose music.

It is not clear how these new systems will reach the real world.

The problem is that designing hardware is risky and expensive (developing a new chip costs 30-80 million dollars and takes 2-3 years).

Perhaps it is precisely the geopolitical situation, as happened at the birth of the first computers, that will give us the push.

In China, neuromorphic computing is seen as one of the areas in which the country can surpass current digital systems, and there are dedicated laboratories at all of its leading universities.

In the US, the military's Chief Digital and Artificial Intelligence Office (CDAO) and other defense institutions are already developing and financing the deployment of neuromorphic hardware for use in combat.

Applications include smart headsets/glasses, drones, and robots.

In an unstable world once again threatened by wars, geopolitics may lead us to reinvent computing and reconnect with Gödel, Turing and von Neumann to overcome its limits.

As they well knew, reality cannot be fully simulated by digital algorithms.

We return to the reality of physics, which always escapes the total control of human logic, to try to move forward.


Source: El País
