
Alberto Sangiovanni Vincentelli: "Technology advances by leaps and bounds and some of the leaps are made by faith"


The Berkeley professor has been awarded the Frontiers of Knowledge Award for revolutionizing the design and manufacture of chips

Chips are everywhere.

“Not only in objects, but also in ourselves”, says Alberto Sangiovanni Vincentelli (Milan, 1947), professor at the University of California at Berkeley and an authority when it comes to the revolution in electronic circuits.

They are in mobile phones, computers, cars, electrical appliances, toys and even in prosciutto, the Italian word for cured ham.

Putting a chip next to the pig's leg bone to measure the salinity and moisture of the ham—the two most important things when deciding whether it's ready to eat—was one of his ideas.

Today, smart prosciutto is a reality, as are the sensors that footballers wear to measure their performance or confirm that they are on the pitch.

Both inventions passed through his Berkeley office "about twenty years" ago.

So did Elon Musk's brain chips, something that a colleague of his at the American university developed “a long time ago”.

Sangiovanni Vincentelli was ahead of his time with his ideas and created the tools that make these technologies possible today.

He has been awarded the BBVA Foundation Frontiers of Knowledge award for his contribution to the design and improvement of chips, which are present in today's electronic devices.

For the last 50 years he has been a leading figure in transferring knowledge between academia and industry, helping transform the chip business around the world.

The researcher speaks with EL PAÍS by videoconference from a mountainous town in western India, where he has traveled for a meeting of directors of one of his ten companies.

Sangiovanni Vincentelli also founded the companies Cadence and Synopsys, references in the global electronics industry, whose programs are used to design today's chips.


How does it feel to see your work used all over the world?


It's incredible.

At some point, it starts to feel natural.

People design components with your tools, and it's like being the person who invented the hammer.

Everyone uses it, but whoever created it didn't think, at the time, that he was building every hammer in the world.

What we do is help people design chips that go into every object today.


When you started your work 50 years ago, could you imagine that this would happen?


Yes, without a doubt.

It wasn't hard to imagine.

It was clear that ever smaller, more powerful, and cheaper chips could be made.

And so on.

You could imagine all kinds of applications.

I mean, I couldn't foresee it all, but it was obvious that things were going in that direction.

In fact, one of the founders of Intel, Gordon Moore, postulated Moore's Law, which stipulated that the number of transistors on a chip would double every two years.

And that has remained until now.

It's unbelievable, because it was 45 years ago.

Several of us had the same idea.
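The doubling he describes compounds dramatically over decades. A short Python sketch makes the point; the 1971 starting figure of roughly 2,300 transistors for the Intel 4004 is a commonly cited number, used here purely as an illustration and not taken from the interview:

```python
# Illustrative sketch of Moore's Law: transistor counts doubling every two years.
# Baseline assumption: the Intel 4004 (1971), commonly cited at ~2,300 transistors.

def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Projected transistor count, assuming one doubling every `doubling_years`."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1991, 2011, 2021):
    print(f"{year}: ~{transistors(year):,.0f} transistors")
```

Twenty-five doublings turn a few thousand transistors into tens of billions, which is roughly the scale of today's largest chips.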


Is it physically sustainable to maintain this parameter?


We have said many times that we couldn't go any further.

I remember a good colleague of mine and an authority on technology, Professor James Meindl, who, when transistors were close to one micron [equivalent to one millionth of a meter], said in a speech: “This is it, we can't go any further”.

But now we're down to a nanometer [a millionth of a millimeter], which is a few atoms stacked one on top of the other.

So there's not much more that can be done.

The devices are already so small that they no longer behave like transistors; they become stochastic components, where you can't really predict what will happen.



What are the complications?


The first is the manufacturing process, how to make something so small.

Light no longer reaches that scale, and manufacturers have begun to use [electron beams] and lasers to get down to a nanometer.

But below that, it's not clear what can be done.

The second is that, at such a small size, the device no longer behaves predictably.

The last is cost: is it too expensive, and is it really worth it?

Currently, a new three-nanometer manufacturing line costs around five billion dollars.

These three things reveal that we are approaching the physical limits, but not the end of the capacity of microelectronics.

We can do something called multichip packages.


Would it mean combining several chips instead of having a single, more powerful one?


This is what we have been doing since the 1940s, when the transistor was invented.

Put the components on top of the substrate and connect them together.

On the other hand, making a single integrated circuit on a single chip is interesting for performance, because it runs faster, consumes less power, and may even be cheaper in terms of manufacturing.

But when it's too expensive to develop, you have to go back and use the old formula.

Now, what you want to do is link several bare chips together.

Instead of having each one encapsulated, wrapped in plastic, and plugged into the board, you use it just as it comes off the manufacturing line.

It looks like a cake, but with a thousand layers.

The distance between them is very short, so the performance is not that bad; it's a compromise solution.

Now they are called chiplets, which is a funny name for the multichip modules we started thinking about 30 years ago.


Why wasn't that road taken back then?


There were companies that tried to do it, but they all failed.

And the reason is that the technology was advancing very quickly.

By the time there was the multi-chip package, there was already the single chip that contained everything.

Even if a bare-chip version existed, the single chip was better.

But now we can't squeeze in any more, so we must return to where the technology began.


Is machine learning overrated in the microprocessor arena?


Yes, totally.

Unfortunately, technology advances by leaps and bounds and some of the leaps are made on faith.

Something sounds good, it gets hyped, and what it can do is extrapolated.

Machine learning is just one chapter of artificial intelligence.

Imagine that there is a black box and you want to decipher it.

So an experiment is done: you put something in it, see what comes out, and then try to guess what's inside.

And here comes the difference between physics and machine learning, which in my opinion, is just one approach to identification.

What you did in the old days with a black box was try to figure out what was the physics inside.

You observed the phenomenon and asked what could explain it.

This is a key point.

Our mind, with modern mathematical models and experimentation, tries to explain why.

Artificial intelligence can't do it, because it doesn't know the mechanism behind why a device behaves in such a way.
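The black-box experiment he describes can be sketched in a few lines of Python: probe an unknown system with inputs, record the outputs, and fit a model that predicts behavior without explaining the mechanism. This is a minimal illustration of the identification idea, not his actual method:

```python
import numpy as np

# An unknown "black box": the experimenter can only feed it inputs
# and observe the outputs; the mechanism inside is hidden.
def black_box(x):
    return 3.0 * x ** 2 + 1.0  # hidden physics, unknown to the experimenter

# Experiment: put something in, see what comes out.
inputs = np.linspace(-2.0, 2.0, 50)
outputs = black_box(inputs)

# Identification: fit a model to the observed data.
coeffs = np.polyfit(inputs, outputs, deg=2)
model = np.poly1d(coeffs)

# The fitted model predicts well, but it says nothing about
# *why* the box behaves this way -- exactly his objection.
print(model(1.5), black_box(1.5))
```

The fit agrees closely with the box on new inputs, yet the coefficients alone carry no explanation of the underlying mechanism, which is the distinction he draws between physics and machine learning.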

But if you want to search the web with ChatGPT that's fine, there's no problem there.

If you miss something, no one dies.

But if the failure is in autonomous driving, that's another story, a person can die.


How far can this intelligence go?


A colleague of mine started asking it leading questions and at one point ChatGPT spouted nonsense.

You can trick it.

As for a more intuitive way of using the internet, sometimes I get annoyed: if I'm in Singapore or India and I search for where I can watch AC Milan against Tottenham, I get all the local channels.

But that doesn't interest me, because I want to know which Italian channel shows it.

Even if I say “in Italy” it still gives me all the channels from India.

However, if I tell ChatGPT "I want to know the channels that show AC Milan in Italy" it will do it without any problem.

It eliminates the frustration of spending time searching for something that could have been done more intelligently.

And you can apply it interactively; I would love to have it in something like Alexa.



What would you like to see in the next few years regarding this type of technology?


The key point is to understand what its limitations are.

Instead of saying that everything is good, we should ask what the negative side of the technology we are developing is.

There is always a downside, but people don't talk about it.

To some extent I try to do as much as I can; it would be good for everyone to study the pros and cons.

As for machine learning or ChatGPT, they are excellent.

But you have to consider what the negative side is, what can be guaranteed with this technology, where it is best used and what it should not be used for.

Think about CRISPR-Cas9, which allows us to modify our genes in a precise way.

It can eliminate genetic diseases, but it could also become something like the Nazi project, everyone born with blue eyes; you could do that now, by the way.

How do we make sure something like this doesn't happen?


Source: El País, 2023-03-09
