Regulate me: this artificial intelligence thing is a very big deal (and it will be mine)

2023-06-03T22:43:30.199Z



The catastrophism of the kings of this business tries to distract us from the abuses that technology companies are already committing. And it seeks to raise barriers against competitors.


Regulate us now, say the heads of artificial intelligence. What we have in hand is a very big deal, so revolutionary that machines will displace us humans, they will enslave us, this could end in the extinction of the species. Regulate us now, they say, like the arms companies: we want to be inspected, to operate only under license... Regulate us, they do not say this part, to raise barriers against small businesses and collaborative open-source projects; to prevent any company or organization from having an AI system tailored to it unless it has first passed through our cash register.

This is happening: the promoters of AI themselves, led by the man of the moment, OpenAI's creator Sam Altman, along with the chief executives of Google DeepMind and Anthropic, are the ones in the biggest hurry to regulate their own activity. With this, first of all, they give themselves a great deal of importance: it is pure marketing. And it is still daring to call what these algorithms do intelligence, and what is fed by us flesh-and-blood people is not entirely artificial either. The dreaded Artificial General Intelligence, which would condense all the knowledge of humanity and surpass all the capabilities of mortals, remains a distant dream (or nightmare). But this field will make rapid leaps, there is no doubt about that.

We are on the way to what they call web3 (decentralized, democratic, free of the control of large corporations) suffering the same fate as web2 (that of social networks), which was also going to empower citizens and only reinforced the oligopoly of digital services. What has happened so far is the "winner takes all" effect, which besides being a nice ABBA song is the rule that has led to an excessive concentration of power in a handful of companies. That is why, broadly speaking, Google dominates web browsing; Amazon, e-commerce; Microsoft, operating systems and PC software; Apple, the chic segment of devices. Facebook (Meta) was one of those winners, almost hegemonic in social networks, but the emergence of rivals like TikTok and its foolish all-or-nothing bet on the metaverse have knocked it out of the elite. Nvidia has now joined the club of trillion-dollar companies, thanks precisely to its advances in AI.

What is at stake is who will be the winner who takes it all in AI. Microsoft, through its alliance with OpenAI (creator of ChatGPT), is well placed; Google is waking up because its search business is threatened; and Nvidia claims its place among the big players with a lower-profile but very solid track record in graphics processing and high-performance computing. That is in the West: the giants of Asia are going to take a good piece of the pie.

Should AI be regulated? Of course! Let's not arrive as late as we did with social networks, which are a jungle today. Laws and regulations must protect the rights and privacy of citizens, avoid mass and universal surveillance, prevent disinformation and political manipulation campaigns more effective than those we already suffer, and tackle discrimination. In particular, the protection of intellectual property will have to be regulated, because AI swallows all kinds of information that is not its own in order to make it its own. Not only are the copyrights of creators, who already suffered a plague of piracy around the turn of the century, at risk; your own data and your own personal image are yours, and an app should not be able to appropriate them.

And one of the most delicate questions to settle is which decisions can be entrusted to an AI and which cannot: do we allow machines to decide the selection of personnel, the granting of mortgages, the parole of a prisoner? Do we let autonomous military or police machines choose whether to shoot at a target? These are all very urgent debates, and they must lead to swift decisions. But should we stipulate that only a handful of large companies may operate with artificial intelligence, through licenses? Quite the contrary: legislation should stimulate competition rather than repeat past mistakes.

Some say: we will not be able to regulate AI much because even its own engineers do not fully understand how a machine that learns on its own works. A flimsy argument: there is no need to get into the guts of very complex programs; it is enough to examine (evaluate, audit) their results. And, for the moment, a contraption like ChatGPT surprises us with its more or less natural use of language (although it does this better in English), but not much more. It does not give accurate information, it invents much of what it says, and it makes big mistakes that would be unacceptable in any profession. And AI, as is well known, inherits human biases through the information and parameters it has been fed: prejudices of gender, ethnicity, class and many more.

The catastrophism that imagines a tyranny of machines in a dystopian future sounds very frightening, but it responds to more mundane interests. This debate about the apocalypse distracts us from the abuses that these still rudimentary technologies are already committing, including a not always obvious extraction of other people's talent. Let's regulate AI, of course. But not at the dictation of its owners.

Ricardo de Querol is the author of 'La gran fragmentación' (Arpa).

You can follow EL PAÍS Tecnología on Facebook and Twitter or sign up here to receive our weekly newsletter.
