In all my years in the legal world, I have never seen anyone use the "I'm too dangerous, stop me" argument to avoid regulation. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," affirm the signatories, among them Demis Hassabis, CEO of Google DeepMind, and Sam Altman, CEO of OpenAI (currently on tour through Europe), along with a long list of AI executives and expert scientists, in a statement published on the website of the Center for AI Safety, a non-profit whose website reveals neither when it was founded nor who pays its bills.
I read this and can only think: well, turn it off then, and let us get on with our lives from one election to the next; we have enough daily existential risks without also dealing with the end of the world at the hands of Skynet. If we declared a state of alarm because air traffic controllers wanted to go on vacation, I don't know why Biden isn't sending the National Guard to OpenAI's headquarters to switch off the servers. If Bruce Willis were in charge, he would have put an end to all this nonsense already, but it seems all we have are naïve bureaucrats falling for an old street con.
Let us apply the logic of impending collapse. In the case of the nuclear threat, the risk is mitigated by a balance of forces, under the principle of mutually assured destruction. I like principles like that, grounded in a deep knowledge of human nature, the kind that speak to the amygdala and work well in a conflict. The risk of pandemic, as we have experienced firsthand, is multifactorial in its causes and its culprits, and therefore hard to mitigate. So, since we are unable to put effective measures in place to limit its occurrence (even though such measures exist), what we do instead is react by developing new vaccines that, if they work properly, will mitigate the risk of a pandemic happening again.
With all their flaws, these risk-mitigation systems seem rational, grounded in experience and science. But when it comes to solving the existential problem of a killer AI, instead of doing the logical thing (switching it off, disconnecting it from the internet, banning it from operating, or throwing it in the trash), our reaction is to conduct an audit, and a voluntary one at that, lest we anger HAL ahead of schedule and he rains down fire and killer frogs upon us, casting us as protagonists of the twenty-fourth zombie series. I confess that this logic, or its absence, escapes me. And I confess that I confess it because the heads of the technology companies are very clever shell-game hustlers who have been stringing us along for a few decades now.
Before, it was enough for them to put "innovation" at the start of a sentence to convince rulers more interested in conquering the battlefield than in the rights of citizens. The logic of letting things grow first and regulating later runs through every industrial and social revolution, and therefore has a rational basis, even if it has failed miserably this time. In that rational risk analysis, economic growth, control of key sectors and keeping the US in the lead were weighted well above individual rights. I don't like it, and they got the long-term calculation wrong, but it follows a rational methodology. When we think about existential risks, we prefer that everyone leave their emotions at home.
Now that regulators and governments are no longer in an expansionist cycle with respect to technology but in a regulatory one, the technomoguls know that innovating is much more complicated. And that's a blow when you have bet your money on a technology that is slow to mature but explodes and expands rapidly. What bad luck, Sam Altman must have thought: the generative-AI explosion didn't catch me back in the early two thousands, when you could do whatever you wanted. So, clever boy, he has fired a shock-doctrine volley at us and remained utterly unruffled. Naomi Klein studied this very well in her book of the same title: if you scare people enough, they will let you do the unthinkable. She illustrated it with the disaster of applying "fright or death" in Pinochet's Chile, a test laboratory for the ultraliberal economic policies of Friedman and the Chicago School. It doesn't matter how many minority blogs explain how they are distracting us from what matters. The scare has been delivered, and Bruce is neither here nor expected.
You can follow EL PAÍS Tecnología on Facebook and Twitter or sign up here to receive our weekly newsletter.