The Limited Times


The danger isn't artificial intelligence, it's OpenAI

2023-05-19T04:29:03.992Z

Highlights: Sam Altman, co-founder of OpenAI, has called for more regulation of AI. Marta Peirano argues that Altman drugs regulators with apocalyptic fantasies to disguise the risks of capitalism, and that he presents himself to Congress as the benevolent guardian of a new species of amazing creature: if we help him tame that unicorn, we can achieve capitalist nirvana and prevent the apocalypse. His proposal, she writes, clearly mimics the non-proliferation of nuclear weapons.


Sam Altman drugs regulators with apocalyptic fantasies to disguise the genuine risks of his fast-paced version of capitalism


There are eight million intelligences on the planet, but human beings can identify only two. The first is our own, characterized by access to complex cognitive processes such as reasoning, problem solving, learning, creativity, emotional competence, social awareness and adaptability. The second is a generative software called ChatGPT, whose characteristic virtue is that it speaks to us in our own language. We call this intelligence "artificial" intelligence. It interests us more than any other.

Unlike the rest, artificial intelligence cannot emerge just anywhere. It cannot be born in a stable on the way to Egypt, surrounded by oxen and sheep. Nor in places as inhospitable and varied as the depths of the ocean, saline deserts, Arctic regions or the acidic, hot belly of a volcano. For a model like ChatGPT to be born, it needs powerful computers with vast processing and memory resources, scalable data storage, integrated development environments, specialized frameworks and software libraries, reliable network connections and an impressive diet of content in the form of databases.

An AI needs space, maintenance, cooling and electricity. With that level of autonomy, it is unlikely for now to surprise us with an ambush, or to thrive unnoticed like a pandemic virus in the favelas of Manaus or a mold colony in the respiratory passages of a hospital. And yet that is the danger that should concern us when legislating its development, at least according to the chief executive of OpenAI. Sam Altman told the Senate Subcommittee on Privacy, Technology, and the Law on Tuesday that his company plans to build and release increasingly dangerous systems, and that it needs senators' help to ensure the transition to superintelligence happens without putting humanity in danger. He proposes that AI be regulated to prevent a problem that currently exists only in literature and cinema: the Singularity.

The next day's headlines amplified the message: "OpenAI co-founder calls for more regulation of AI." It is surprising for an executive to ask for regulation in an industry famous for its resistance to oversight. That same week, Eric Schmidt, former CEO of Google and principal adviser to the Department of Defense on the development of AI, said on an NBC News program that "there is no one outside the industry capable of understanding what is possible. No one in government can do it right." But there is a very recent precedent: when Congress wanted to regulate cryptocurrencies, it called Sam Bankman-Fried. The founder of FTX embraced the regulation of his crypto business so publicly that, when it came time to write the laws, regulators turned to him. It helped that he had earned Washington's trust by publicly donating millions of dollars to Democratic campaigns, and secretly donating an equivalent amount to Republican ones. With that strategy he set out to tailor crypto regulation to his needs through the Digital Commodities Consumer Protection Act (DCCPA). That is now the strategy of Washington's new front-runner, Sam Altman: to tailor a new AI regulation to his own.

Riding the myth of the Singularity, Altman presented himself to Congress as the benevolent guardian of a new species of amazing creature, a wild unicorn capable of taking us to a magical world, one that must be tamed so it does not pierce us with its powerful multicolored horn. "We understand that people are anxious about how AI can change the way we live," he graciously conceded. "So do we." The wizarding world is a future of factories and offices without workers; a world without unions or strikes, tailored to the employer and at the convenience of the consumer. If we help him tame that unicorn, we can achieve capitalist nirvana and prevent the apocalypse. Along the way, however, OpenAI has already privatized the contents of the Web without permission, infringing pre-existing laws such as intellectual property, and used the profits of that plunder to help other industries monitor and degrade the working conditions of their workers, as the screenwriters of the Writers Guild have understood. Not to mention the existential crisis we actually face right now: training GPT-3 consumes hundreds of times the energy of a home and produces an estimated 502 metric tons of CO₂. But that is not the regulation Altman is asking for. Who cares about the environment, intellectual property or labor rights in the face of the Singularity?

The congressional audience was wonderfully receptive. "What two or three reforms or regulations would you implement, if you were going to implement any, if you were queen for a day?" Altman wants licenses, and an agency to grant them and to control the development and use of AI, so that no unlicensed model can "self-replicate and self-exfiltrate into the wild." The proposal clearly mimics the nuclear non-proliferation treaty and would favor a monopoly of giants such as Google, Meta, Microsoft, Anthropic and OpenAI over the open, collaborative models being built around the world. "The U.S. must lead," Altman says, "but to be effective, we need global regulation."

A recent report by the Corporate Europe Observatory, a group that monitors corporate lobbying in the EU, denounced the intense pressure campaign these giants have mounted to intervene in the new European law on artificial intelligence before it was approved by members of the European Parliament last week. The final draft states that generative models such as ChatGPT will have to disclose whether they have been trained on copyrighted material, and that text or image generators, such as Midjourney, will have to identify themselves as machines and mark their content so that it can be recognized as synthetic. It also sets out special transparency rules for systems that qualify as high-risk, such as algorithms used to manage a company's workers or for border immigration control by a government. Those systems will have to meet risk-mitigation requirements, such as disclosing the data used to train them and the measures taken to correct biases. But lawmakers have removed the important requirement that models be audited by independent experts, and are proposing instead the creation of a new AI body to serve as a centralized enforcement and oversight hub.

An important detail: AI models are not shielded, as platforms were, from responsibility for the content that circulates through their servers. They are not covered by Section 230 or its equivalents in other parts of the world. They must ensure that their tools do not produce content related to child abuse, terrorism, hate speech or anything else that violates European Union law. Yet ChatGPT already produces much of the propaganda that intoxicates social networks with the aim of manipulating democratic processes. If Sam Altman manages to dodge that kind of responsibility in the U.S. as well, it is unlikely that we will be able to rein him in here.

Marta Peirano is a technology specialist and author of the books El enemigo conoce el Sistema and Contra el Futuro (both in Debate).


Source: elparis

