
Endangering security and the economy: AI's lies should scare us

2023-05-24

An Australian politician discovered that ChatGPT had falsely described him as convicted of criminal offences he never committed. This is just one example of what can go wrong with the uncontrolled use of artificial intelligence.


So who is responsible for the information that an AI tool generates? (Photo: Shutterstock)

ChatGPT, OpenAI's artificial intelligence tool, introduced us to a revolutionary technology: a model trained to understand the human intent behind a question. Its ability is astounding, above all because of the seemingly human quality of what it produces. The tool predicts answers using an approach called generative AI, one that does not merely process data but creates new content, at a quality almost indistinguishable from human work. An additional layer of training on dialogue and feedback helps the tool learn to follow instructions and produce the responses that different kinds of users are looking for.
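To make the idea concrete, here is a minimal, invented sketch of how such a model chooses its next word: it assigns probabilities to candidate continuations and samples one. This is an illustration only; the words and numbers below are made up, and real systems like OpenAI's are vastly larger and more sophisticated.

```python
import random

# Toy next-word probabilities a language model might assign after the
# prompt "The politician was". These words and numbers are invented for
# illustration; they are not taken from any real model.
next_word_probs = {
    "cleared": 0.30,
    "convicted": 0.25,  # fluent and plausible, but potentially false
    "praised": 0.25,
    "questioned": 0.20,
}

def sample_next_word(probs):
    """Pick the next word in proportion to its assigned probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print("The politician was", sample_next_word(next_word_probs))
```

Because the continuation is drawn by probability rather than checked against facts, a fluent but false sentence such as "The politician was convicted" can emerge with no intent to deceive, which is exactly the failure mode described below.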

A plot twist

Then came Meta, which in a characteristic move began to compete, developing a model reportedly meant to be far more sophisticated. The model had not yet been officially launched, but it surprisingly found its way onto the dark web, home to murderers, pedophiles, illegal traffickers and more. With the leak of Meta's code, reports began to emerge of programmers around the world getting their hands on the model and using it to create their own engineered AI tools on personal computers. Established companies such as Meta and OpenAI are obligated by regulators to lay down clear rules defining what their AI can and cannot do. The criminal mind, in contrast, is bound by no regulation, so the leaked model can evolve into new mutations unhindered and without any binding control.

So what can we do? (Photo: Shutterstock)

Taking Responsibility

So who is responsible for the information that AI tools generate? The model lets a user ask a question and get a detailed answer. Sometimes that answer is wrong, and occasionally an outright lie. One case is that of Brian Hood, an Australian regional mayor, who was horrified to discover a libelous tale ChatGPT was telling about him, in which he had supposedly been charged with and convicted of a series of criminal offences and sentenced to prison. The tool had been fed a story about the exposure of corruption in which Hood took part, but instead of presenting him as the hero of the hour, the model mistakenly cast him as the criminal. To its credit, OpenAI displays a warning about the model's possible inaccuracies. Hood was not satisfied with that and, through his lawyers, demanded uncompromising measures: deletion of the information, correction of the model's inaccuracy, a public apology and more.

Hood isn't the only one affected by misinformation produced by AI tools. It can happen frequently, because the data is pulled from the global internet, the incubator of fake news and inaccuracies. Beyond that, the machine does not necessarily try to give us the right answer; it aims to sound natural and produce fluent sentences based on probability, as learned from what it has been exposed to on the internet. The manufacturer explicitly states that it cannot predict what the model will say. The machine does not write the content so much as "host" it.

So who is responsible when the machine tells lies and spreads offensive content? The dry legal view holds that a person who publishes content online is solely responsible for it: any consequence of posting a photo on Facebook, tweeting on Twitter or writing a talkback on our own initiative tends to be our full responsibility in the eyes of the law. Yet many legal experts around the world are struggling with how to handle the issue and regulate it. Chinese regulations explicitly prohibit the use of AI tools to spread fake news or content that could harm the economy and national security. The European Union is formulating rules that set obligations for AI tools according to the level of risk they pose, and even the US government is moving in this direction.

A Modern Librarian

If we're looking for information online, it's tempting to turn to a research librarian for help, but would we have turned to her in the first place had we known she couldn't tell the difference between fact and hallucination? Many years ago we started using classic search engines to find the content we wanted on the internet. We have now reached a time when content is spoon-fed to us by algorithms that recommend things to us and even track us. Dopamine runs rampant within us, pushing us to continue the technological interaction. The truth, however, is that excessive internet roaming can distort our view of the world, and the use of algorithms only makes it worse. It is important to understand that the danger lies not in the algorithms themselves, but in our lack of awareness of them.

So what can we do? Take a step back and consider the kinds of content we consume and how they affect our sense of reality; do not lend a hand to fake news, and always try to maintain a respectful discourse online. Limit experimentation with the AI tools that pop up in app stores like mushrooms after the rain, and if you really insist, at least make sure the maker is reputable. If we are not careful, we may be sucked into a storm of misinformation and half-truths without realizing it, and even end up harming ourselves.

The Chinese philosopher Lao Tzu argued that bad people are the work of good people. The evolution of AI creates an opportunity for us to be on the side of the good guys – for a better world.

Yuval Lev is Director of Service Management in Bank Hapoalim's Technology Division. This article is from CodeReview, the Technology Division's magazine, in cooperation with "The Technologist".


Source: walla
