
The creators of ChatGPT admit artificial intelligence's biggest problem

2023-06-07T10:21:28.155Z

Highlights: OpenAI says it is working to correct the chatbot's 'hallucinations'. The way generative text AIs work can sometimes lead to what is described in jargon as a hallucination, which occurs when a machine provides a convincing but completely invented answer. OpenAI engineers are working on ways to reward their AI models for generating correct data at each step toward an answer, rather than rewarding only the final conclusion. Meanwhile, as OpenAI itself notes, ChatGPT can occasionally generate incorrect information, so it is essential to verify its answers.


OpenAI says it is working to correct the chatbot's 'hallucinations'.


Although ChatGPT impresses with the technical skill it displays in its answers, the software is not free of errors: on occasion it fails in ways that show it is a technology still in progress, with a wide margin for improvement.

The way generative text AIs such as ChatGPT work can sometimes lead to what is described in jargon as a hallucination, which occurs when the machine provides a convincing but completely invented answer.

The ChatGPT-based version of Bing soon began producing these hallucinations, even more so than the original ChatGPT, which had already 'hallucinated' by responding with detailed but false data to questions such as the record for crossing the English Channel on foot, or the last time the Golden Gate Bridge was transported across Egypt.

To make their chatbot technology more reliable, OpenAI engineers have revealed that they are currently focusing on improving their software to reduce and hopefully eliminate these problematic occurrences.

The challenge for engineers is to make ChatGPT more reliable. AP Photo.

A group of AI researchers at OpenAI revealed the company's plans to curb these hallucinations. They explain that "even the most advanced models are prone" to produce them, since they tend to "invent facts in times of uncertainty."

"These hallucinations are particularly problematic in domains that require multi-step reasoning, as a single logical error is enough to derail a much larger solution," they added.

They also say that if the company's goal is to eventually build an artificial general intelligence (a concept commonly known as AGI), mitigating this type of hallucination is critical.

To address chatbot errors, OpenAI engineers are working on ways to reward their AI models for each correct step they take toward an answer, rather than rewarding only the final conclusion.

This model was trained on a dataset of more than 800,000 human-generated labels, and in early tests it achieved results far superior to those of models supervised only on their final results.
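The difference between the two training signals can be sketched in a few lines. This is a toy illustration, not OpenAI's actual method: the reward functions and the step scores are invented for the example, and real process supervision would use a learned model to score each reasoning step.

```python
# Toy contrast between outcome supervision (reward only the final answer)
# and process supervision (reward every reasoning step).
# All values here are illustrative, not from any real training run.

def outcome_reward(step_scores, final_correct):
    """Outcome supervision: 1.0 only if the final answer is correct."""
    return 1.0 if final_correct else 0.0

def process_reward(step_scores):
    """Process supervision: average of per-step correctness scores."""
    return sum(step_scores) / len(step_scores)

# A reasoning chain where one early step is a logical error (score 0.0):
scores = [1.0, 0.0, 1.0, 1.0]

# Outcome supervision gives no signal about WHICH step derailed the answer.
print(outcome_reward(scores, final_correct=False))  # 0.0
# Process supervision localizes the fault to the second step.
print(process_reward(scores))                        # 0.75
```

This is why the researchers single out multi-step reasoning: a single bad step can sink a final answer, and a per-step reward tells the model exactly where it went wrong.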

And while it is encouraging that OpenAI is working to fix this flaw, it could be a while before the changes take effect. Meanwhile, as OpenAI itself says, ChatGPT can occasionally generate incorrect information, so it is essential to verify its answers whenever they are part of any important task.

Refine questions to minimize error

How to avoid double meanings or mistakes. Photo REUTERS

ChatGPT, like its competitors, needs users to be clear and avoid rambling; otherwise the results will be generic and inaccurate.

It is also important to steer the answer: if you need a recommendation about a Renaissance painter, it is essential to include all the period details in the request so the chatbot understands where your interest lies.

All this matters because AI tends to fail easily when context is lacking; for example, writing a script for a YouTube video is not the same as writing one for a television program.
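The prompting advice above can be made concrete with a small helper that assembles the missing context explicitly. The function and its parameter names are hypothetical, invented for this example; the point is only that period, audience, and format are stated rather than left for the chatbot to guess.

```python
# Hypothetical helper illustrating the article's advice: spell out the
# context (period, audience, output format) instead of asking vaguely.

def build_prompt(task, period=None, audience=None, output_format=None):
    """Assemble a request with explicit context so the model need not guess."""
    parts = [task]
    if period:
        parts.append(f"Focus on the {period} period.")
    if audience:
        parts.append(f"The answer is for {audience}.")
    if output_format:
        parts.append(f"Answer as {output_format}.")
    return " ".join(parts)

# Vague request — likely to produce a generic answer:
print(build_prompt("Recommend a painter."))

# Directed request — the period and format leave much less room to guess:
print(build_prompt(
    "Recommend a painter.",
    period="Italian Renaissance (c. 1480-1560)",
    output_format="a short list with a one-line justification per name",
))
```

The second prompt encodes exactly the kind of period data the article recommends including, which narrows what the chatbot can plausibly invent.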



Source: clarin
