The first libel suit has been filed against OpenAI, the American developer of ChatGPT, the artificial-intelligence chatbot that helps users with a variety of tasks and queries: Mark Walters, an American radio broadcaster, is suing the company after ChatGPT falsely claimed he had been accused of fraud and embezzlement of funds from a nonprofit organization.
Sam Altman, CEO of OpenAI, at Tel Aviv University. Photo: Reuters/Amir Cohen
The incident occurred when a journalist asked ChatGPT to summarize a federal case involving Walters. ChatGPT's summary mixed accurate information with false accusations: the chatbot claimed that Walters had embezzled funds from a nonprofit, though in reality he was never accused of this. The journalist did not publish the false information ChatGPT generated, but he verified the details with another party, and word reached Walters, who decided to file a libel suit.
This isn't the first time the chatbot has caused problems: a university professor threatened to "throw" his students out of his class after ChatGPT claimed they had used artificial intelligence to write their essays. In another case, a lawyer was sanctioned after using ChatGPT for legal research that produced fabricated case citations.
Wave of lawsuits against OpenAI on the way? Photo: GettyImages
In Israel, a class-action lawsuit was filed against OpenAI in April, alleging that the company does not do enough to keep minors from using the system. ChatGPT's terms of use explicitly state that users must be at least 13 years old, and that users under 18 need a guardian's permission. The plaintiffs argue that despite this condition, the registration process includes no requirement to declare or confirm that the user is over 13, or to present a parent's or guardian's authorization.