A US lawyer is facing possible sanctions after he used the popular ChatGPT to write a brief and discovered that the artificial intelligence (AI) application had invented a whole series of supposed legal precedents.
According to The New York Times, the lawyer in trouble is Steven Schwartz, counsel in a case being heard in a New York court: a lawsuit against the airline Avianca filed by a passenger who claims he was injured when he was struck by a service cart during a flight.
Schwartz represents the plaintiff and used ChatGPT to prepare a brief opposing a defense request to have the case dismissed.
The 10-page document cites numerous court decisions to support its arguments, including Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and Varghese v. China Southern Airlines, The New York Times reported.
The problem is that none of those cases are real. When the airline's lawyers and the judge went looking for the case law that Schwartz presented, they found nothing. It was soon discovered that OpenAI's well-known chatbot had invented the cases.
ChatGPT is a platform that generates answers to queries using artificial intelligence. The system is trained on billions of texts from the Internet and composes its responses by making predictions based on patterns in that training data. Although the texts read as realistic, there is no guarantee that the information they contain is accurate.
"The Court is facing an unprecedented situation. A filing submitted by plaintiff's attorney opposing a motion to dismiss (the case) is replete with citations from nonexistent cases," Judge Kevin Castel wrote this month.
On Friday, Castel issued an order convening a hearing on June 8 at which Schwartz must explain why he should not be sanctioned for attempting to rely on entirely fabricated legal precedents.
The judge's order came a day after the lawyer himself filed an affidavit in which he admitted to using ChatGPT to prepare the brief and acknowledged that the only verification he had carried out was to ask the application itself whether the cases it cited were real.
Schwartz explained that he had never used a tool of this kind before and that, therefore, he "was not aware of the possibility that its content could be false."
The lawyer stressed that he had no intention of misleading the court and fully exonerated another lawyer at the firm who also faces possible sanctions.
The document, seen by EFE, closes with an apology in which Schwartz deeply regrets having relied on artificial intelligence to support his research and promises never to do so again without fully verifying the authenticity of its output.
With information from the EFE agency.