The Limited Times


Because of ChatGPT, an American lawyer cites rulings... that never existed

2023-05-29T09:10:32.174Z

Highlights: A New York law firm gave the judge a brief riddled with errors. Some of the cited case law is fanciful. The author admits to having used ChatGPT, unaware that the chatbot is not foolproof. The two lawyers, Steven A. Schwartz and Peter LoDuca, are summoned to a hearing on June 8 for possible disciplinary proceedings against them. Schwartz promised the court that he would no longer use ChatGPT for research without verifying for himself that the rulings proposed by the artificial intelligence actually exist.


A New York law firm gave the judge a brief riddled with errors: some of the cited case law is fanciful. The author admits to having used ChatGPT, unaware that the chatbot is not foolproof.


Originally, it was just a routine lawsuit between an individual and an airline he accuses of being responsible for injuries he says he sustained. But as the New York Times reveals, the airline's lawyers were taken aback by the brief filed by the plaintiff's lawyers: among the judgments cited as case law to support their claim were several cases that simply never existed.

The New York judge in charge of the case, P. Kevin Castel, then wrote to the plaintiff's lawyers to ask for explanations: "six of the judgments invoked refer to false court decisions and mention false citations," he observes.

The law firm Levidow & Oberman told the court that it was not the plaintiff's lawyer, Peter LoDuca, but one of his associates, Steven A. Schwartz, who wrote the brief sent to the court. Despite more than thirty years of experience as a lawyer and a solid knowledge of the law, Schwartz admitted that he had used ChatGPT, the artificial-intelligence chatbot that converses with users and produces text on demand, to do his research.


ChatGPT had cited its sources.

Schwartz, who expressed "immense regret" to the court when he realized his mistake, explained that he had never used ChatGPT before and was unaware that some of the answers the algorithm provides are fabricated, and therefore false. ChatGPT does, however, warn its users that it sometimes risks "providing erroneous information".

The lawyer sent the court screenshots of his exchanges with ChatGPT, showing that the chatbot had confirmed to him that one of the fanciful judgments had indeed existed. When the lawyer asked it what its sources were, the artificial intelligence cited LexisNexis and Westlaw, two databases referencing court decisions. Yet entering "Varghese v. China Southern Airlines Co Ltd" (the name of one of the cases cited in the brief) into the LexisNexis search engine returns no results.

The two lawyers, Steven A. Schwartz and Peter LoDuca, are summoned to a hearing on June 8 for possible disciplinary proceedings against them. Schwartz promised the court that he would no longer use ChatGPT for research without verifying for himself that the rulings proposed by the artificial intelligence actually exist.

Source: lefigaro
