Legal Filing Written by ChatGPT, Roundup - May 22nd - 28th, 2023 - F5 SIRT - This Week in Security



Introduction

Hello again, Kyle Fox here.   This week has been somewhat quiet, so I only have one main story and the usual round-up.   
 

Lawyers File Motion Written By AI, Face Sanctions and Possible Disbarment

 

As reported in the New York Times, in Mata v. Avianca, a case arising out of an injury on an Avianca Airlines flight in 2019, a lawyer submitted an argument generated with ChatGPT that cited nonexistent case law.   The document appears to be an affirmation in opposition to a motion to dismiss, the motion having been based on the statute of limitations barring untimely filings of civil complaints.   The document can be found on Court Listener as entry #21 on the docket.
 
ChatGPT is a large language model trained on a vast quantity of web content from across the internet.    Because it is a language model and not a facts model, it will at times make up facts to support an argument, a behavior AI professionals refer to as hallucination.   And because it is derived from the internet as a whole, it may also regurgitate facts that were wrong in its training data, so even statements it did not invent can be invalid.    Legal arguments require an accurate summary of each cited case and a precise citation for it, and a large language model is not particularly good at producing either.
 
In this incident, the lawyers involved doubled down after questioning from the judge and opposing counsel, submitting further queries to ChatGPT in which it insisted it was not lying and that the nonexistent cases were real.    On some level this mirrors the behavior of people who argue on the internet, as anyone used to reading threads on sites like Hacker News or Reddit has seen.    The doubling down has only increased the judge's concern about these filings, and the court has ordered a hearing for the attorneys to show cause why they should not be sanctioned for this behavior.  That court action is expected to play out over the next few weeks.
 
The situation has been reviewed by federal copyright lawyer Leonard French as well as automobile lemon law lawyer Steve Lehto.    Both have commented on the potential consequences for the lawyers involved in this case: the immediate ones are possible contempt charges and dismissal of the case.  Less immediate are bar association actions, which may result in the disbarment of the attorneys in question, since a lawyer is ethically obligated to perform to the best of their abilities, and letting ChatGPT do your homework without even checking it, then signing off on the result, is nowhere near that level of care.
 
This further reinforces the warning that should, and does, come with ChatGPT: like the Markov chains before it, it is a language model trained on unvetted data, so it may repeat false information from that data or assemble language into new false information.  Due care is needed every time it is used to make sure it has not introduced bad data, bad language, or bad code.
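The Markov chain comparison is a handy way to picture the failure mode. Below is a minimal illustrative sketch in Python, using a tiny made-up corpus (not from this case or any real training set): a chain only learns which words tend to follow which, so the text it produces can sound fluent and plausible while being unmoored from any real source. ChatGPT is vastly more sophisticated, but the underlying point is the same, which is why its output always needs checking.

import random
from collections import defaultdict

# Toy corpus standing in for unvetted training data. Entirely made up for
# illustration; the model below learns only word adjacency, not truth.
corpus = (
    "the court held that the airline was liable for the injury "
    "the court held that the claim was barred by the statute of limitations "
    "the claim was dismissed because the filing was untimely"
).split()

# Build a first-order Markov chain: each word maps to the words seen after it.
chain = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    chain[current_word].append(next_word)

def generate(start: str, length: int = 12) -> str:
    """Walk the chain, picking a random observed successor at each step."""
    word, output = start, [start]
    for _ in range(length):
        successors = chain.get(word)
        if not successors:
            break
        word = random.choice(successors)
        output.append(word)
    return " ".join(output)

print(generate("the"))
# Possible output: "the court held that the claim was dismissed because the
# filing was untimely" -- fluent and plausible, but supported by no real citation.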
 

Round-up:

Until Next Time

If you're new to This Week In Security, you can check out past issues.  You can also view the wider library of content published by the F5 SIRT, which covers a variety of subjects.

Published Jun 07, 2023
Version 1.0
