John Minor Wisdom United States Court of Appeals Building in New Orleans
NEW ORLEANS - A lawyer was sanctioned for filing a brief riddled with AI-generated errors – in a case where her firm was fighting sanctions for filing a poorly researched complaint.
The U.S. Court of Appeals for the Fifth Circuit, citing a growing problem of lawyers using artificial intelligence to draft briefs, ordered Dallas attorney Heather Hersh to pay $2,500 for submitting a brief filled with “hallucinations,” or nonexistent court citations.
Hersh might have avoided the sanction if she had come clean with the court, but she first denied using AI, then said she only used it to clean up her prose and the fake court citations were from “publicly available sources.”
“Had Hersh accepted responsibility and been more forthcoming, it is likely that the court would have imposed lesser sanctions,” the appeals court said in a Feb. 18 order by Judge Jennifer Walker Elrod. “However, when confronted with a serious ethical misstep, Hersh misled, evaded, and violated her duties as an officer of this court.”
Hersh submitted the brief in an appeal of a $33,000 sanction a lower court imposed on her firm, Jaffer & Assoc., for failing to investigate its client’s claims in a suit against Experian and Bridgecrest Credit filed in Houston federal court. Her brief included multiple nonexistent court citations and attributed language to real cases that did not appear in them.
When asked if she used AI to produce the brief, Hersh first said she had drawn the citations from public databases including CourtListener, Justia and FindLaw. But when the court investigated those sources, it could find no examples of Hersh’s citations.
The lawyer then admitted she used AI but continued to tell the court her citations were from public sources.
“Believing that response to be incredible on its face, the court directed Hersh to answer additional questions,” the order said. Her answers only reinforced the court’s conclusion that the false citations were AI hallucinations Hersh had failed to check.
Courts around the country have been sanctioning lawyers for using AI without checking its output, but the problem continues to grow. The first high-profile case emerged in 2023 in New York, where a lawyer submitted an error-filled brief that even placed federal appellate Judge Patrick Higginbotham on a panel in a nonexistent case.
The Fifth Circuit proposed a new rule requiring lawyers to affirm they had not used AI to write briefs, or, if they had, to certify they had checked all the citations. The rule was withdrawn after public comments suggested the existing disciplinary process was adequate to address shoddy lawyering.
“It is a problem that is getting worse—not better,” the Fifth Circuit said, however. “If it were ever an excuse to plead ignorance of the risks of using generative AI to draft a brief without verifying its output, it is certainly no longer so.
“To ethically use generative AI in the practice of law—which we do not dispute can be helpful if done properly and carefully—a lawyer must ‘ensure that the legal propositions and authority generated are trustworthy.’”
