James A. Byrne U.S. Courthouse in Philadelphia
PHILADELPHIA – Over a colleague’s objections, two federal appellate judges issued a light punishment to a Pennsylvania lawyer who submitted a brief littered with mistakes from using artificial intelligence.
There will be no financial penalty for Daniel Pallen, whose work on a lawsuit against the Drug Enforcement Administration irked federal judges who were handed a brief, prepared by a non-lawyer, that contained summaries of previous cases that did not exist.
Pallen is far from the first attorney to have this trouble – something Judge Jane Richards Roth noted when she called for a tougher punishment than the reprimand ordered last week by the 2-1 majority.
After the DEA pointed out the errors to the court, Pallen “put his nose in the air” and attacked the government for disregarding “the forest for the trees,” Roth wrote. This behavior should have been enough for monetary sanctions, she said, despite the lack of case law on when they are appropriate in similar cases of AI hallucinations.
“I agree with my colleagues that such technology may be useful when used with proper supervision and vetting,” Roth wrote.
“But punishing an attorney for failure to verify information obtained from AI is consistent with the standard to which attorneys historically have been held. No forewarning is necessary when it is clear what standard the attorney was required to follow.
“The ethical practice of the law is innate in the responsibilities of each practicing attorney. It needs no reminder as each case is accepted and resolved.”
Pallen, whose office is in Media, challenged DEA proceedings after the agency revoked the Certificate of Registration of a physician assistant. In September 2024, his brief included summaries of eight DEA adjudications intended to show the agency's handling of his client was inconsistent.
Those case summaries were created by AI and were "riddled with factual and legal inaccuracies," and one of the cited cases "simply did not exist," Judge Cindy Chung wrote for the majority.
The government pointed this out in a response brief, but Pallen did not investigate. His reply brief said any errors were part of “a good faith effort to chronicle Agency disparities.” In February 2025, he suspected the mistakes were generated by AI but took no action.
Three months later, the Third Circuit ordered copies of the cases he cited, leading Pallen to finally admit AI had generated the false summaries. Chung wrote that the Third Circuit is "deeply troubled by" his "cavalier stance towards his various submissions," but found that Pallen did not violate his duty to provide competent representation to a client.
“As Attorney signed his brief and submitted it as an officer of the Court, this Court initially credited Attorney with earnest, but mistaken, efforts in offering legal authority to this Court,” Chung wrote.
“It was highly disappointing to learn that Attorney’s status as an officer of the Court did not prevent him from blindly submitting erroneous authority and to learn that this Court’s confidence in him was misplaced.
“Attorney’s conduct is somewhat mitigated by the actions he took after our show-cause order. Attorney has displayed sincere contrition. He has been forthcoming and admitted his many failures to this Court without minimizing his conduct.”
