Every industry in the world is scrambling to put artificial intelligence tools to creative use, and America’s robust sector of lawyers, law firms, and jurists is no exception. 

Judges have publicly busted lawyers for filing briefs with court citations hallucinated by ChatGPT, AI-generated attorneys have pleaded for their clients and faced ridicule online, and AI videos built from court transcripts have rather ingeniously brought to life cases where no cameras were allowed.

Courts around the country face disruptive challenges brought on by the growing capabilities of chatbots and video generation, and as with all technology, there are good and bad elements that merit both optimism and caution.

Now that the AI boom has collided with the $400 billion a year legal industry, it’s time to consider reasonable limits on its use to ensure a fair and balanced system of justice. Consider the realm of mass litigation advertising, the industry’s main feeding line and a nearly $2.5 billion business as of 2024. It won’t be long before late-night legal ads offering compensation for some product harm will be replete with AI-produced voices, videos, and even language.

Plaintiff attorneys involved in mass claims lawsuits have already conquered search-based digital advertising. The 25 most expensive Google Ads keywords in the United States all relate to injury lawyers, making legal services one of the most lucrative categories for search engines. Overall spending on legal advertising rose roughly 39% between 2020 and 2024.

Without ethical and legal guardrails on how this advertising uses artificial intelligence to recruit plaintiffs or launch lawsuits, there’s no telling how much more litigious our society will become.

To launch lawsuits against major companies for alleged misdeeds, known as torts, legal firms recruit classes of alleged victims to bolster their cases in both state and federal courts. The larger the class, the greater the chances of a larger settlement and payout, regardless of how serious the alleged injuries are. Beginning in 2015, spending on advertisements for lawsuits over the herbicide Roundup reached an estimated $131 million across more than 625,000 national and local TV ads.

In 2020, the mass tort industry cost the US economy an estimated $443 billion, inflating the legal budgets and insurance premiums of companies small and large that have had to focus on lawsuit risk rather than delivering the most affordable goods and services to consumers. Combating AI-fueled recruitment drives for lawsuits will only add to that cost.

Claims range from serious cases, where cancer diagnoses are alleged to result from product exposure or from undisclosed side effects of certain pharmaceutical drugs, to less charged consumer disputes over whether a juice contains 100% fruit or whether a bleach-based cleaning product is as “nature-based” as advertised.

Then there’s the $10 million settlement based on whether Wheat Thins are 100% whole wheat.

For every lawsuit over a faulty product that caused real consumer harm, there are thousands more built on frivolous claims.

Because these plaintiff firms work on contingency, with fees tied to the classes they recruit for cases against publicly listed companies, lawyers’ fees can reach 40% of the total compensation, leaving most class members with pocket change.

An AI boost will push the legal ad industry into uncharted waters that affect us all. That’s why it’s time to consider reasonable limits.

That said, legal advertising is protected speech. The Supreme Court’s 1977 decision in Bates v. State Bar of Arizona enshrined this, and no one is seeking to outlaw a law office’s ability to advertise its services to the public.

Rather, it’s about modernizing our courts to ensure people’s rights are protected amidst major technological change.

Seven states have already enacted laws to curb predatory lawsuit advertising, requiring disclosures and clear identification of how much potential plaintiffs stand to gain if they qualify. That’s a good start. Light-touch ethics rules and disclosure of AI-generated material in all legal advertising could take that even further, keeping consumers and citizens informed about who is advertising to them. Congress could pursue federal reform to force better vetting of mass claims and weed out deceptive AI slop.

The transformation AI brings to our nation’s courts is, on balance, a positive trend our system should embrace where appropriate. At the same time, we cannot allow that system to be abused by prompts and generated material that could sway justice and corrupt our institutions. More can and should be done.

Ossowski writes on legal reform and is deputy director at the Consumer Choice Center.
