BRYAN - A Texas A&M law professor is cautioning people that while generative artificial intelligence promises legal help at scale, it has the potential to amplify inequalities in the justice system.
Though many in the legal community think AI is the long-awaited solution to America’s “access to justice” crisis, law professor Milan Markovic believes AI may entrench inequality.
Markovic, a professor of law and presidential impact fellow at Texas A&M’s School of Law, says that while generative AI tools like ChatGPT and other large language models will be increasingly important sources of legal assistance for underserved populations, “techno-optimists” are too bullish on AI’s potential to help.
While AI may democratize access to legal information, it also risks reinforcing, or even exacerbating, existing inequalities, he argues in a forthcoming article in the Ohio State Law Journal.
Proponents contend that AI will expand access to law, making it far easier for people to navigate the legal system and vindicate their rights.
Even Chief Justice John Roberts has hailed AI’s “welcome potential to smooth out any mismatch between available resources and urgent needs in our court system.”
This optimism, Markovic said, risks obscuring the deeper realities of America’s adversarial justice system.
He acknowledges generative AI's transformative potential for some everyday legal problems, like responding to legal communications or understanding basic rights. But without reforms, he says, AI could entrench inequality rather than dismantle it.
AI may slash transaction costs by automating tasks traditionally performed by lawyers, like drafting contracts and other legal documents, but Markovic said this will make it easier for “sophisticated legal actors” like landlords, debt collectors and corporations to initiate legal action. Repeat litigants will also be able to pursue claims more aggressively and at higher volumes, increasing the legal burdens on those least equipped to respond.
“This is not just a problem for people who are going to be hurt by AI — particularly underrepresented communities — it’s a problem for the legal system itself,” Markovic said.
Another issue is asymmetric information, a condition in which service providers have far more knowledge than consumers. In legal markets, the imbalance makes it difficult for people to know what help they need or whether the advice they receive is reliable.
The dynamic becomes especially problematic when individuals turn to generative AI for legal guidance. Even when AI produces confident-sounding answers, users may lack the expertise to evaluate the accuracy of outputs or their limitations.
“AI systems tell you that they’re not an attorney and you should consult a lawyer, so if you rely on their advice, there’s not much of a recourse if you’re harmed,” Markovic said. “When you have a market that is rife with asymmetric information, it really puts consumers at a disadvantage and advantages unscrupulous providers.”
Generative AI’s tendency to produce false or entirely fabricated information in its outputs — known as hallucinations — also poses serious risks to courts and litigants.
Markovic said because legal claims depend on accurate citations to statutes and cases, hallucinated sources can undermine judicial decision-making. Lawyers have already been sanctioned for citing AI-generated cases that don’t exist, and courts are grappling with how to address the issue.
“It’s a huge problem, and the only way for it to stop is for lawyers and others to vet every single citation in a legal filing,” Markovic said. “But that’s time-consuming, and now we’re talking about undercutting the efficiencies that AI is supposed to provide.”
To prevent AI from amplifying existing inequalities, Markovic proposes two reforms.
First, he calls for training publicly funded “justice tech workers” to help underrepresented individuals use AI responsibly. These workers could steer people toward vetted, nonprofit legal tools and help review filings to prevent factual or legal errors from entering the court system.
Second, Markovic urges courts to strengthen requirements for verifying factual claims and legal authorities in cases that commonly involve unrepresented parties. Raising standards in high-volume litigation, he argues, would reduce abuse and protect the integrity of the justice system.
As AI adoption accelerates, Markovic predicts there will be “a lot of hardship and chaos along the way.” He hopes the legal system will take a more deliberate approach than other sectors to integrating these tools.
“We have an oath not only to our clients but to the legal system,” he said. “There’s a lot of pressure on lawyers and courts and law schools to deploy AI immediately.”