AI in Courts: A New Frontier for Legal Tech & Ethics Training
The discussion around AI errors in US courts highlights a critical emerging challenge for established institutions. This creates significant opportunities in:
- AI Ethics & Auditing Services: Developing services to audit AI systems used in legal contexts for fairness, bias, and accuracy.
- Professional Education: Creating specialized training programs or certifications for judges, lawyers, and legal professionals on AI literacy, responsible AI use, and how to identify potential AI-generated errors.
- Legal Tech Solutions: Developing AI tools specifically designed for legal applications that prioritize transparency, explainability (XAI), and built-in error checking, reducing risks for courts.
- Consulting: Offering expert consultation to legal firms and government bodies on integrating AI safely and ethically.
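The "built-in error checking" idea above can be made concrete with a citation-verification pass. The sketch below is a minimal illustration only: `KNOWN_CITATIONS` is a hypothetical stand-in for a verified database, and the regex covers only one simple citation shape. A production tool would validate extracted citations against an authoritative court-records source rather than a hard-coded set.

```python
import re

# Hypothetical mini-database of verified citations; a real system would
# query an authoritative court-records source instead.
KNOWN_CITATIONS = {
    "410 U.S. 113",   # Roe v. Wade
    "347 U.S. 483",   # Brown v. Board of Education
}

# Matches simple "volume U.S. page" citations, e.g. "347 U.S. 483".
CITATION_RE = re.compile(r"\b\d{1,4}\s+U\.S\.\s+\d{1,4}\b")

def flag_unverified_citations(text: str) -> list[str]:
    """Return citations found in text that are absent from the verified set."""
    found = CITATION_RE.findall(text)
    return [c for c in found if c not in KNOWN_CITATIONS]

brief = "As held in 347 U.S. 483 and reaffirmed in 999 U.S. 999, ..."
print(flag_unverified_citations(brief))  # ['999 U.S. 999']
```

A checker like this would not prove a filing is sound, but it would have flagged the fabricated case citations discussed in the thread below before they reached a judge.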
Original Reddit Post
r/technology
It’s “frighteningly likely” many US courts will overlook AI errors, expert says | Judges pushed to bone up on AI or risk destroying their court's authority.
Posted by u/chrisdh79 • 07/21/2025
Top Comments
u/HumbleHubris
"Should" is doing a lot of heavy lifting here. Spend any amount of time in an American court and people quickly learn it's a place of lies and damn lies.
u/fullmetaljackass
>What I find so terrifyingly naive at this point, is that a judge admits that "AI errors" will go undetected in the future. In clear text that means "human errors" are going undetected right…
u/Guilty-Mix-7629
Isn't it "fun" to see that if you, the individual, provide false testimony or fake evidence you're subject to consequences, but if AI is being used for it that is _most likely_ with bad intent…
u/tmoeagles96
But they do actually
u/Chaotic-Entropy
This sort of AI tool is inherently flawed, rather than bugged.
u/Corronchilejano
I love this present where huge companies can force people to consume their defective products.
u/Vickrin
Just today my coworker googled a really simple 'how to restore iPad' and the AI result was completely wrong.
These AI's are confidently incorrect a lot.
u/AdeptFelix
To be fair, Apple changes the procedure every 3rd or 4th iOS version. I'm still upset at how they changed the emergency 911 command to something that you'll trigger when trying an older hard reset…
u/dodecakiwi
Hallucinations aren't bugs in the code. All LLMs know is what content is most likely to follow the previous content based on the training data. They have no concept of right or wrong, so an LLM…
u/ShockedNChagrinned
I believe there's already punishment for lying to a court as an attorney. It would seem that any evidence found to be false opens the door for disbarment of the presenting attorney, and I would…
u/Panda_hat
The overreach from grifters using AI to try and enrich themselves really is something.
u/Tremolat
Reminder: AI doesn't "hallucinate". That's the industry spin word for bug ridden code. If a traditional app failed at this level and frequency, users would revolt and lawsuits would follow.
u/Wollff
That's easy to answer: Because I was an annoying, provocative internet asshole know-it-all. That was basically the style of my reply. You were rightly put off by that!
I could have said that…
u/AdeptFelix
People need to realize that general LLMs are not fact-oriented, they are language oriented. They don't give a fuck about facts - just that the words follow a statistically likely organization
u/hulbhen
An LLM isn't a truth-telling machine, it's just a relatively complex text predictor. Hallucination is a much better descriptor than bug in these instances.
u/Wollff
> AI doesn't "hallucinate". That's the industry spin word for bug ridden code.
No. You are wrong.
It's not industry spin. The term is something from before AI was the industry it is today…
u/imaginary_num6er
The courts have no authority against big AI
u/Xznograthos
$2,500 in sanctions hardly seems adequate in terms of reprimand here. Slap on the wrist, at best.
u/chalbersma
The "hallucinations" are the feature not the bug. LLMs are non-deterministic by design. They're useful for some things, but for factual discovery they are very much so not good.
Use AI to r…
u/OkFineIllUseTheApp
They should be fined with fees dictated by non-existent laws.
u/RecordRich777
This is all so fucking pathetic
u/Sloogs
For as much as I'm wary of generative AI and the push to use it in ways that shouldn't be acceptable, this is just not really a good interpretation of things, I'm sorry to say. If you understand…
u/OkFineIllUseTheApp
I'm pro ironic punishment. If you are caught citing non-existent laws/cases in your filings, you are to be penalized, as dictated by [non-existent law with a real looking citation]
And then…
u/Comfortable-Sound944
What are the sanctions for the lawyers submitting AI documents full with quotes of non existing cases? There have already been several that got to the news.
u/1_________________11
Why is this writing style so off-putting to me?
u/rollingForInitiative
Traditional software is typically bug-ridden and fails at tasks all the time. Every piece of software I've ever worked on has a never-ending list of bugs that customers have reported that should…