The propensity for AI systems to make mistakes, and for humans to miss those mistakes, has been on full display in the US legal system as of late. The follies began when lawyers, including some at prestigious firms, submitted documents citing cases that didn't exist. Similar mistakes soon spread to other roles in the courts. In December, a Stanford professor submitted sworn testimony containing hallucinations and errors in a case about deepfakes, despite himself being an expert on AI and misinformation.
But now judges are experimenting with generative AI too. Some are confident that with the right precautions, the technology can expedite legal research, summarize cases, draft routine orders, and overall help speed up the court system, which is badly backlogged in many parts of the US.
…
The results of these early-adopter experiments make two things clear. One, the category of routine tasks, those for which AI can assist without requiring human judgment, is slippery to define. Two, while lawyers face sharp scrutiny when their use of AI leads to mistakes, judges may not face the same accountability, and walking back their mistakes before they do damage is much harder.