Address
33-17, Q Sentral.
2A, Jalan Stesen Sentral 2, Kuala Lumpur Sentral,
50470 Federal Territory of Kuala Lumpur
Contact
+603-2701-3606
info@linkdood.com
Artificial intelligence tools promise to speed up legal research—but they can also lead lawyers astray. On June 6, 2025, England’s High Court issued a stark warning: attorneys who file AI-generated, fictitious cases risk contempt of court or even criminal sanctions.
In two separate matters, advocates submitted skeleton arguments peppered with citations to judgments that do not exist. In both instances, AI assistants—prompted to “find relevant precedents on banking negligence” or “list key housing-law cases”—fabricated case names, dates, and even headnotes that sounded authoritative.
The Solicitors Regulation Authority is expected to formalize these requirements by year’s end. Meanwhile, law firms are weighing specialty “AI-audit” roles to oversee technology-driven research. In courtrooms, judges may begin requiring practitioners to certify, under oath, that all authorities cited have been independently verified.
1. What exactly is an AI “hallucination”?
An AI hallucination occurs when a model confidently generates false or fabricated information—such as case names, statutes, or quotations—that blend plausibly with real data but have no basis in fact.
2. Can firms still use AI for legal research?
Yes—but only as an assistive tool. All AI-generated authorities must be cross-checked by a qualified lawyer. Think of AI as a fast “first draft,” not a substitute for professional judgment.
3. How might this change everyday legal practice?
Expect more rigorous workflows: every brief may include an AI-audit section, and firms might assign dedicated “AI verifiers” to ensure that the speed gains from technology don’t compromise accuracy or ethics.
Sources: The New York Times