AI Malpractice: Your New Legal Minefield

By Adam Pease
The State Bar of California recently initiated formal disciplinary proceedings against three attorneys—Omid Emile Khalifeh, Steven Thomas Romeyn, and Sepideh Ardestani—following the discovery of fabricated legal citations in court filings. These cases represent a significant escalation from general warnings about generative AI to active enforcement of professional standards. The attorneys are accused of submitting nonexistent case law and false quotations, with one lawyer allegedly violating a direct court order regarding the disclosure of AI-assisted drafting. This blog post provides an overview of the California State Bar's AI disciplinary actions and offers our analysis.
Why Did the State Bar Charge These Attorneys?
The State Bar of California is charging these individuals because the submission of “hallucinated” legal authorities constitutes a breach of the duty of candor to the court and a failure to perform legal services with competence. While the attorneys used generative AI tools to conduct research or draft pleadings, they failed to perform the mandatory human-in-the-loop verification required by professional conduct rules. In the case of Ardestani, a disciplinary stipulation has already been reached involving a stayed suspension and mandatory technology-focused education.
The charges against Khalifeh and Romeyn involve multiple counts of misconduct, including making false representations about the accuracy of their filings. The regulatory body is signaling that the era of treating AI-driven errors as “clerical mistakes” is over. Instead, these errors are being categorized as gross negligence or intentional misrepresentation when a lawyer fails to check the work of the machine. This move reinforces the principle that professional licenses are granted to humans who must remain accountable for every word filed under their signature.
Analysis
This enforcement action signals a critical maturation of the AI market, where the “experimentation phase” has collided with the reality of professional liability. At Aragon Research, we see this as a watershed moment for both the legal profession and the broader enterprise software market. The impact of these charges extends far beyond a few specific law firms. It establishes a precedent that an “algorithmic excuse” is no longer a valid defense for inaccuracies in regulated industries.
For technology vendors, this development necessitates a shift in product design. The market will now demand much higher levels of grounding and transparency from generative AI platforms. We expect a surge in demand for “Legal-Grade AI” tools that provide verifiable citations and direct links to source materials. Vendors who rely solely on probabilistic text generation without built-in auditability features will find themselves excluded from the procurement lists of risk-averse corporate legal departments.
Furthermore, this news indicates that professional licensing bodies across various sectors—including finance, medicine, and engineering—will likely follow the California Bar’s lead. The shift from guidance to enforcement means that AI governance is no longer a “nice-to-have” feature but a core operational requirement. The legal technology market, specifically, will bifurcate into consumer-grade tools that are too risky for professional use and enterprise-grade systems that include rigorous fact-checking layers and disclosure logging.
The implications for the technology stack are profound. Enterprises will need to move away from general-purpose LLMs for specialized tasks and toward specialized agents that can perform iterative self-correction. This news means that the human-in-the-loop requirement has evolved from a best practice into a legal mandate. Any firm that continues to deploy AI without a robust governance framework is effectively inviting professional negligence litigation.
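To make the idea of iterative self-correction concrete, here is a minimal sketch of a draft-verify-revise loop. The `draft`, `verify`, and `revise` callables are hypothetical stand-ins for model calls and a grounding check (such as a citation lookup against a trusted source); this is an illustration of the pattern, not any specific vendor's implementation.

```python
# Minimal sketch of an iterative self-correction loop for AI-generated
# drafts. All three callables are hypothetical stand-ins:
#   draft(prompt)        -> initial text from a model
#   verify(text)         -> list of detected problems (empty = clean)
#   revise(text, issues) -> revised text addressing the problems
def self_correcting_draft(prompt, draft, verify, revise, max_rounds=3):
    text = draft(prompt)
    for _ in range(max_rounds):
        problems = verify(text)
        if not problems:
            return text, True      # verified clean; still subject to human review
        text = revise(text, problems)
    return text, False             # unresolved after max_rounds: escalate to a human
```

Note that even a verified-clean result returns to a human reviewer in this design; the loop reduces reviewer workload, it does not replace the human-in-the-loop requirement discussed above.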
What Should Enterprises Do About This News?
Enterprises must immediately transition from passive AI usage policies to active governance frameworks that include mandatory human review of all AI-generated external communications. It is not sufficient to simply warn employees about hallucinations. Organizations need to implement technical guardrails, such as retrieval-augmented generation (RAG) and automated citation verification, to reduce the cognitive load on human reviewers and ensure accuracy.
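As a minimal sketch of what automated citation verification can look like, the snippet below flags case citations in a draft that cannot be matched against a trusted index. The `KNOWN_CITATIONS` set and the citation pattern are invented, simplified examples; a production system would query a legal research database and handle far more citation formats.

```python
import re

# Hypothetical allow-list standing in for a query to a trusted legal
# research database. The entry below is a real case used purely as an
# illustrative known-good citation.
KNOWN_CITATIONS = {
    "Roe v. Wade, 410 U.S. 113 (1973)",
}

# Deliberately simplistic pattern for "Party v. Party, <reporter> (<year>)"
# citations; real case names (multi-word parties, pin cites) need far more.
CITE_RE = re.compile(r"[A-Z][A-Za-z.]* v\. [A-Z][A-Za-z.]*, [^()]+ \(\d{4}\)")

def flag_unverified_citations(draft: str) -> list[str]:
    """Return every citation in the draft that is not in the trusted index."""
    return [c for c in CITE_RE.findall(draft) if c not in KNOWN_CITATIONS]
```

A tool like this does not prove a citation is accurate, only that it exists in a trusted source; the human reviewer still confirms that the cited authority actually supports the proposition.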
Management should also evaluate existing technology vendors to confirm that those vendors provide “Enterprise-Grade” protections, including data isolation and verifiable output mechanisms. You should consider conducting an internal audit of all departments where AI is currently being used to draft regulated or high-stakes documentation. Ensure that your professional liability insurance coverage accounts for AI-related errors, as the definition of “negligence” is clearly being redefined by regulatory bodies like the State Bar.
Bottom Line
The California State Bar actions prove that professional accountability cannot be outsourced to an algorithm. Organizations should treat generative AI as a junior assistant that requires constant and expert supervision rather than a replacement for professional judgment. Establish clear governance frameworks now to ensure that your firm does not become a test case for AI-related professional negligence.