by Joseph Brophy for the Maricopa Lawyer, a publication of the Maricopa County Bar Association
At this point in the AI revolution, most lawyers will have read an article about an unfortunate, downtrodden, or overworked lawyer who made the fatefully ill-advised decision to rely on generative AI to “assist” in drafting a motion. For those who have not heard: AI is not ready for prime time when it comes to legal research, analysis, and especially case citations. A long list of lawyers has learned the hard way that no matter how coherent and logical AI-generated legal analysis may look and sound, commercial large language model AI (ChatGPT, Grok, Perplexity) is prone to “hallucinations,” bad ones, that can leave a lawyer arguing gibberish and landing in ethical hot water. Indeed, a recent study of inquiries made to the law-specific AI programs used by Westlaw and Lexis (which are far superior to commercial AI for use by legal practitioners) showed an error rate of 17%.
As the lawyer-involved cautionary tales have accumulated over the last two years, various courts and state bars have issued rules and ethical opinions addressing a lawyer’s duties of competence (ER 1.1) and candor to the court (ER 3.3) when using AI. Common measures include requirements that lawyers certify to the court that AI-generated content has been reviewed by a human lawyer for accuracy. However, little has been said or written about a judge’s obligations with respect to AI. People, even lawyers, often forget that judges are lawyers too, with one of the primary differences being that judges carry caseloads that dwarf those of most lawyers. The temptation of AI’s potential efficiency does not disappear when a lawyer dons a black robe. So it was only a matter of time before AI mishaps occurred on the bench.
On July 23rd of this year, a federal judge in New Jersey had to withdraw an opinion entered in error after receiving a letter from counsel pointing out that the opinion contained misstatements of case outcomes and fake quotes attributed to opinions and to the defendants. Although the letter made no mention of AI, these kinds of errors have been a common thread running through the many cases in which lawyers have been sanctioned over the use of AI in submissions to the court.
In Shahid v. Esaam, 376 Ga. App. 145 (2025), the trial court adopted arguments from a brief that relied on fake cases and cited those cases in its order. The Georgia Court of Appeals stated that “we are troubled by the citation of bogus cases in the trial court’s order,” but made “no findings of fact as to how this impropriety occurred.” Nevertheless, the appellate court proceeded to explain the problems with reliance on generative AI, which is about as close as a court can come to making a factual finding without actually doing so. Why the Georgia court did not admonish the trial court for failing to catch the citations to fake cases is unclear. But perhaps that omission was, at least in part, because there is not much legal authority on the subject of judges and AI.
Arizona, however, has taken the lead on this issue. On August 28, 2025, the Arizona Supreme Court issued Order No. R-24-0052, entitled “Order Amending Arizona Code of Judicial Conduct Rule 2.5, Rule 81 of the Rules of the Supreme Court of Arizona.” In the order, the Supreme Court amended Rule 2.5 of the Code of Judicial Conduct as follows: “Competence in the performance of judicial duties requires the legal knowledge, skill, thoroughness, and preparation reasonably necessary to perform a judge’s responsibilities of judicial office, including the use of, and knowledge of the benefits and risks associated with, technology relevant to service as a judicial officer.”
This amendment, set to take effect on January 1, 2026, brings the rules governing judicial use of technology further in line with those regulating attorneys’ use of such technologies under Comment 8 to Rule 1.1 of the Model Rules of Professional Conduct. Thus, even those lawyers now on the bench must understand generative AI and be wary of relying on it. You can expect other states to follow suit quickly.
Outside Arizona, though, there appears to be a reluctance to take action on, or even acknowledge, the potential problems with judicial reliance on generative AI. While much has been written, and appropriately so, about the damage to the integrity of the legal profession (and especially to clients) when lawyers misuse generative AI, little has been written about the havoc that a judicial opinion resting on bogus authorities can cause. Not least among these concerns is the possibility that lawyers may rely on such opinions in the belief that cases a court cited in its own opinion can safely be cited by counsel. And while the appellate process can rectify judicial misuse of AI, every lawyer knows the high cost of that process, in both time and money, to the litigants. The judicial system requires legal decisions and opinions grounded in precedent, not in the fever dreams of ChatGPT, which exists to respond to its user’s inquiries (right or wrong) and has no obligation to provide accurate information.
To illustrate how real the problem is: in March of this year, Eugene Volokh, a law professor at UCLA, wrote an article for Reason Magazine with the self-explanatory title “11 Court Opinions in the Last 30 Days Mention AI-Hallucinated Material, and . . . That’s Likely Just the Tip of the Iceberg.” While this suggests that many (most?) courts are catching these issues as they arise, it would strain credulity to believe that, given the expanding scope of the problem, courts are catching every hallucination that is submitted.
As the situation develops, it will be interesting to see whether any jurisdiction decides to impose consequences, analogous to sanctions under Rule 11 of the Rules of Civil Procedure, on judges who issue rulings that rely on hallucinated authorities, as a way to deter carelessness.
