by Joseph Brophy for the Maricopa Lawyer, a publication of the Maricopa County Bar Association
As judges across the country implement rules for the use of generative artificial intelligence (“AI”) in their courtrooms and lawyers find themselves in ethical trouble for over-reliance on AI, state bars are rushing to catch up. Last November, California and Florida recommended adoption of the first guidelines for lawyers’ use of generative AI. Let’s take a look.
The California State Bar’s board of trustees recommended adoption of guidelines called Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law. California’s guidance begins by stating that “[the committee believes] that the existing Rules of Professional Conduct are robust, and the standards of conduct cover the landscape of issues presented by generative AI in its current forms.” That claim might seem dubious given the potential of AI to radically change how legal services are performed. But it is true that the lawyers who have run into trouble using generative AI did so by: (1) failing to confirm that the arguments and case law/citations generated by AI were valid before submitting AI-generated briefing to the court; and (2) failing to admit they used AI when confronted by the court over what AI generated, which one judge described as “gibberish.” Those are not problems attributable to AI, but rather to good old-fashioned incompetence and lack of candor.
Although the California guidelines did not recommend changes to the Rules of Professional Conduct, the trustees made other recommendations, including: (1) developing attorney education programs to help lawyers gain competence in using generative AI; (2) considering whether to direct California’s bar examiners to explore requiring California-accredited law schools to offer courses on the competent use of generative AI; (3) working with the California legislature and supreme court to determine whether the unauthorized practice of law should be more clearly defined; and (4) working with the legislature to determine whether legal generative AI products should be licensed or regulated.
The California trustees also expressed concern over how AI might affect unrepresented persons, stating that “while generative AI may be of great benefit in minimizing the justice gap, it could also create harm if self-represented individuals are relying on generative AI outputs that provide false information.” The trustees’ concern over “minimizing the justice gap” is somewhat ironic: just last year, California rejected Arizona-style reforms that would have loosened the rules governing law firm ownership and fee sharing, reforms that might have helped self-represented individuals obtain affordable legal services. That rejection followed a ferocious lobbying effort by – you guessed it – California law firms.
Meanwhile, on the other side of the country, Florida issued its own Proposed Advisory Opinion 24-1 Regarding Use of Generative Artificial Intelligence. The Florida opinion does not mention law schools, bar exams, or continuing education on AI, nor does it suggest regulating or licensing AI. Florida placed a heavier emphasis than California on confidentiality: confidential client information cannot be provided to AI absent the client’s informed consent. And the Florida opinion did not address the use of AI by self-represented parties.
But the primary difference between the California and Florida guidance was Florida’s focus on a lawyer’s duty to supervise non-lawyers. Florida analogized AI to a legal assistant whose work must be supervised and reviewed. According to the Florida guidance, “while Rule 4-5.3(a) defines a nonlawyer assistant as ‘a person,’ many of the standards applicable to nonlawyer assistants provide useful guidance for a lawyer’s use of generative AI.”
The Florida guidance also admonished lawyers that “first and foremost, a lawyer may not delegate to generative AI any act that could constitute the practice of law such as the negotiation of claims or any other function that requires a lawyer’s personal judgment and participation.” As an example, the Florida opinion pointed to the chatbots some firms use on their websites for client intake. Citing prior Florida ethics opinions regarding non-lawyers performing client intake, the Florida opinion stated that the use of AI for client intake must be limited to obtaining factual information and must not offer any legal advice concerning the prospective client’s matter. All legal questions must be answered by a lawyer, not by AI. For any readers who utilize chatbots for client intake, the Florida opinion is worth reading.
Perhaps the most interesting aspect of the California guidelines and the Florida opinion is the dog that did not bark – the use of AI by judges is not mentioned in either document. Judges have the same access as lawyers to generative AI programs such as ChatGPT or the AI tools being unveiled by Westlaw and Lexis. AI is no doubt a tempting option to judges for the same reason it is tempting to lawyers – the possibility of efficiency and a lightened workload. And the potential harm from a judge over-relying on AI, which is prone to “hallucinations” in both legal analysis and case citations, is no less (and perhaps more) than the harm caused when a lawyer or self-represented party over-relies on AI. As other states begin to weigh in on this subject, it will be interesting to see which, if any, states recommend imposing obligations on judges’ use of AI, such as requiring disclosure to the parties when a judge utilizes AI to draft a ruling.
About Joseph A. Brophy
Joseph Brophy is a partner with Jennings Haug Keleher McLeod in Phoenix. His practice focuses on professional responsibility, lawyer discipline, and complex civil litigation. He can be reached at jab@jkwlawyers.com.
The original article appeared in the January 2024 issue of Maricopa Lawyer and can be viewed here:
https://jkwlawyers.com/wp-content/uploads/2024/02/Florida-and-California-Weigh-In-on-AI.pdf