Generative artificial intelligence has hit the first of likely many roadblocks on its journey to adoption by the legal industry. New generative AI filing requirements in the Northern District of Texas and the US Court of International Trade, as well as ChatGPT-created case law in New York show that there’s much to consider when it comes to implementing this technology into the practice of law.
At the same time, many attorneys are considering how generative AI can be used in legal practice, according to a recent Bloomberg Law survey.
Of the approximately 750 attorneys surveyed, over half agree that generative AI can be used for completing common legal tasks. Additionally, nearly three-fourths of those attorneys agree that the technology can be used for legal research—the task that landed two New York lawyers in some hot water.
The lawyers are now awaiting a decision regarding Rule 11 sanctions not for using the technology, but for not using it competently—which could also lead to potential disciplinary action from the New York Bar Association.
Although legal ethics rules don’t yet specifically address AI usage, they certainly still apply. For lawyers using AI in their practice, here are three considerations to ensure their ethical duties stay top of mind.
1. Understand the Technology
A crucial preliminary step to using generative AI in legal practice is becoming familiar with it.
For attorneys who practice in the nearly 40 states that have adopted Comment 8 to Rule 1.1 of the American Bar Association’s Model Rules of Professional Conduct, this type of basic technical competence is required. For many generative AI tools, particularly chatbots, this includes considering:
- the data sources a model has been trained on;
- whether there are any staleness issues with the data; and
- potential biases in the dataset.
“It’s really critical for users of generative AI to understand both the input and the output of the service,” said Amanda Jones in a recent Bloomberg Law webinar, Generative AI & Legal Ethics: The Intersection of Efficiency and Ethical Discord.
If attorneys grasp these concepts, they can set reasonable expectations for their use of generative AI, Jones, institutional compliance program manager at Yale University, explained.
Knowing what a model can—and, more importantly, cannot—do allows attorneys to better assess the scope of work a generative AI tool can handle and to manage expectations from the outset.
2. Review the Terms and Conditions
Attorneys should also review the terms and conditions associated with the generative AI tool they’re using. This will help them better understand the technology and what an AI developer is warranting—or disclaiming—in terms of liability.
“There’s a lot of … let’s say exit doors for them to say, ‘well we didn’t promise anything,’” said Eran Kahana, an attorney and Stanford Law Fellow focused on artificial intelligence, who participated in the webinar. There’s “a big gap between what the promise is of the application and then actually what the developer in the legalese of their terms of use actually say they are signing up for,” Kahana said.
Reviewing the terms and conditions will also alert users to what happens to the data—including client information—put into the tool. This understanding is essential for assessing whether client information can be input into the tool without implicating confidentiality under ABA Model Rule 1.6 (or a state equivalent).
With many large technology companies and service providers partnering with AI companies or integrating generative AI models into their existing products, it may be necessary to review the terms and conditions at multiple levels—those of the original company or provider as well as those of the AI developer.
3. Verify the Results
Finally, if a lawyer does decide to use generative AI to assist in legal tasks, it’s imperative to review and verify the accuracy of the model’s output, for two reasons.
Nonlawyer Assistants
There’s an argument to be made that artificial intelligence is the equivalent of a “nonlawyer assistant” and thus requires supervision under ABA Model Rule 5.3 (or a state equivalent).
“If generative AI doesn’t understand, for example, the legal nuances in a particular jurisdiction or includes other errors, the lawyer still has a duty to supervise,” Jan Jacobowitz, founder and owner of Legal Ethics Advisor, said during the webinar.
Competence
Attorneys must provide “competent representation,” which requires a reasonable level of “legal knowledge, skill, thoroughness and preparation,” under ABA Model Rule 1.1.
It’s no secret at this point that various AI models are producing outputs that are riddled with inaccuracies yet delivered with an alarming level of confidence. If an attorney is unfamiliar with the subject matter or certain jurisdictional nuances, or simply doesn’t verify the information provided by the model, they may find themselves “duped” by generative AI like the New York attorneys.
But the mistakes made by the attorneys in New York shouldn’t deter other lawyers from taking advantage of generative AI tools, nor should they chill innovation efforts in the industry.
Innovation and technological advancements like generative AI can greatly increase efficiency in legal practice, but they must be implemented and used with an attorney’s ethical duties in mind.
Bloomberg Law subscribers can find related content on our Legal Operations, ABA/Bloomberg Law Lawyers’ Manual on Professional Conduct, and In Focus: Artificial Intelligence pages.