What is AI?
AI, or Artificial Intelligence, refers to a set of technologies that enables computers to simulate human intelligence and perform tasks that typically require human cognition. Examples of AI applications include ChatGPT, Harvey.AI, and Google Gemini. These systems are designed to think and learn like humans, continually improving as users interact with them. They are trained on large amounts of data using machine-learning algorithms, which allows them to improve their performance over time without being explicitly programmed for every task. Unlike Google, which returns search results in response to web queries, ChatGPT generates human-like answers to prompts through machine learning, the process by which computers learn from examples.
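To make “learning from examples” concrete, here is a minimal, purely illustrative sketch in Python. The scenario, the page counts, and the review times are all invented for illustration (and the scikit-learn library is my choice, not part of any real legal tool): the model is never given an explicit rule, it infers one from example data.

```python
# A toy illustration of machine learning: the model is never told the rule;
# it infers one from example data. All numbers are made up for illustration.
from sklearn.linear_model import LinearRegression

# Examples: document length in pages -> hours a reviewer took to read it
pages = [[10], [20], [40], [80]]
hours = [1.0, 2.1, 3.9, 8.2]

model = LinearRegression()
model.fit(pages, hours)  # "training": the model learns the pattern from the examples

# The model can now estimate review time for a document it has never seen
print(model.predict([[60]]))  # roughly 6 hours, extrapolated from the examples
```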
Cost-Benefit Analysis of AI in the Legal Field
The primary areas where AI is being applied in the legal field include: reviewing documents for discovery, generally referred to as technology-assisted review (TAR); legal research through automated searches of case law and statutes; contract and legal document analysis; proofreading; and document organization.
One of the main reasons AI is used in the legal field is that it saves time. By handling routine tasks such as proofreading, AI frees up attorneys’ time to focus on more complex work. This increased efficiency may also enable law firms to reduce their staff headcount and save money. For example, proofreading a document by hand can take hours, but AI can complete the task in less than a minute, identifying and correcting errors almost instantly. As they say, “time is money.” AI is also valuable because it produces high-quality work. Since AI does not get tired or distracted, it can deliver consistent, polished results. Tasks like document review, proofreading, and legal research can be tedious, but AI handles the initial “heavy lifting,” reducing stress and frustration for attorneys. As one saying goes, “No one said attorneys had to do everything themselves!”
While AI has the potential to save law firms money, I do not think the promised cost reduction always materializes in the way one might anticipate. Adopting AI may not be worthwhile for every firm because the initial investment in the technology can be substantial, ranging from $5,000 for simple models to over $500,000 for complex ones. After purchasing an AI system, firms must also train their staff to use it effectively and upgrade the software regularly. “These costs can be substantial and may take time to recoup.” Law firms should therefore conduct a cost-benefit analysis before deciding whether adopting AI is the right choice for them.
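To illustrate the kind of cost-benefit analysis a firm might run, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (tool price, training and upgrade costs, hours saved, and billable rate) is a hypothetical assumption for illustration, not real vendor pricing:

```python
# Hypothetical break-even estimate for adopting an AI tool at a law firm.
# All figures below are illustrative assumptions, not real vendor pricing.

upfront_cost = 50_000        # assumed purchase price of the AI system ($)
annual_costs = 12_000        # assumed yearly training + upgrade costs ($)
hours_saved_per_month = 40   # assumed attorney hours freed up by the tool
billable_rate = 300          # assumed value of one attorney hour ($)

monthly_savings = hours_saved_per_month * billable_rate
monthly_costs = annual_costs / 12
net_monthly_benefit = monthly_savings - monthly_costs

if net_monthly_benefit <= 0:
    print("The tool never pays for itself under these assumptions.")
else:
    months_to_recoup = upfront_cost / net_monthly_benefit
    print(f"Estimated time to recoup the investment: {months_to_recoup:.1f} months")
```

Under these assumed numbers the investment pays for itself in about four and a half months, but with a $500,000 system or fewer hours saved, the break-even point stretches out considerably, which is exactly why the analysis is worth doing firm by firm.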
Problems With AI in the Legal Field
One issue with AI applications is that they can perform tasks, such as writing and problem-solving, in ways that closely mimic human work. This makes it difficult for others to determine whether the work was created by AI or by a human. For example, drafting documents now requires less human input because AI can generate them automatically. This raises concerns about trust and reliability, as I and many others may prefer to have a human complete the work rather than rely on AI, given skepticism about AI’s accuracy and dependability.
A major concern with the shift toward AI is the potential spread of misinformation. Lawyers who rely on AI to draft documents without thoroughly reviewing the output may unknowingly present “hallucinations,” which are fabricated or inaccurate pieces of information. This can lead to serious legal errors. Another critical issue is the risk that confidential client information will be compromised. When lawyers enter sensitive client data into AI systems to generate legal documents, they are potentially handing that data over to large technology companies. These companies usually prioritize their commercial interests, and without proper regulation, they could misuse client data for profit, compromising client confidentiality, enabling fraud, and threatening the integrity of the judicial system.
A Case Where Lawyers Misused ChatGPT in Court
As a law student who hopes to become a lawyer one day, it is concerning to see lawyers facing consequences for using AI. However, it is also understandable that a lawyer who does not use AI carefully may be sanctioned. Two of the first lawyers to use AI in court and encounter “hallucinations” were Steven Schwartz and Peter LoDuca, who were representing a client in a personal injury lawsuit against an airline. Schwartz used ChatGPT to help prepare a filing, allegedly unaware that the AI had fabricated several case citations. Specifically, ChatGPT cited at least six cases, including Varghese v. China Southern Airlines and Shaboon v. Egypt Air, that the court found did not exist. The court said the filing contained “bogus judicial decisions with bogus quotes and bogus internal citations.” As a result, the attorneys were fined $5,000. Judge P. Kevin Castel said he might not have punished them if they had come clean about using ChatGPT to find the purported cases it cited.
AI Limitations in Court
As of February 2024, about 2% of the more than 1,600 United States District and Magistrate judges have issued 23 standing orders addressing the use of AI. These standing orders mainly restrict or set guidelines for the use of AI because of concerns about the technology’s accuracy. Some legal scholars have raised concerns that these orders might discourage attorneys and self-represented litigants from using AI tools. Instead of banning AI outright, I think one possible approach would be to require attorneys to disclose to the court when they use AI in their work. For example, U.S. District Judge Leslie E. Kobayashi of Hawaii wrote in her order, “The court directs that any party, whether appearing pro se or through counsel, who utilizes any generative artificial intelligence (AI) tool in the preparation of any documents to be filed with the court, must disclose in the document that AI was used and the specific AI tool that was used. The unrepresented party or attorney must further certify in the document that the person has checked the accuracy of any portion of the document drafted by generative AI, including all citations and legal authority.”
Ethicality of AI
Judicial officers include judges, magistrates, and candidates for judicial office. Under Rule 2.5 of the Model Code of Judicial Conduct (MCJC), judicial officers have a responsibility to maintain competence and stay up to date with technology. Similarly, Rule 1.1 of the Model Rules of Professional Conduct (MRPC) states that lawyers must provide competent representation to their clients, which includes maintaining technological competence.
The National Center for State Courts (NCSC) emphasizes that both judicial officers and lawyers must have a basic understanding of AI and be aware of the risks associated with using it for research and document drafting. Furthermore, judicial officers must uphold their duty of confidentiality. This means they should be cautious when they or their staff enter sensitive or confidential information into AI systems for legal research or document preparation, and should ensure that the information is not retained or misused by the AI platform. I was surprised to learn that while the NCSC provides these guidelines, they are not legally binding; they are only strongly recommended.
Members of the legal field should also be aware that there may be additional state-specific rules and obligations depending on where they practice. For instance, in April 2024, the New York State Bar Association’s Task Force on AI issued a Report and Recommendations. The New York guidance notes that “attorneys [have a duty] to understand the benefits, not just the risks, of AI in providing competent and ethical legal representation and allows the use of AI tools to be considered in the reasonableness of attorney fees.” In New Jersey, “although lawyers do not have to tell a client every time they use AI, they may have an obligation to disclose the use of AI if the client cannot make an informed decision without knowing.” I think lawyers and judicial officers should be aware of their state’s rules on AI and make sure they are not using it blindly.
Disclosing the Use of AI
Some clients have explicitly asked their lawyers to refrain from using AI tools in their legal representation. For clients who do not express a preference, however, lawyers wrestle with the question of whether to inform them that AI is being used in their matters. While there is no clear answer, some lawyers have decided to discuss their intended use of AI with clients and obtain consent before proceeding, which seems like a sensible approach.
Rule 1.4(a)(2) of the American Bar Association (ABA) Model Rules of Professional Conduct addresses attorney-client communication. It provides that a lawyer must “reasonably consult with the client about the means by which the client’s objectives are to be accomplished.” This raises the question of whether the rule covers the use of AI. If it does, how much AI assistance must be disclosed to clients? For instance, should using ChatGPT to draft a brief require disclosure when assigning the same task to a law student does not? These are some of the ethical questions currently being debated in the legal field.
While AI can be beneficial, it raises various questions regarding its ethicality. As a profession, it is much better to be proactive than reactive in regulating artificial intelligence. The biggest challenge is that AI is still evolving, so regulations written today may prove insufficient for what AI becomes tomorrow.
You raise a very interesting issue about cost-benefit analysis within the legal field and whether it is worth purchasing these models at this time. If firms determine that purchasing models would be beneficial, how will this impact the gap between larger and smaller firms? Since larger firms generate more revenue, they will likely gain a competitive advantage because they can afford the AI models, leaving smaller firms and government agencies without access to these beneficial resources. This raises the point that the models may need to be made accessible and affordable so that the entire industry competes on a level playing field that promotes justice.
I believe regulating AI starts with transparency: ensuring that lawyers tell their clients and the courts about their use of it. As a profession, it is essential to embrace technological advancements to further careers and increase effectiveness. Transparency sends the message that AI is encouraged while regulations under the rules of professional conduct still ensure lawyers’ competency.