AI in the Legal Field

What is AI? 

AI stands for Artificial Intelligence, which refers to a collection of technologies that allow computers to simulate human intelligence and perform tasks that typically require human cognition. Examples of AI applications include ChatGPT, Harvey.AI, and Google Gemini. These systems are designed to think and learn like humans, continually improving as users interact with them. They are trained on large amounts of data using machine learning algorithms, which allows them to improve their performance over time without being explicitly programmed for every task. Unlike Google, which returns search results in response to web queries, ChatGPT generates human-like answers to prompts by drawing on patterns learned from examples.

Cost-Benefit Analysis of AI in the Legal Field 

The primary areas where AI is being applied in the law include: reviewing documents for discovery, generally referred to as technology-assisted review (TAR); legal research through automated searches of case law and statutes; contract and legal document analysis; proofreading; and document organization.

One of the main reasons AI is used in the legal field is that it saves time. By having AI handle routine tasks, such as proofreading, attorneys are freed up to focus on more complex aspects of their work. This increased efficiency may also enable law firms to reduce their staff headcount and save money. For example, without AI, proofreading a document can take hours, but with AI it can be completed in less than a minute, with errors identified and corrected instantly. As they say, time is money. AI is also valuable because it can produce high-quality work. Since AI doesn’t get tired or distracted, it can deliver consistent, polished results. Tasks like document review, proofreading, and legal research can be tedious, but AI handles the initial “heavy lifting,” reducing stress and frustration for attorneys. As one saying goes, “No one said attorneys had to do everything themselves!”

While AI has the potential to save law firms money, I do not think the promised cost reduction always materializes in the way one might anticipate. It may not be worth it for a law firm to use AI because the initial investment in AI technology can be substantial, ranging from $5,000 for simple models to over $500,000 for complex ones. After purchasing an AI system, firms then have to train their staff to use it effectively and maintain and upgrade the software regularly. “These costs can be substantial and may take time to recoup.” Law firms might consider conducting a cost-benefit analysis before determining whether using AI is the right decision for them.

Problems With AI in the Legal Field 

One issue with AI applications is that they can perform tasks, such as writing and problem-solving, in ways that closely mimic human work. This makes it difficult for others to determine whether the work was created by AI or a human. For example, drafting documents now requires less human input because AI can generate these documents automatically. This raises concerns about trust and reliability, as some, myself included, may prefer to have a human complete the work rather than relying on AI, given skepticism about AI’s accuracy and dependability.

A major concern with the shift towards AI use is the potential spread of misinformation. Lawyers who rely on AI to draft documents without thoroughly reviewing what is produced may unknowingly present “hallucinations,” which are made-up or inaccurate pieces of information. This can lead to serious legal errors. Another critical issue is the risk of confidential client information being compromised. When lawyers enter sensitive client data into AI systems to generate legal documents, they are handing that data over to large technology companies. These companies usually prioritize their commercial interests, and without proper regulation, they could misuse client data for profit, potentially compromising client confidentiality, enabling fraud, and threatening the integrity of the judicial system.

A Case Where Lawyers Misused ChatGPT in Court 

As a law student who hopes to become a lawyer one day, it is concerning to see lawyers facing consequences for using AI. However, it is also understandable that a lawyer who misuses AI will be sanctioned. Two of the first lawyers to use AI in court and encounter “hallucinations” were Steven Schwartz and Peter LoDuca. The lawyers were representing a client in a personal injury lawsuit against an airline. Schwartz used ChatGPT to help prepare a filing, allegedly unaware that the AI had fabricated several case citations. Specifically, the AI cited at least six cases, including Varghese v. China Southern Airlines and Shaboon v. Egypt Air, but the court found that these cases did not exist. The court described them as “bogus judicial decisions with bogus quotes and bogus internal citations.” As a result, the attorneys were fined $5,000. Judge P. Kevin Castel said he might not have punished the attorneys if they had come “clean” about using ChatGPT to find the purported cases it cited.

AI Limitations in Court

As of February 2024, about 2% of the more than 1,600 United States district and magistrate judges have issued 23 standing orders addressing the use of AI. These standing orders mainly restrict or set guidelines for the use of artificial intelligence, driven by concerns about the technology’s accuracy. Some legal scholars have raised concerns that these orders might discourage attorneys and self-represented litigants from using AI tools. I think that instead of completely banning the use of AI, one possible approach could be requiring attorneys to disclose to the court when they use AI in their work. For example, U.S. District Judge Leslie E. Kobayashi of Hawaii wrote in her order, “The court directs that any party, whether appearing pro se or through counsel, who utilizes any generative artificial intelligence (AI) tool in the preparation of any documents to be filed with the court, must disclose in the document that AI was used and the specific AI tool that was used. The unrepresented party or attorney must further certify in the document that the person has checked the accuracy of any portion of the document drafted by generative AI, including all citations and legal authority.”

Ethicality of AI 

Judicial officers include judges, magistrates, and candidates for judicial office. Under Rule 2.5 of the Model Code of Judicial Conduct (MCJC), judicial officers have a responsibility to maintain competence and stay up to date with technology. Similarly, Rule 1.1 of the Model Rules of Professional Conduct (MRPC) states that lawyers must provide competent representation to their clients, which includes technical competence.

The National Counterintelligence and Security Center (NCSC) emphasizes that both judicial officers and lawyers must have a basic understanding of AI and be aware of the risks associated with using AI for research and document drafting. Furthermore, judicial officers must uphold their duty of confidentiality. This means they should be cautious when they or their staff enter sensitive or confidential information into AI systems for legal research or document preparation, ensuring that the information is not retained or misused by the AI platform. I was surprised to find out that while the NCSC provides these security guidelines, they are not legally binding but are strongly recommended.

Members of the legal field should also be aware that there may be state-specific rules and obligations depending on the state where they practice. For instance, the New York State Bar Association established a Task Force on AI, which issued a Report and Recommendations in April 2024. The New York guidance notes that “attorneys [have a duty] to understand the benefits, not just the risks, of AI in providing competent and ethical legal representation and allows the use of AI tools to be considered in the reasonableness of attorney fees.” In New Jersey, “although lawyers do not have to tell a client every time they use AI, they may have an obligation to disclose the use of AI if the client cannot make an informed decision without knowing.” I think lawyers and judicial officers should be aware of their state’s rules for AI and make sure they are not using it blindly.

Disclosing the Use of AI 

Some clients have explicitly requested that their lawyers refrain from using AI tools in their legal representation. For clients who do not express their wishes, however, lawyers wrestle with the question of whether they should inform their clients that they use AI in their case matters. While there is no clear answer, some lawyers have decided to discuss their intended use of AI with their clients and obtain consent before proceeding, which seems like a prudent approach.

Rule 1.4(a)(2) of the American Bar Association (ABA) Model Rules of Professional Conduct addresses attorney-client communication. It provides that a lawyer must “reasonably consult with the client about the means by which the client’s objectives are to be accomplished.” This raises the question of whether the rule encompasses discussing the use of AI. If it does, how much AI assistance should be disclosed to clients? Should the use of ChatGPT to draft a brief be disclosed to the client, while the use of law students to do the same work need not be? These are some of the ethical questions the legal field is currently contemplating.
