AI in the Legal Field

What is AI? 

AI, or Artificial Intelligence, refers to a set of technologies that enable computers to simulate human intelligence and perform tasks that typically require human cognition. Examples of AI applications include ChatGPT, Harvey.AI, and Google Gemini. These systems are designed to think and learn like humans, continually improving as users interact with them. They are trained on large amounts of data using algorithms, which allows them to improve their performance over time without being explicitly programmed for every task. Unlike Google, which returns search results for web queries, ChatGPT generates human-like answers to prompts based on patterns it has learned from examples.

Cost-Benefit Analysis of AI in the Legal Field 

The primary areas where AI is being applied in the legal field include: reviewing documents for discovery, generally referred to as technology-assisted review (TAR); legal research through automated searches of case law and statutes; contract and legal document analysis; proofreading; and document organization.

One of the main reasons AI is used in the legal field is that it saves time. By handling routine tasks such as proofreading, AI frees up attorneys’ time to focus on more complex work. This increased efficiency may also enable law firms to reduce their staff headcount and save money. For example, without AI, proofreading a document can take hours; with AI, it can be completed in less than a minute, with errors identified and corrected instantly. As they say, “time is money.” AI is also valuable because it produces high-quality work. Since AI doesn’t get tired or distracted, it can deliver consistent, polished results. Tasks like document review, proofreading, and legal research can be tedious, but AI handles the initial “heavy lifting,” reducing stress and frustration for attorneys. As one saying goes, “No one said attorneys had to do everything themselves!”

While AI has the potential to save law firms money, I do not think the promised cost reduction always materializes in the way one might anticipate. It may not be worth it for a law firm to adopt AI because the initial investment in the technology can be substantial, ranging from $5,000 for simple models to over $500,000 for complex ones. After purchasing an AI system, law firms then have to train their staff to use it effectively and upgrade the software regularly. “These costs can be substantial and may take time to recoup.” Law firms might consider performing a cost-benefit analysis before deciding whether adopting AI is the right choice for them.
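
As a rough illustration of what such a cost-benefit analysis might look like, here is a minimal break-even sketch. Every figure in it (the license cost, upkeep, hours saved, and hourly value) is a hypothetical placeholder rather than real vendor pricing or firm data.

```python
# Hypothetical break-even sketch for adopting an AI tool at a law firm.
# All figures below are illustrative placeholders, not real vendor pricing or firm data.

license_cost = 50_000.0        # one-time purchase, within the $5,000-$500,000 range cited above
annual_upkeep = 10_000.0       # assumed cost of staff training, upgrades, and support per year
hours_saved_per_month = 40.0   # assumed attorney hours freed up by automating routine tasks
value_per_hour = 150.0         # assumed value the firm places on each freed-up attorney hour

monthly_benefit = hours_saved_per_month * value_per_hour   # value of the time saved each month
monthly_upkeep = annual_upkeep / 12                        # upkeep spread across the year

# Months needed for the net monthly benefit to pay back the initial license cost
months_to_break_even = license_cost / (monthly_benefit - monthly_upkeep)
print(f"Estimated months to recoup the initial investment: {months_to_break_even:.1f}")
```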

Problems With AI in the Legal Field 

One issue with AI applications is that they can perform tasks, such as writing and problem-solving, in ways that closely mimic human work. This makes it difficult for others to determine whether the work was created by AI or by a human. For example, drafting documents now requires less human input because AI can generate them automatically. This raises concerns about trust and reliability, as many people, myself included, may prefer to have a human complete the work rather than rely on AI, out of skepticism about AI’s accuracy and dependability.

A major concern with the shift toward AI use is the potential spread of misinformation. Lawyers who rely on AI to draft documents without thoroughly reviewing what it produces may unknowingly present “hallucinations,” which are made-up or inaccurate pieces of information. This can lead to serious legal errors. Another critical issue is the risk of confidential client information being compromised. When lawyers enter sensitive client data into AI systems to generate legal documents, they are potentially handing that data over to large technology companies. These companies usually prioritize their commercial interests, and without proper regulation, they could misuse client data for profit, potentially compromising client confidentiality, enabling fraud, and threatening the integrity of the judicial system.

A Case Where Lawyers Misused ChatGPT in Court 

As a law student who hopes to become a lawyer one day, I find it concerning to see lawyers facing consequences for using AI. However, it is also understandable that a lawyer who does not use AI carefully may be sanctioned. Two of the first lawyers to use AI in court and encounter “hallucinations” were Steven Schwartz and Peter LoDuca. The lawyers were representing a client in a personal injury lawsuit against an airline. Schwartz used ChatGPT to help prepare a filing, allegedly unaware that the AI had fabricated several case citations. The AI cited at least six cases, including Varghese v. China Southern Airlines and Shaboon v. Egypt Air, which the court found did not exist. The court said the filing contained “bogus judicial decisions with bogus quotes and bogus internal citations.” As a result, the attorneys were fined $5,000. Judge P. Kevin Castel said he might not have punished the attorneys if they had come clean about using ChatGPT to find the purported cases it cited.

AI Limitations in Court

As of February 2024, about 2% of the more than 1,600 United States District and Magistrate Judges had issued 23 standing orders addressing the use of AI. These standing orders mainly prohibit or set guidelines for the use of AI, due to concerns about the technology’s accuracy. Some legal scholars have raised concerns that these orders might discourage attorneys and self-represented litigants from using AI tools. I think that instead of completely banning the use of AI, one possible approach could be requiring attorneys to disclose to the court when they use AI in their work. For example, U.S. District Judge Leslie E. Kobayashi of Hawaii wrote in her order, “The court directs that any party, whether appearing pro se or through counsel, who utilizes any generative artificial intelligence (AI) tool in the preparation of any documents to be filed with the court, must disclose in the document that AI was used and the specific AI tool that was used. The unrepresented party or attorney must further certify in the document that the person has checked the accuracy of any portion of the document drafted by generative AI, including all citations and legal authority.”

Ethicality of AI 

Judicial officers include judges, magistrates, and candidates for judicial office. Under the Model Code of Judicial Conduct (MCJC) Rule 2.5, judicial officers have a responsibility to maintain competence and stay up to date with technology. Similarly, the Model Rules of Professional Conduct (MRPC) Rule 1.1 states that lawyers must provide competent representation to their clients, which includes having technical competence.

The National Center for State Courts (NCSC) emphasizes that both judicial officers and lawyers must have a basic understanding of AI and be aware of the risks associated with using it for research and document drafting. Furthermore, judicial officers must uphold their duty of confidentiality. This means they should be cautious when they or their staff enter sensitive or confidential information into AI systems for legal research or document preparation, ensuring that the information is not retained or misused by the AI platform. I was surprised to find out that while the NCSC provides these guidelines, they are not legally binding, only strongly recommended.

Members of the legal field should also be aware that there may be additional state-specific rules and obligations depending on the state where they practice. For instance, in April 2024, the New York State Bar Association established a Task Force on AI and issued a Report and Recommendations. The New York guidance notes that “attorneys [have a duty] to understand the benefits, not just the risks, of AI in providing competent and ethical legal representation and allows the use of AI tools to be considered in the reasonableness of attorney fees.” In New Jersey, “although lawyers do not have to tell a client every time they use AI, they may have an obligation to disclose the use of AI if the client cannot make an informed decision without knowing.” I think lawyers and judicial officers should be aware of their state’s rules for AI and make sure they are not blindly using it. 

Disclosing the Use of AI 

Some clients have explicitly requested that their lawyers refrain from using AI tools in their legal representation. For clients who do not express their wishes, however, lawyers wrestle with whether they should disclose that they use AI in their case matters. While there is no clear answer, some lawyers have decided to discuss their intended use of AI with their clients and obtain consent before proceeding, which seems like a good approach.

Rule 1.4(a)(2) of the American Bar Association (ABA) Model Rules of Professional Conduct addresses attorney-client communication. It provides that a lawyer must “reasonably consult with the client about the means by which the client’s objectives are to be accomplished.” This raises the question of whether the rule covers the use of AI. If it does, how much AI assistance should be disclosed to clients? For instance, should using ChatGPT to draft a brief require disclosure when assigning the same task to a law student does not? These are some of the ethical questions currently being debated in the legal field.

Parents Using Their Children for Clicks on YouTube to Make Money

With the rise of social media, an increasing number of people have turned to these platforms to earn money. A report from Goldman Sachs estimates that about 50 million individuals earn money as influencers, and this number is expected to grow by 10% to 20% annually through 2028. Alarmingly, some creators are exploiting their children in the process by not giving them fair compensation.

How Do YouTubers Make Money? 

You might wonder how YouTubers make money from their videos. YouTube pays creators for views through ads that appear in their content: the more clicks they get, the more money they make. Advertisers pay YouTube a set rate for every 1,000 ad views; YouTube keeps 45% of that revenue, while creators receive the remaining 55%. To earn money from ads, creators must be eligible for the YouTube Partner Program (YPP), which allows revenue sharing from ads played on the influencer’s content. On average, a YouTuber earns about $0.018 per view, or approximately $18 for every 1,000 views. As of September 30, 2024, the average annual salary for a YouTube channel in the United States is $68,714, with well-known YouTubers earning between $48,500 and $70,500, and top earners making around $89,000. Some successful YouTubers even make millions annually.
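
To make the arithmetic above concrete, here is a minimal sketch using the figures quoted in this post (a 55/45 creator/platform split and roughly $0.018 per view). Actual payouts vary widely by audience, ad format, and content niche, so treat these numbers as illustrative.

```python
# Rough illustration of the revenue figures described above; not an official YouTube formula.

CREATOR_SHARE = 0.55          # creators' share of ad revenue under the YouTube Partner Program
AVG_PAYOUT_PER_VIEW = 0.018   # average creator earnings per view cited above (~$18 per 1,000 views)

def creator_cut(gross_ad_revenue: float) -> float:
    """Creator's portion of ad revenue after YouTube keeps its 45%."""
    return gross_ad_revenue * CREATOR_SHARE

def estimated_earnings(views: int) -> float:
    """Back-of-the-envelope earnings estimate based on the average per-view payout."""
    return views * AVG_PAYOUT_PER_VIEW

print(creator_cut(100.0))             # 55.0    -> $55 of every $100 advertisers spend
print(estimated_earnings(1_000))      # 18.0    -> roughly $18 for 1,000 views
print(estimated_earnings(1_000_000))  # 18000.0 -> roughly $18,000 for 1,000,000 views
```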

In addition to ad revenue, YouTubers can earn through other sources like AdSense, which also pays an average of $18 per 1,000 ad views. However, an ad view generally counts toward payment only if it is watched for the required 30 seconds, and only about 15% of total video views meet that threshold. Many YouTubers also sell merchandise such as t-shirts, sweatshirts, hats, and phone cases. Channels with over 1 million subscribers often have greater opportunities for sponsorships and endorsements. Given the profit potential, parents may be motivated to create YouTube videos that attract significant views. Popular genres featuring kids include unboxing and reviewing new toys, demonstrating how certain toys work, participating in challenges or dares, creating funny or prank videos, and performing trending TikTok dances.

Child Labor Laws Relating to Social Media 

Only a few states have established labor laws specifically for child content creators, with California and Illinois being notable examples. Illinois was one of the first states to implement such regulations, prompted by then-16-year-old Shreya Nallamothu, who brought the issue of parents profiting from their children’s appearances in their content to the attention of Governor J.B. Pritzker. Shreya noted that she “kept seeing cases of exploitation” during her research and felt compelled to act. In a local interview, she explained that her motivation for the change came from seeing “…very young children who may not understand what talking to a camera means, they can’t grasp what a million viewers look like. They don’t comprehend what they’re putting on the internet for profit, nor that it won’t just disappear, and their parents are making money off it.”

As a result, Illinois passed SB 1782, which took effect on July 1, 2024. The law mandates that parent influencers compensate their children for appearing in their content, amending the state’s Child Labor Law to cover children featured in their parents’ or caregivers’ social media. Minors 16 years old and under must be paid 15% of the influencer’s gross earnings if they appear in at least 30% of monetized content, and they are additionally entitled to 50% of the profits based on the time they are featured. The adult responsible for creating the videos is required to set aside the gross earnings in a trust account within 30 days for the child to access when they turn 18. The law also grants children the right to request the deletion of content featuring them, a significant step toward ensuring that children have some control over the content that follows them into adulthood. If the adult fails to comply, the minor can sue for damages upon reaching adulthood. Generally, children who are not residents of Illinois can bring an action under this law as long as the alleged violation occurred within Illinois, the law applies to the case, and the court has jurisdiction over the parent (defendant).
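
As a simple illustration of the compensation rule as summarized above (not the statutory text itself), the sketch below computes what a creator would need to place in trust under those percentages; the earnings figures used are hypothetical.

```python
# Illustrative sketch of the Illinois SB 1782 compensation rule as summarized above;
# this is a simplification for illustration, not the statutory formula.

FEATURE_THRESHOLD = 0.30   # minor must appear in at least 30% of monetized content
SET_ASIDE_RATE = 0.15      # share of gross earnings owed to the minor, per the summary above

def trust_set_aside(gross_earnings: float, feature_share: float) -> float:
    """Amount to place in trust for a minor featured in the creator's monetized content."""
    if feature_share >= FEATURE_THRESHOLD:
        return gross_earnings * SET_ASIDE_RATE
    return 0.0

print(trust_set_aside(10_000.0, 0.40))  # 1500.0 -> owed when featured in 40% of content
print(trust_set_aside(10_000.0, 0.10))  # 0.0    -> below the 30% threshold
```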

California was the second state to pass such a law. The California Content Creator Rights Act, authored by Senator Steve Padilla (D-San Diego), passed in August 2024. It requires influencers who feature minors in at least 30% of their videos to set aside a proportional percentage of their earnings in a trust for the minor to access upon reaching adulthood. The California bill is broader than Illinois’s, but both aim to ensure that minors receive fair financial benefits from the use of their image.

There is hope that other states will look to the Illinois and California laws, which give child influencers fair financial benefits for the use of their image in their parents’ videos, and enact similar legislation. Parents should not exploit their children by profiting off of them.

Can Social Media Platforms Be Held Legally Responsible If Parents Do Not Pay Their Children? 

Social media platforms will probably not be held liable because of Section 230 of the Communications Decency Act of 1996. This law protects social media platforms from being held accountable for users’ actions and instead holds the user who made the post responsible for their own words and actions. For example, if a user posts defamatory content on Instagram, the responsibility lies with the user, not Instagram.  

Currently, the only states that have requirements for parent influencers to compensate their children featured on their social media accounts are Illinois and California. If a parent in these states fails to set aside money for their child as required by law, most likely only the parent will be held liable. It is unlikely that social media platforms will be held responsible for violations by the parent because of Section 230.
