AI in the Legal Field

What is AI? 


AI stands for Artificial Intelligence, which refers to a collection of technologies that allow computers to simulate human intelligence and perform tasks that typically require human cognition. Examples of AI applications include ChatGPT, Harvey.AI, and Google Gemini. These systems are designed to think and learn like humans, continually improving as users interact with them. They are trained on large amounts of data using algorithms, which allows them to improve their performance over time without being explicitly programmed for every task. Unlike Google, which provides search results based on web queries, ChatGPT generates human-like answers to prompts through machine learning, the process by which computers learn from examples.
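For readers curious what "learning from examples" looks like in practice, the following deliberately tiny sketch (written in Python, with entirely made-up data and a greatly oversimplified method) shows the basic idea: instead of a programmer hand-writing a rule, the program derives one from labeled examples and then applies it to new inputs. No real AI system works this simply; the sketch is purely illustrative.

```python
# Toy illustration of "learning from examples" rather than explicit programming.
# The data, labels, and threshold method are hypothetical and greatly simplified.
examples = [
    (120, "short"), (90, "short"),    # word counts labeled "short"
    (3500, "long"), (4200, "long"),   # word counts labeled "long"
]

# "Training": infer a length threshold from the examples instead of hard-coding it.
shorts = [n for n, label in examples if label == "short"]
longs = [n for n, label in examples if label == "long"]
threshold = (sum(shorts) / len(shorts) + sum(longs) / len(longs)) / 2

def classify(word_count: int) -> str:
    """Apply the learned rule to a new, unseen document."""
    return "short" if word_count < threshold else "long"

print(classify(200))   # -> "short"
print(classify(5000))  # -> "long"
```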

Cost-Benefit Analysis of AI in the Legal Field 


The primary areas where AI is being applied in the law include: reviewing documents for discovery, which is generally referred to as technology-assisted review (TAR); legal research through automated searches of case law and statutes; contract and legal document analysis; proofreading; and document organization.

One of the main reasons AI is used in the legal field is that it saves time. By handling routine tasks such as proofreading, AI frees up attorneys’ time to focus on more complex aspects of their work. This increased efficiency may also enable law firms to reduce their staff headcount and save money. For example, without AI, proofreading a document can take hours, but with AI, it can be completed in less than a minute, identifying and correcting errors instantly. As they say, time is money. AI is also valuable because it produces high-quality work. Since AI doesn’t get tired or become distracted, it can deliver error-free, enhanced results. Tasks like document review, proofreading, and legal research can be tedious, but AI handles the initial “heavy lifting,” reducing stress and frustration for attorneys. As one saying goes, “No one said attorneys had to do everything themselves!”

While AI has the potential to save law firms money, I do not think the promised cost reduction always materializes in the way one might anticipate. It may not be worth it for a law firm to use AI because the initial investment in AI technology can be substantial, ranging from $5,000 for simple models to over $500,000 for complex ones. After law firms purchase an AI system, they then have to train their staff to use it effectively and maintain and upgrade the software regularly. “These costs can be substantial and may take time to recoup.” Law firms might consider doing a cost-benefit analysis before determining whether using AI is the right decision for them.

Problems With AI in the Legal Field 

One issue with AI applications is that they can perform tasks, such as writing and problem-solving, in ways that closely mimic human work. This makes it difficult for others to determine whether the work was created by AI or a human. For example, drafting documents now requires less human input because AI can generate these documents automatically. This raises concerns about trust and reliability, as I and others may prefer to have a human complete the work rather than relying on AI, due to skepticism about AI’s accuracy and dependability.

A major concern with the shift toward AI use is the potential spread of misinformation. Lawyers who rely on AI to draft documents without thoroughly reviewing what is produced may unknowingly present “hallucinations,” which are made-up or inaccurate pieces of information. This can potentially lead to serious legal errors. Another critical issue is the risk of confidential client information being compromised. When lawyers put sensitive client data into AI systems to generate legal documents, they are handing that data over to large technology companies. These companies usually prioritize their commercial interests, and without proper regulation, they could misuse client data for profit, potentially compromising client confidentiality, enabling fraud, and threatening the integrity of the judicial system.

A Case Where Lawyers Misused ChatGPT in Court 

As a law student who hopes to become a lawyer one day, it is concerning to see lawyers facing consequences for using AI. However, it is also understandable that if a lawyer misuses AI, they will be sanctioned. Two of the first lawyers to use AI in court and encounter “hallucinations” were Steven Schwartz and Peter LoDuca. The lawyers were representing a client in a personal injury lawsuit against an airline company. Schwartz used ChatGPT to help prepare a filing, allegedly unaware that the AI had fabricated several case citations. Specifically, the AI cited at least six cases, including Varghese v. China Southern Airlines and Shaboon v. Egypt Air, but the court found these cases did not exist. The court said the filing contained “bogus judicial decisions with bogus quotes and bogus internal citations.” As a result, the attorneys were each fined $5,000. Judge P. Kevin Castel said he might not have punished the attorneys if they had come “clean” about using ChatGPT to find the purported cases the AI cited.

AI Limitations in Court


As of February 2024, about 2% of the more than 1,600 United States district and magistrate judges had issued 23 standing orders addressing the use of AI. These standing orders mainly restrict or set guidelines for the use of artificial intelligence, due to concerns about the technology’s accuracy. Some legal scholars have raised concerns that these orders might discourage attorneys and self-represented litigants from using AI tools. I think that instead of completely banning the use of AI, one possible approach could be requiring attorneys to disclose to the court when they use AI for their work. For example, U.S. District Judge Leslie E. Kobayashi of Hawaii wrote in her order, “The court directs that any party, whether appearing pro se or through counsel, who utilizes any generative artificial intelligence (AI) tool in the preparation of any documents to be filed with the court, must disclose in the document that AI was used and the specific AI tool that was used. The unrepresented party or attorney must further certify in the document that the person has checked the accuracy of any portion of the document drafted by generative AI, including all citations and legal authority.”

Ethicality of AI 

Judicial officers include judges, magistrates, and candidates for judicial office. Under the Model Code of Judicial Conduct (MCJC) Rule 2.5, judicial officers have a responsibility to maintain competence and stay up to date with technology. Similarly, the Model Rules of Professional Conduct (MRPC) Rule 1.1 states that lawyers must provide competent representation to their clients, which includes having technical competence.

The National Counterintelligence and Security Center (NCSC) emphasizes that both judicial officers and lawyers must have a basic understanding of AI and be aware of the risks associated with using AI for research and document drafting. Furthermore, judicial officers must uphold their duty of confidentiality. This means they should be cautious when they or their staff enter sensitive or confidential information into AI systems for legal research or document preparation, ensuring that the information is not retained or misused by the AI platform. I was surprised to find out that while the NCSC provides these security guidelines, they are not legally binding, but are strongly recommended.

Members of the legal field should also be aware that there may be state-specific rules and obligations depending on the state where they practice. For instance, the New York State Bar Association established a Task Force on AI and issued a Report and Recommendations in April 2024. The New York guidance notes that “attorneys [have a duty] to understand the benefits, not just the risks, of AI in providing competent and ethical legal representation and allows the use of AI tools to be considered in the reasonableness of attorney fees.” In New Jersey, “although lawyers do not have to tell a client every time they use AI, they may have an obligation to disclose the use of AI if the client cannot make an informed decision without knowing.” I think lawyers and judicial officers should be aware of their state’s rules for AI and make sure they are not blindly using it.

Disclosing the Use of AI 

Some clients have explicitly requested that their lawyers refrain from using AI tools in their legal representation. For clients who do not express their wishes, however, lawyers wrestle with the question of whether they should inform them that AI is being used in their matters. While there is no clear answer, some lawyers have decided to discuss their intended use of AI with their clients and obtain consent before doing so, which seems like a sensible approach.


Rule 1.4(a)(2) of the American Bar Association (ABA) Model Rules of Professional Conduct addresses attorney-client communication. It provides that a lawyer must “reasonably consult with the client about the means by which the client’s objectives are to be accomplished.” This raises the question of whether the rule encompasses discussing the use of AI. If it does, then how much AI assistance should be disclosed to clients? Should the use of ChatGPT to draft a brief be disclosed to the client when the use of law students to do the same work need not be? These are some of the ethical questions the legal field is currently contemplating.

The New Face of Misinformation: Deepfakes and Their Legal Challenges

What if I told you that the video of a celebrity endorsing a product or a politician delivering a shocking confession wasn’t real? That’s the unsettling reality deepfakes have created, where seeing and hearing are no longer believing. Deepfakes are a double-edged sword: while they can be used for entertainment and creative expression, they also pose significant risks to privacy, spread misinformation, and enable manipulation. In a 2022 iProov study, 71% of global respondents were unaware of what deepfakes were, yet 57% believed they could distinguish between real videos and deepfakes. This gap in knowledge is alarming, considering deepfake technology can create entirely convincing and fictional representations of public figures.

What Are Deepfakes, and Why Are They Problematic?

A deepfake is a highly realistic image, audio, or video generated by artificial intelligence, through deep learning models, to make someone appear as if they are saying or doing things they never did. Think face swaps, cloned voices, or AI-generated speeches. Deepfakes have been used to defame individuals, including public figures, by falsely attributing statements to them, and have been used to create non-consensual explicit content, such as deepfake pornography.

A prime example is when sexually explicit deepfake images of Taylor Swift flooded social media in January 2024, forcing platforms like X (formerly Twitter) to take drastic action by removing content and banning accounts involved. This raised questions about social media companies’ responsibility in controlling the spread of harmful AI-generated content.

Legal Ramifications: Are Deepfakes Protected by the First Amendment?

Deepfakes exist in a legal gray area. Defamation and copyright law can sometimes address the misuse of AI, but they do not fully account for the complexities of AI-generated content, given its rapidly evolving nature. This creates tension with constitutional rights, particularly the First Amendment, which protects free speech. Courts now face difficult decisions, such as whether and how to punish deepfake creators, where to draw the line between free speech and harmful content, and how to address the legal ambiguity that has complicated federal regulation, prompting some states to take action.

The DEEPFAKES Accountability Act: A Federal Response

While no comprehensive federal regulation addressing deepfakes currently exists, the DEEPFAKES Accountability Act provides a framework for future legislation. Initially introduced in 2019 and renewed by Representative Yvette Clarke in 2023, the bill requires clear disclosures for AI-generated content to inform viewers, gives victims the right to seek damages, and introduces criminal penalties for malicious use of deepfakes. It specifically targets harmful applications of deepfakes, such as spreading disinformation, manipulating elections, and creating non-consensual explicit content.

If enacted, this law would empower agencies like the FTC and DOJ to enforce these regulations. However, achieving a balance between protecting victims and safeguarding free speech rights presents a challenge. Courts would likely need to evaluate each case individually, carefully weighing the totality of the circumstances surrounding the use of deepfake technology, which also poses significant challenges within legal practice, particularly regarding the authenticity of evidence.

The “Deepfake Defense” – How Deepfakes Impact the Legal System

Deepfakes present evidentiary challenges in courtrooms. Lawyers and defendants can now argue that incriminating videos are fabricated. In fact, recent high-profile cases have seen defendants cast doubt on the authenticity of video evidence by claiming it was AI-manipulated. This concept has been coined the “deepfake defense” by Judge Herbert B. Dixon Jr., who discusses its implications in his article, “The Deepfake Defense: An Evidentiary Conundrum.” Judge Dixon notes that deepfakes are videos created or altered with the aid of AI, leading to situations where it becomes difficult for courts to determine the authenticity of evidence presented.

This defense has emerged in high-profile cases. Attorneys for defendants charged with storming the Capitol on January 6, 2021, argued that the jury could not trust the video evidence displaying their clients at the riot, as there was no assurance that the footage was real or had not been altered. Experts have proposed amendments to the Federal Rules of Evidence to clarify the responsibilities of parties in authenticating evidence. Without a consistent approach, the risk of misjudgment remains, highlighting the urgent need for legal systems to adapt to the modern realities of AI technology. Prolonged trial processes result from these challenges, ultimately undermining judicial economy and wasting valuable resources. Courts are increasingly forced to rely on experts to authenticate digital evidence, analyzing factors such as eye movements, lighting, and speech patterns. However, as deepfakes become more advanced, even these experts struggle to detect them, raising significant concerns about how the judicial system will adapt.

What’s Next: A New Era of Regulation and Responsibility?

Deepfakes force us to rethink the role of social media platforms in preventing the spread of misinformation. Platforms like Instagram, X, and Facebook have started labeling manipulated media, but should these warnings be mandatory? Should platforms be held liable for the spread of deepfakes? These are critical questions that legislators and tech companies must address. Moving forward, balancing regulation and innovation will be key. Policymakers must develop laws that protect victims without stifling technological progress. At the same time, raising public awareness about the risks of deepfakes is essential.

In a world where deepfake technology is becoming increasingly accessible, trust in digital media is more fragile than ever. Laws and policies are starting to catch up, but it will take time to fully address the legal challenges deepfakes present. Until then, the best defense is an informed public, aware of the power and pitfalls of artificial intelligence.

 

Navigating Torts in the Digital Age: When Social Media Conduct Leads to Legal Claims

Traditional tort law was developed in a world of face-to-face interactions. It was developed to provide compensation for harm, deter wrongful conduct, and ensure accountability for violations of individual rights. However, the digital age has created new scenarios that often do not neatly fit within existing legal frameworks. This blog post explores how conduct on social media, whether intentional or accidental, can lead to tort claims such as defamation, right of publicity, or even battery, and how courts might apply tort law, sometimes in unusual ways, to address these modern challenges.

Torts and Social Media: Where the Two Intersect

Some traditional tort claims, like defamation, may seem to naturally extend to social media. However, at the beginning of the social media age, courts struggled with how to address wrongful conduct on social media that harmed individuals, requiring creative legal thinking to apply existing laws to the digital world.

  1. Battery in the Digital Space: Eichenwald v. Rivello

One of the most compelling cases that pushes the boundaries of tort law is Eichenwald v. Rivello. The parties are Kurt Eichenwald, a journalist with epilepsy, and John Rivello, a social media user. Eichenwald publicly disclosed his epilepsy and happened to be a frequent critic of certain political and social issues. Rivello —likely motivated by animosity toward Eichenwald, due to his public commentary on political issues— sent Eichenwald a tweet with a GIF containing flashing strobe lights designed to trigger his epilepsy, with the accompanying message, “You deserve a seizure for your post.” When Eichenwald opened his Twitter notifications, he suffered a seizure as a result of the GIF. This case posed a novel issue of law at the time: can sending a harmful image online constitute physical contact?


Although battery traditionally requires physical contact, the Court in Eichenwald held that Rivello’s conduct met the elements of battery: the strobing GIF made indirect contact with Eichenwald’s cornea, undeniably causing him harm. In this case, the Court had to stretch traditional tort principles to accommodate claims arising from digital conduct.

  2. Defamation and the Viral Nature of Social Media

Another tort commonly seen in social media cases is defamation. Because statements can be shared quickly with a wide audience, defamation has become the primary claim arising out of social media interactions. One situation we can analyze under this claim is the ‘Central Park Karen’ incident. In 2020, a bystander recorded Amy Cooper’s altercation with an African American birdwatcher and shared it online, where it went viral. Following the incident, her employer, Franklin Templeton, made a public statement condemning racism and Cooper was fired.


Cooper sued for defamation, arguing that the viral video and public statements harmed her reputation. Unfortunately for her, the Court dismissed her claim, reasoning that the employer’s statements were opinions, which are protected under the First Amendment. The controversy serves as a cautionary tale, warning people not only about their online behavior but also about their actions in public. Behavior in public is now subject to recordings that can spread like wildfire. Cooper herself writes that the video still haunts her to this day.

As exemplified in the dismissal of Cooper’s case, the key to defamation claims is distinguishing between factual statements, false statements, and opinions—especially in the context of social media, where free-flowing ideas and opinions can cause significant reputational harm. In the social media age, analyzing defamation claims requires balancing free speech with the protection of individuals’ reputations.

  3. Cancel Culture and Tortious Interference with Business Relations

Amy Cooper was what one would call “canceled,” though in the real world rather than in the context of social media. The rise of cancel culture has posed a threat to influencers and public figures, who often rely on brand deals and partnerships for their livelihoods. In many controversies, the “cancellation” is the result of fair criticism of the public figure. But what happens when it is the result of false or harmful misinformation spread online? While defamation may be one avenue, tortious interference with business relations might also come into play.

Disclaimer: The example tweet referenced here (created with Tweetgen) is fake and was not actually posted by NASA; it is used purely for illustrative purposes.

Imagine an influencer who becomes the target of a viral campaign based on photoshopped offensive tweets. As the “screenshots” roam the internet, the influencer’s followers drop, brand deals are canceled, and new partnerships become difficult to secure. Since the false information led to a disruption of business relationships, this may be a scenario giving rise to a claim for tortious interference, especially if the false information was created maliciously to target the influencer’s success.

Tortious interference claims require showing that a third party intentionally caused harm to the plaintiff’s business relationships. In the context of social media, competitors or malicious individuals could spread misinformation that causes financial loss.

The Future of Torts and Social Media

As social media continues to influence how we communicate, courts face the challenge of adapting traditional tort law to address new types of harm in the digital age. While many no longer consider social media a “new” concept, one can imagine that courts will have to similarly adapt old law to new technologies, such as Artificial Intelligence. Cases like Eichenwald v. Rivello demonstrate how legal frameworks can be stretched to accommodate harm caused by online conduct. Claims like defamation, tortious interference, and right of publicity highlight the real consequences of social media scandals. As we navigate social media spaces, it’s important for individuals—whether influencers, content creators, or casual users—to recognize when their actions cross the line into actionable torts. Understanding the potential legal consequences of behavior online, and even in public, is essential for avoiding disputes and protecting rights in this rapidly changing environment.

 

 

Meta AI: Innovation, but at what cost?

Artificial Intelligence will be the cutting edge of technology for decades to come, and to this point, nobody knows its complete capabilities. More recent advancements include social media companies developing their own AI systems to enhance the user experience, allowing users to generate text and images, get assistance navigating the app, and more. So what’s the issue? Well, companies like Meta are creating their own AI for their platforms as open-source models, which can pose significant privacy risks to their users.

What is Meta & Meta AI?

Meta, formerly known as Facebook, Inc., rebranded to encompass a variety of platforms under one corporation, including well-known social networks such as Instagram and WhatsApp, which connect millions of people around the globe. Meta released its AI platform, “Meta AI,” in April 2024; it can answer questions, generate photos, search Instagram Reels, provide emotional support, and assist with tasks like solving schoolwork problems, writing emails, and more.


Open-Source vs. Closed-Source

Meta has established that its AI is an open-source model, but what’s the difference? AI models can be either open-source or closed-source. An open-source AI model means that the data and software are publicly available to anyone. By sharing code and data, developers can learn from each other and continue to innovate on the AI model. Users of an open-source AI model have the ability to examine the AI systems they use, which can promote transparency. However, there can be difficulties in regulating bad actors.

Closed-source models keep their data and software secret, restricted to their owners and developers. By keeping their code and data secret, closed-source AI companies can protect their trade secrets and prevent unauthorized access or copying. Closed-source AI, however, tends to be less innovative, as third-party developers cannot contribute to future technological advancements of the model. It is also difficult for users to examine and audit the model because they do not have access to the inputted data and the software.

The Cost:

To train this open-source model, Meta used a variety of user data. What data exactly is Meta taking from you? Some of the more controversial data includes: content that users create, messages users send and receive that aren’t end-to-end encrypted, users’ engagement with posts, purchases users make through Meta, users’ contact info, device information, GPS location, IP address, and cookie data. According to Meta’s privacy policy, all of this is permitted for its use. Meta also discloses in its privacy policy that “Meta may share certain information about you that is processed when using the AI’s with third parties who help us provide you with more relevant or useful responses.” This includes personal information.

By committing to open-sourcing its AI, Meta poses a significant privacy risk to its users. While Meta has already noted that it may share personal information with third parties in certain situations, outside developers also have the opportunity to expose vulnerabilities within the algorithm by reverse-engineering the code to extract the data the algorithm was trained on, which in Meta’s case can include the personal information of the users used to train the model. Additionally, third parties will now have access to a wide variety of consumer information without consumers giving them direct consent. Companies can then use this information to their commercial advantage.

Meta has stated that it has taken exemplary steps to ensure the protection of its users’ data from third parties. This includes the development of third-party oversight and management programs that mitigate risk and implement what Meta believes to be the necessary safeguards. Of note, Facebook has been breached on more than one occasion, most notably in the Cambridge Analytica scandal, in which Cambridge Analytica harvested the personal information of more than 10 million Facebook users for voter profiling and targeting.

Innovative:

Upon release, there were privacy concerns among users since Meta’s AI model was open-sourced. Mark Zuckerberg, CEO of Meta, issued a public statement highlighting the benefits of the AI model being open-source, summarized as follows:

  1. Open-source AI is good for developers because it gives them the technological freedom to control the software, and open-source models are developing at a faster rate than closed models. 
  2. The model will allow Meta to continue to be competitive, allowing it to spend more money on research. 
  3. Being open-source gives the world an opportunity for economic growth and better security for everyone because it will allow Meta to be at the forefront of AI advancement.

Effectively, Meta’s open-source model is beneficial for ensuring consistent technological achievement for the company.


What Users Can Do:

In reality, it is difficult to keep bad actors from abusing open-source AI. Therefore, governmental action is needed to protect users’ personal data from being exploited. Recently, 12 states have taken the initiative to protect users. For example, the State of California amended the CCPA to protect users’ personal information from being used to train AI models, requiring that users affirmatively authorize the use of their information; otherwise, it is prohibited. As for the rest of the nation, there is little to no state or federal regulation regarding users’ privacy. The American Data Privacy and Protection Act failed to pass a congressional vote, leaving millions of people defenseless.

For users who are looking to stop Meta from using their data, there is no opt-out button in the United States. However, according to Meta, depending on a user’s settings, a photo or post can be kept out of the model by making it private. Unfortunately, this is not retroactive, and previously collected data will not be removed from the model.

While Meta looks to be at the forefront of AI, its open-source model poses serious security risks for its users due to a lack of regulation and questionable protections.

Sport Regulation of Legal Matters with Social Media

The internet is becoming more accessible to individuals throughout the world, and with that access, the population on social media platforms is growing. Platforms such as Facebook (Meta), X (Twitter), Snapchat, and YouTube provide an opportunity for engagement between consumers and producers.

 

Leagues such as the MLB, NFL, La Liga, and more have created accounts, establishing presences in the social media world where they may interact with their fans (consumers) and their athletes (employees).

Why Social Media Matters in Sports.

As the presence on social media platforms continues to grow, so does the need for businesses to market themselves on those platforms. Therefore, leagues such as the MLB have created policies for their employees and athletes to follow. The MLB is a private organization even though it is spread across the United States. Sports leagues are usually private organizations headquartered in a specific state; the MLB’s New York headquarters is where employees handle league matters. These organizations may create their own policies or guidelines, which they may enforce internally. Even though organizations such as the MLB may put their own policies in place, they must abide by federal and state labor, corporate, criminal, and other types of law. The policies that these leagues provide can give them more power to ensure that they are abiding by the laws necessary to continue operating on a national, and at times international, scale.

MLB’s Management of Social Media. 

MLB’s social media policies are prefaced by this paragraph explaining who within the MLB establishes the policies: “Consistent with the authority vested in the Commissioner by the Major League Constitution (“MLC”) and the Major League Baseball Interactive Media Rights Agreement (“IMRA”), the Commissioner has implemented the following policy regarding the use of social media by individuals affiliated with Major League Baseball and the 30 Clubs. Nothing contained in this policy is intended to restrict or otherwise alter any of the rights otherwise granted by the IMRA.” To enforce power and regulation over social media, the league has referred to its Interactive Media Rights Agreement and its commissioner. These organizations generally have an elected commissioner who serves the organization and helps with executive managerial decisions.

There is a list of 10 explicit types of conduct related to social media that the MLB prohibits (a few rules that stand out are listed below):

1. Displaying or transmitting Content via Social Media in a manner that reasonably could be construed as an official public communication of any MLB Entity or attributed to any MLB Entity.

2. Using an MLB Entity’s logo, mark, or written, photographic, video, or audio property in any way that might indicate an MLB Entity’s approval of Content, create confusion as to attribution, or jeopardize an MLB Entity’s legal rights concerning a logo or mark.

3. Linking to the website of any MLB Entity on any Social Media outlet in any way that might indicate an MLB Entity’s approval of Content or create confusion as to attribution.

NOTE: Only Covered Individuals who are authorized by the Senior Vice President, Public Relations of the Commissioner’s Office to use Social Media on behalf of an MLB Entity and display Content on Social Media in that capacity are exempt from Sections 1, 2 and 3 of this policy.

5. Displaying or transmitting Content that reasonably could be construed as condoning the use of any substance prohibited by the Major or Minor League Drug Programs, or the Commissioner’s Drug Program.

7. Displaying or transmitting Content that reasonably could be viewed as discriminatory, bullying, and/or harassing based on race, color, ancestry, sex, sexual orientation, national origin, age, disability, religion, or other categories protected by law and/or which would not be permitted in the workplace, including, but not limited to, Content that could contribute to a hostile work environment (e.g., slurs, obscenities, stereotypes) or reasonably could be viewed as retaliatory.

10. Displaying or transmitting Content that violates applicable local, state or federal law or regulations.

 

Notice that these policies apply to the organization as a whole, but there are exceptions for individuals whose role for the league involves social media. Those authorized workers are not bound by rules 1-3, but employees and athletes such as Ohtani are bound.

Mizuhara/Ohtani Gambling Situation.

One of the biggest stories in the MLB this year was the illegal gambling situation involving Ohtani and his interpreter. Under the MLB’s policies, gambling is strictly prohibited regardless of whether it is legal in the state where the athlete resides.

California has yet to legalize sports betting. Therefore, to place a bet there, one would have to do so through a bookie rather than an application such as FanDuel, or go to a tribal location where gambling is administered.

Per the commissioner’s orders, the MLB launched an internal investigation into the matter, as the situation involves violations of its policies and even criminal acts. The MLB may impose whatever punishment it finds fit at the end of its investigation. However, the MLB’s Department of Investigations (DOI) can only do so much with the limited resources the league provides it to conduct investigations.

However, Ohtani was found to be a victim, and a federal investigation was launched. The complaint lists many counts of alleged bank fraud. In conducting the investigation, a forensic review of Mizuhara’s phone and texts was performed, as well as of the suspected bookmakers’ devices. There was evidence of the individuals discussing ways to bet, how to earn and pay debts, and wiring money from banks in excessive amounts.

What Does This All Mean?

The law and its administration are beginning to adapt to and acknowledge the presence of the internet. It is common for phones and internet communications to be seized as evidence in cases. The internet is essential to modern life. As a society, we must determine whether we want limits set, given that we are effectively required to use the internet to live, and whether we want to set limits on speech depending on employment.

Parents Using Their Children for Clicks on YouTube to Make Money

With the rise of social media, an increasing number of people have turned to these platforms to earn money. A report from Goldman Sachs reveals that 50 million individuals are making a living as influencers, and this number is expected to grow by 10% to 20% annually through 2028. Alarmingly, some creators are exploiting their children in the process by not giving them fair compensation.


How Do YouTubers Make Money? 

You might wonder how YouTubers make money from their videos. YouTube pays creators for views through ads that appear in their content; the more clicks they get, the more money they make. Advertisers pay YouTube a set rate for every 1,000 ad views, and YouTube keeps 45% of the revenue while creators receive the remaining 55%. To earn money from ads, creators must be eligible for the YouTube Partner Program (YPP), which allows revenue sharing from ads played on the influencer’s content. On average, a YouTuber earns about $0.018 per view, which totals approximately $18 for every 1,000 views. As of September 30, 2024, the average annual salary for a YouTube channel in the United States is $68,714, with well-known YouTubers earning between $48,500 and $70,500, and top earners making around $89,000. Some successful YouTubers even make millions annually.
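To make the arithmetic above concrete, here is a minimal sketch, in Python, of how those figures combine; the 55% creator share and the roughly $0.018-per-view rule of thumb come from the numbers quoted above, while the function names and structure are purely illustrative.

```python
def creator_ad_earnings(ad_views: int, advertiser_rate_per_1000: float,
                        creator_share: float = 0.55) -> float:
    """Creator's cut of what advertisers pay for a given number of ad views;
    YouTube keeps the remaining 45% under the split described above."""
    gross = (ad_views / 1000) * advertiser_rate_per_1000
    return gross * creator_share

def earnings_from_views(views: int, per_view_rate: float = 0.018) -> float:
    """Rule-of-thumb estimate: about $0.018 per view, or roughly $18 per 1,000 views."""
    return views * per_view_rate

print(round(earnings_from_views(1_000), 2))      # ~18.0
print(round(earnings_from_views(1_000_000), 2))  # ~18,000.0
```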

In addition to ad revenue, YouTubers can earn through other sources like AdSense, which also pays an average of $18 per 1,000 ad views. However, only 15% of total video views count toward the required 30 seconds of view time for the ad to qualify for payment. Many YouTubers also sell merchandise such as t-shirts, sweatshirts, hats, and phone cases. Channels with over 1 million subscribers often have greater opportunities for sponsorships and endorsements. Given the profit potential, parents may be motivated to create YouTube videos that attract significant views. Popular genres that feature kids include videos unboxing and reviewing new toys, demonstrating how certain toys work, participating in challenges or dares, creating funny or trick videos, and engaging in trending TikTok dances. 


Child Labor Laws Relating to Social Media 

Only a few states have established labor laws specifically for child content creators, with California and Illinois being notable examples. Illinois was one of the first states to implement such regulations, spurred by 16-year-old Shreya Nallamothu, who brought the issue of parents profiting from their children’s appearances in their content to the attention of Governor J.B. Pritzker. Shreya noted that she “kept seeing cases of exploitation” during her research and felt compelled to act. In a local interview, she explained that her motivation for the change was triggered by “…very young children who may not understand what talking to a camera means, they can’t grasp what a million viewers look like. They don’t comprehend what they’re putting on the internet for profit, nor that it won’t just disappear, and their parents are making money off it.”

As a result, Illinois passed SB 1782, which took effect on July 1, 2024. This law mandates that parent influencers compensate their children for appearing in their content. It amends the state’s Child Labor Law to include children featured in their parents’ or caregivers’ social media. Minors 16 years old and under must be paid 15% of the influencer’s gross earnings if they appear in at least 30% of monetized content. Additionally, they are entitled to 50% of the profits based on the time they are featured. The adult responsible for creating the videos is required to set aside those earnings in a trust account within 30 days for the child to access when they turn 18. The law also grants children the right to request the deletion of content featuring them. This part of the legislation is a significant step in ensuring that children have some control over the content that follows them into adulthood. If the adult fails to comply, the minor can sue for damages once they become an adult. Generally, children who are not residents of Illinois can bring an action under this law as long as the alleged violation occurred within Illinois, the law applies to the case, and the court has jurisdiction over the parent (defendant).
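As a rough illustration of how the 30% appearance threshold and percentage set-aside described above might be calculated, here is a minimal Python sketch. It follows this post’s simplified summary of the law (15% of gross earnings when a minor appears in at least 30% of monetized content), not the statute itself, and every name and number in the code is illustrative rather than authoritative.

```python
def required_trust_set_aside(gross_earnings: float,
                             videos_featuring_minor: int,
                             total_monetized_videos: int,
                             set_aside_rate: float = 0.15,
                             appearance_threshold: float = 0.30) -> float:
    """Simplified reading of the rule summarized above: if the minor appears in
    at least 30% of monetized content, 15% of gross earnings goes into trust.
    Returns the dollar amount to set aside (0 if the threshold isn't met)."""
    if total_monetized_videos == 0:
        return 0.0
    appearance_share = videos_featuring_minor / total_monetized_videos
    if appearance_share >= appearance_threshold:
        return gross_earnings * set_aside_rate
    return 0.0

# Example: $10,000 in gross earnings, child appears in 12 of 30 monetized videos (40%).
print(required_trust_set_aside(10_000, 12, 30))  # 1500.0
```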

California was the second state to pass a law on this. The California Content Creator Rights Act was authored by Senator Steve Padilla (D-San Diego) and passed in August 2024. This law requires influencers who feature minors in at least 30% of their videos to set aside a proportional percentage of their earnings in a trust for the minor to access upon reaching adulthood. This bill is broader than Illinois’s bill, but they both aim to ensure that creators who are minors receive fair financial benefits from the use of their image. 

There is hope that other states will look to the Illinois and California laws, which give child influencers fair financial benefits for the use of their image in their parents’ videos, and create similar laws of their own. Parents should not be exploiting their kids by making a profit off of them.


Can Social Media Platforms Be Held Legally Responsible If Parents Do Not Pay Their Children? 

Social media platforms will probably not be held liable because of Section 230 of the Communications Decency Act of 1996. This law protects social media platforms from being held accountable for users’ actions and instead holds the user who made the post responsible for their own words and actions. For example, if a user posts defamatory content on Instagram, the responsibility lies with the user, not Instagram.  

Currently, the only states that have requirements for parent influencers to compensate their children featured on their social media accounts are Illinois and California. If a parent in these states fails to set aside money for their child as required by law, most likely only the parent will be held liable. It is unlikely that social media platforms will be held responsible for violations by the parent because of Section 230.

Privacy Please: Privacy Law, Social Media Regulation and the Evolving Privacy Landscape in the US

Social media regulation is a touchy subject in the United States. Congress and the White House have proposed, advocated, and voted on various bills aimed at protecting people from data misuse and misappropriation, misinformation, harms suffered by children, and the implications of vast data collection. Some of the most potent concerns about social media stem from the use and misuse of information by the platforms, from the method of collection to the notice of collection and the use of collected information. Efforts to pass a bill regulating social media have been frustrated, primarily by the First Amendment right to free speech. Congress has thus far failed to enact meaningful regulation of social media platforms.

The way forward may well be through privacy law. Privacy laws give people some right to control their own personhood, including their data, their right to be left alone, and how and when people see and view them. Privacy laws originated in their current form in the late 1800s, with the impetus being freedom from constant surveillance by paparazzi and reporters and the right to control one’s own personal information. As technology evolved, our understanding of privacy rights grew to encompass rights in our likeness, our reputation, and our data. Current US privacy laws do not directly address social media, and a struggle is currently playing out between the vast data collection practices of the platforms, immunity for platforms under Section 230, and private rights of privacy for users.

There is very little federal privacy law, and that which does exist is narrowly tailored to specific purposes and circumstances in the form of specific bills. Some states have enacted their own privacy law schemes, with California at the forefront and Virginia, Colorado, Connecticut, and Utah following in its footsteps. In the absence of a comprehensive federal scheme, privacy law is often judge-made and offers several private rights of action for a person whose right to be left alone has been invaded in some way. These are tort actions available for one person to bring against another for a violation of their right to privacy.

Privacy Law Introduction

Privacy law policy in the United States is premised on three fundamental personal rights to privacy:

  1. Physical right to privacy: the right to control your own information.
  2. Privacy of decisions: such as decisions about sexuality, health, and child-rearing. These are the constitutional rights to privacy, typically not about information but about an act that flows from the decision.
  3. Proprietary privacy: the ability to protect your information from being misused by others in a proprietary sense.

Privacy Torts

Privacy law, as it concerns the individual, gives rise to four separate tort causes of action for invasion of privacy:

  1. Intrusion upon seclusion: Privacy law provides a tort cause of action for intrusion upon seclusion when someone intentionally intrudes upon the reasonable expectation of seclusion of another, physically or otherwise, and the intrusion is objectively highly offensive.
  2. Publication of private facts: One gives publicity to a matter concerning the private life of another that is not of legitimate concern to the public, and the matter publicized would be objectively highly offensive. The First Amendment provides a strong defense for publication of truthful matters when they are considered newsworthy.
  3. False light: One who gives publicity to a matter concerning another places the other before the public in a false light when the false light in which the other was placed would be objectively highly offensive and the actor had knowledge of or acted in reckless disregard as to the falsity of the publicized matter and the false light in which the other would be placed.
  4. Appropriation of name and likeness: Appropriation of one’s name or likeness to the defendant’s own use or benefit. There is no appropriation when a person’s picture is used to illustrate a non-commercial, newsworthy article. This is usually commercial in nature but need not be. The appropriation could be of “identity”; it need not be misappropriation of name, but could be of the reputation, prestige, social or commercial standing, public interest, or other value attached to the plaintiff’s likeness.

These private rights of action are currently unavailable for use against social media platforms because of Section 230 of the Communications Decency Act, which provides broad immunity to online providers for posts on their platforms. Section 230 prevents any of the privacy torts from being raised against social media platforms.

The Federal Trade Commission (FTC) and Social Media

Privacy law can implicate social media platforms when their practices become unfair or deceptive to the public, through investigation by the Federal Trade Commission (FTC). The FTC is the only federal agency with both consumer protection and competition jurisdiction in broad sectors of the economy. The FTC investigates business practices that are unfair or deceptive. The FTC Act, 15 U.S.C. § 45, prohibits “unfair or deceptive acts or practices in or affecting commerce” and grants the FTC broad jurisdiction over the privacy practices of businesses. A trade practice is unfair if it causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and is not outweighed by countervailing benefits to consumers or competition. A deceptive act or practice is a material representation, omission, or practice that is likely to mislead the consumer acting reasonably in the circumstances, to the consumer’s detriment.

Critically, there is no private right of action in FTC enforcement. The FTC has no ability to impose fines for Section 5 violations but can provide injunctive relief. By design, the FTC has very limited rulemaking authority and looks to consent decrees and procedural, long-lasting relief as an ideal remedy. The FTC pursues several types of misleading or deceptive policies and practices that implicate social media platforms: notice and choice paradigms, broken promises, retroactive policy changes, inadequate notice, and inadequate security measures. Its primary objective is to negotiate a settlement where the company submits to certain measures of control and oversight by the FTC for a certain period of time. Violations of the agreements could yield additional consequences, including steep fines and vulnerability to class action lawsuits.

Relating to social media platforms, the FTC has investigated misleading terms and conditions and violations of platforms’ own policies. In In re Snapchat, the platform claimed that users’ posted information disappeared completely after a certain period of time; however, through third-party apps and manipulation of users’ posts off of the platform, posts could be retained. The FTC and Snapchat settled, through a consent decree, subjecting Snapchat to FTC oversight for 20 years.

The FTC has also investigated Facebook for violations of its privacy policy. Facebook was ordered to pay a $5 billion penalty and to submit to new restrictions and a modified corporate structure that will hold the company accountable for the decisions it makes about its users’ privacy, to settle FTC charges that it violated a 2012 agreement with the agency.

Unfortunately, none of these measures directly give individuals more power over their own privacy. Nor do these policies and processes give individuals any right to hold platforms responsible for being misled by algorithms using their data, or for intrusion into their privacy by collecting data without allowing an opt-out.

Some of the most harmful social media practices today relate to personal privacy. Some examples include the collection of personal data, the selling and dissemination of data through the use of algorithms designed to subtly manipulate our pocketbooks and tastes, collection and use of data belonging to children, and the design of social media sites to be more addictive- all in service of the goal of commercialization of data.

No comprehensive federal privacy scheme currently exists. Previous bills on privacy have been few and narrowly tailored to relatively specific circumstances and topics, like healthcare and medical data protection under HIPAA, protection of data surrounding video rentals as in the Video Privacy Protection Act, and narrow protection for children’s data in the Children’s Online Privacy Protection Act. All of these schemes are outdated and fall short of meeting the immediate need for broad protection of the widely collected and broadly utilized data from social media.

Current Bills on Privacy

Upon request from some of the biggest platforms, outcry from the public, and the White House’s request for federal privacy regulation, Congress appears poised to act. The 118th Congress has pushed privacy law as a priority this term by introducing several bills related to social media privacy. There are at least ten bills currently pending between the House and the Senate, addressing a variety of issues and concerns, from children’s data privacy to minimum ages for use and the designation of a new agency to monitor some aspects of privacy.

S. 744 – The Data Care Act of 2023 aims to protect social media users’ data privacy by imposing fiduciary duties on the platforms. The original iteration of the bill was introduced in 2021 and failed to receive a vote. It was re-introduced in March of 2023 and is currently pending. Under the act, social media platforms would have the duty to reasonably secure users’ data from access, refrain from using the data in a way that could foreseeably “benefit the online service provider to the detriment of the end user,” and prevent disclosure of users’ data unless the receiving party is also bound by these duties. The bill authorizes the FTC and certain state officials to take enforcement actions upon breach of those duties. The states would be permitted to take their own legal action against companies for privacy violations. The bill would also allow the FTC to intervene in the enforcement efforts by imposing fines for violations.

H.R. 2701 – Perhaps the most comprehensive piece of legislation on the House floor is the Online Privacy Act. In 2023, the bill was reintroduced by Democrat Anna Eshoo after an earlier version of the bill failed to receive a vote and died in Congress. The Online Privacy Act aims to protect users by providing individuals rights relating to the privacy of their personal information. The bill would also provide privacy and security requirements for the treatment of personal information. To accomplish this, the bill would establish a new agency, the Digital Privacy Agency, which would be responsible for enforcement of the rights and requirements. The new individual rights in privacy are broad and include the rights of access, correction, deletion, human review of automated decisions, individual autonomy, the right to be informed, and the right to impermanence, among others. This would be the most comprehensive plan to date. The establishment of a new agency with a task specific to the administration and enforcement of privacy laws would be incredibly powerful. The creation of this agency would be valuable irrespective of whether this bill is passed.

H.R. 821 – The Social Media Child Protection Act is a sister bill to one by a similar name which originated in the Senate. This bill aims to protect children from the harms of social media by limiting children’s access to it. Under the bill, social media platforms would be required to verify the age of every user before they access the platform, either through submission of a valid identity document or through another reasonable verification method. A social media platform would be prohibited from allowing users under the age of 16 to access the platform. The bill also requires platforms to establish and maintain reasonable procedures to protect personal data collected from users. The bill provides for a private right of action as well as state and FTC enforcement.

S. 1291 – The Protecting Kids on Social Media Act is similar to its counterpart in the House, with slightly less tenacity. It similarly aims to protect children from social media’s harms. Under the bill, platforms must verify their users’ ages, must not allow a user to use the service unless their age has been verified, and must limit access to the platform for children under 12. The bill also prohibits the retention and use of information collected during the age verification process. Platforms must take reasonable steps to require affirmative consent from the parent or guardian of a minor who is at least 13 years old for the creation of a minor’s account, and must reasonably allow the parent to later revoke that consent. The bill also prohibits the use of data collected from minors for algorithmic recommendations. The bill would require the Department of Commerce to establish a voluntary program for secure digital age verification for social media platforms. Enforcement would be through the FTC or state action.

S 1409– The Kids Online Safety Act, proposed by Senator Blumenthal of Connecticut, also aims to protect minors from online harms. This bill, as does the Online Safety Bill, establishes fiduciary duties for social media platforms regarding children using their sites. The bill requires that platforms act in the best interest of minors using their services, including mitigating harms that may arise from use, sweeping in online bullying and sexual exploitation. Social media sites would be required to establish and provide access to safeguards such as settings that restrict access to minor’s personal data and granting parents the tools to supervise and monitor minor’s use of the platforms. Critically, the bill establishes a duty for social media platforms to create and maintain research portals for non-commercial purposes to study the effect that corporations like the platforms have on society.

Overall, these bills indicate Congress’s creative thinking and commitment to broad privacy protection for users from social media harms. I believe the establishment of a separate governing body, other than the FTC, which lacks the powers needed to compel compliance, is a necessary step. Recourse for violations on par with the EU’s new regulatory scheme, mainly fines in the billions, could help.

Many of the bills, for myriad aims, establish new fiduciary duties for the platforms in preventing unauthorized use and harms to children. There is real promise in this scheme: establishing duties of loyalty, diligence, and care for one party has a sound basis in many areas of law and would be more easily understood in implementation.

The notion that platforms would need to be vigilant in knowing their content, studying its effects, and reporting those effects may do the most to create a stable future for social media.

The legal responsibility for platforms to police and enforce their policies and terms and conditions is another opportunity to further incentivize platforms. The FTC currently investigates policies that are misleading or unfair, sweeping in the social media sites, but there could be an opportunity to make the platforms legally responsible for enforcing their own policies, regarding age, against hate, and inappropriate content, for example.

What would you like to see considered in Privacy law innovation for social media regulation?

Social Media, Minors, and Algorithms, Oh My!

What is an algorithm and why does it matter?

Social media algorithms are intricately designed data organization systems aimed at maximizing user engagement by sorting and delivering content tailored to individual preferences. At their core, social media algorithms collect and use extensive user data, employing machine learning techniques to better understand and predict user behavior. They track and analyze hundreds of thousands of data points, including past interactions, likes, shares, content preferences, time spent viewing content, and social connections, to curate a personalized feed for each user. The fundamental objective is to capture and maintain the user’s attention, keeping them on the site longer so the platform can serve more advertisements and drive more profit.
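To make the mechanics concrete, here is a minimal, purely illustrative sketch of engagement-based feed ranking. The data fields, weights, and function names are hypothetical simplifications rather than any platform’s actual system; real feeds rely on large machine-learned models over far more signals, but the basic idea of scoring and sorting content by predicted engagement is the same.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str
    age_hours: float              # time since the post was created
    predicted_watch_time: float   # model's estimate of seconds this user will spend

def engagement_score(post: Post, topic_affinity: dict[str, float]) -> float:
    """Toy score: weight predicted watch time by the user's affinity for the
    post's topic, and decay older posts so the feed stays fresh."""
    affinity = topic_affinity.get(post.topic, 0.1)
    recency = 1.0 / (1.0 + post.age_hours / 24.0)
    return post.predicted_watch_time * affinity * recency

def rank_feed(posts: list[Post], topic_affinity: dict[str, float]) -> list[Post]:
    """Order posts so the items most likely to hold this user's attention come first."""
    return sorted(posts, key=lambda p: engagement_score(p, topic_affinity), reverse=True)

# A user with a strong affinity for fitness content sees fitness posts ranked first.
posts = [
    Post("a", "news", age_hours=2.0, predicted_watch_time=20.0),
    Post("b", "fitness", age_hours=10.0, predicted_watch_time=35.0),
]
print([p.post_id for p in rank_feed(posts, {"fitness": 0.9, "news": 0.3})])  # ['b', 'a']
```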

Addiction comes in many forms

One key element contributing to the addictiveness of social media is the concept of variable rewards. Algorithms strategically present a mix of content, varying in type and engagement level, to keep users interested in their feed. This unpredictability taps into the psychological principle of operant conditioning, where intermittent reinforcement, such as receiving likes, comments, or discovering new content, reinforces habitual platform use. Every time a user sees an entertaining post or receives a positive notification, the brain releases dopamine, the main chemical associated with addiction and addictive behaviors. The constant stream of notifications and updates, fueled by algorithmic insights and carefully tailored content suggestions, creates a sense of anticipation for the next dopamine fix, encouraging users to frequently refresh and scan their feeds for the next ‘reward’ on their timeline. The algorithmic, numbers-driven emphasis on engagement metrics, such as the number of likes, comments, and shares on a post, further intensifies the competitive and social nature of these platforms, promoting frequent use.

Algorithms know you too well

Furthermore, algorithms continuously adapt to user behavior through real-time machine learning. As users engage with content, algorithms analyze and refine their predictions, ensuring that the content remains compelling and relevant to the user over time. This iterative feedback loop deepens the platform’s understanding of individual users, creating a specially curated and highly addictive feed the user can always turn to for a boost of dopamine. This heightened social aspect, coupled with the algorithms’ ability to surface content that resonates deeply with the user, strengthens the emotional connection users feel to the platform and their specific feed, which keeps them coming back time after time. Whether it is seeing a new, dopamine-producing post or posting a status that racks up likes and shares, every time a user opens a social media app or website it can serve seemingly endless new content, further reinforcing regular, and often unhealthy, use.
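The feedback loop itself can be sketched just as simply. In this hypothetical continuation of the earlier example, each interaction nudges the user’s topic weights, so the next ranking pass leans further toward whatever the user just engaged with. Actual platforms use far more sophisticated online-learning models, but the loop structure is the same.

```python
def update_affinity(topic_affinity: dict[str, float],
                    topic: str,
                    engaged: bool,
                    learning_rate: float = 0.1) -> None:
    """Nudge the stored affinity toward 1.0 when the user engages with a topic
    and toward 0.0 when they scroll past (a simple exponential moving average)."""
    current = topic_affinity.get(topic, 0.5)
    target = 1.0 if engaged else 0.0
    topic_affinity[topic] = current + learning_rate * (target - current)

# Every like or long view strengthens the weight; the next feed request
# re-ranks content with the updated weights, closing the loop.
affinity = {"fitness": 0.5}
update_affinity(affinity, "fitness", engaged=True)
print(round(affinity["fitness"], 3))  # 0.55
```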

A fine line to tread

As explained above, social media algorithms are key to user engagement. They provide a seemingly endless stream of personalized content and hold users’ undivided attention through their ability to learn each user’s content preferences. This pervasive influence extends to children, who are increasingly immersed in digital environments from an early age. Social media algorithms can offer constructive experiences for children by promoting educational content discovery, creativity, and social connectivity that would otherwise be hard to come by. Some platforms, like YouTube Kids, leverage algorithms to recommend age-appropriate content tailored to a child’s developmental stage. This personalized curation of interest-based content can enhance learning outcomes and produce a beneficial online experience for children. However, even when the content itself is age-appropriate and harmless, it can still cause problems related to content addiction.

‘Protected Development’

Children are generally known to be naïve and impressionable, meaning full access to the internet can be harmful to their development, as they may take anything they see at face value. The American Psychological Association has said that “[d]uring adolescent development, brain regions associated with the desire for attention, feedback, and reinforcement from peers become more sensitive. Meanwhile, the brain regions involved in self-control have not fully matured.” Social media algorithms play a pivotal role in shaping the content children encounter by prioritizing engagement metrics such as likes, comments, and shares. In doing so, social media sites create an almost gamified experience that encourages frequent and prolonged use among children. Children also have a tendency to fixate intensely on certain activities, interests, or characters during their early development, further increasing the chances of becoming addicted to their feed.

Additionally, the addictive nature of social media algorithms poses significant risks to children’s physical and mental well-being. The constant stream of personalized content, notifications, and variable rewards can contribute to excessive screen time, impacting sleep patterns and physical health. Likewise, the competitive nature of engagement metrics may result in a sense of inadequacy or social pressure among young users, leading to issues such as cyberbullying, depression, low self-esteem, and anxiety.

Stop Addictive Feeds Exploitation (SAFE) for Kids

The New York legislature has recognized the anemic state of internet protections for children and the rising mental health issues relating to social media use among youth, and it has announced its intention to pass laws to better protect kids online. The Stop Addictive Feeds Exploitation (SAFE) for Kids Act is aimed explicitly at social media companies and their feed-bolstering algorithms. The SAFE for Kids Act is intended to “protect the mental health of children from addictive feeds used by social media platforms, and from disrupted sleep due to night-time use of social media.”

Section 1501 of the Act would essentially prohibit operators of social media sites from providing addictive, algorithm-based feeds to minors without first obtaining parental permission. Instead, the default feed would be a chronologically sorted timeline, the format popular in the infancy of social media sites. Section 1502 of the Act would require social media platforms to obtain parental consent before sending notifications between the hours of 12:00 AM and 6:00 AM, and it creates an avenue for opting out of access to the platform during those same hours. The Act would also provide for a limit on the overall number of hours a minor can spend on a social media platform. Additionally, the Act would authorize the Office of the Attorney General to bring a legal action to enjoin violations or to seek damages or civil penalties of up to $5,000 per violation, and it would allow any parent or guardian of a covered minor to sue for damages of up to $5,000 per user per incident, or actual damages, whichever is greater.
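For illustration only, the Act’s default-feed and quiet-hours requirements might translate into logic along these lines. The function names and consent flags here are hypothetical stand-ins, not language from the bill, and this sketch omits the age-verification and parental-consent machinery a real implementation would need.

```python
from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class TimelinePost:
    post_id: str
    created_at: datetime

def minor_default_feed(posts: list[TimelinePost]) -> list[TimelinePost]:
    """Chronological, newest-first feed with no engagement-based personalization,
    the default a minor would see absent verified parental consent."""
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

def may_notify_minor(now: datetime, parental_consent_overnight: bool) -> bool:
    """Suppress notifications to minors between 12:00 AM and 6:00 AM
    unless a parent or guardian has consented."""
    overnight = time(0, 0) <= now.time() < time(6, 0)
    return parental_consent_overnight or not overnight

# Example: at 1:30 AM without parental consent, the notification is held.
print(may_notify_minor(datetime(2024, 1, 1, 1, 30), parental_consent_overnight=False))  # False
```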

A sign of the times

In its justification section, the Act accurately captures the public’s growing concerns, detailing many of the above-referenced problems with social media algorithms and the State’s role in curtailing the well-known negative effects they can have on a protected class. The New York legislature has identified the problems that social media addiction can present and has taken necessary steps in an attempt to curtail them.

Social media algorithms will always play an integral role in shaping user experiences. However, their addictive nature should rightfully subject them to scrutiny, especially as to their effects on children. While social media algorithms offer personalized content and can produce constructive experiences, their addictive nature poses significant risks, prompting legislative responses like the Stop Addictive Feeds Exploitation (SAFE) for Kids Act. Considering the profound impact of these algorithms on young users’ physical and mental well-being, a critical question arises: how can we effectively balance the benefits of algorithm-driven engagement with the importance of protecting children from potential harm in the ever-evolving digital landscape? The SAFE for Kids Act is a step in the right direction, inspiring critical reflection on the broader responsibility of parents and regulatory bodies to cultivate a digital environment that nurtures healthy online experiences for the next generation.

 

From Hashtags to Hazards: Dangerous Diets and Digital Doses

Dieting, weight loss, and the drive to be skinny have been prevalent in society since as early as the 19th century. People will find and try almost anything, healthy or not, to lose weight fast: diet pills, eating plans, radiofrequency lasering, you name it. Many go to great lengths to lose weight the wrong way, without exercising, eating right, or getting enough sleep. The emergence of social media has only compounded these issues, creating pathways to social comparison, thin/fit ideal internalization, and self-objectification.

Type 2 diabetes is often associated with obesity and occurs when the body does not produce enough insulin, or does not respond to insulin, and therefore cannot regulate blood sugar properly. The disease is usually diagnosed in people ages 45-64 who are physically inactive and not leading a healthy lifestyle. In the early 2000s, pharmaceutical companies were looking for an easy solution to lower blood sugar and manage this disease. Enter: Ozempic.

Drugmaker Novo Nordisk introduced Ozempic in 2017, when the Food and Drug Administration authorized its use for adults with type 2 diabetes. It started as a relatively mundane drug with a straightforward goal: to help individuals manage their blood sugar levels and lead healthier lives. The weekly injection was designed to stimulate insulin production and suppress glucagon release, ultimately raising the hormone signals that tell the brain the stomach is full. It also increases the time it takes for ingested food to leave the body, slowing digestion. Originally, Ozempic was marketed only to adults with type 2 diabetes, to be used alongside diet and exercise as a healthy way to lower blood sugar.

Turning an Unintended Outcome into a Marketing Advantage

Soon after Ozempic hit the market, surveys and studies showed that those who used the drug also lost weight; people who took it lost an average of 14.9% of their body weight within six months of use. Ordinarily, unintended weight loss would have been listed as a side effect of the medication. Instead, with weight loss framed as an additional benefit, ads for Ozempic touted it alongside the diabetes indication. Marketers knew their audience, and this new campaign attracted a large group of people who wanted to lose weight. They tapped into this market to increase sales and revenue for the drug, which continues to be very successful.

In recent years, the pharmaceutical industry has witnessed a dramatic shift in how drugs are marketed, perceived, and consumed, largely due to the power of social media platforms and their influence on users. The allure of social media’s vast audience, the power of user-generated content, and the platforms’ complex algorithms turned Ozempic into a trending topic. In the last year, social media made it widely known that the drug could double as a potential weight-loss solution. The drug went viral as hashtags and posts framed Ozempic as a cheat code for losing weight, and losing it fast, with no diet or exercise needed. Individuals, not just those diagnosed with diabetes, were captivated by this prospect and sought out Ozempic.

The new social media sensation garnered attention on platforms like TikTok, Instagram, and YouTube, with users, influencers, and celebrities sharing their experiences, before-and-after photos, and purported success stories. The influx of advertisements and user mentions increased the drug’s sales by 111% over the prior year. Elon Musk credited fasting, avoiding tasty food, and Ozempic/Wegovy (a drug very similar to Ozempic) for shedding almost 30 pounds. Other celebrities who have taken the drug, and have been vocal about it, include Amy Schumer, Chelsea Handler, Charles Barkley, Sharon Osbourne, Tracy Morgan, and many more who are known not to have type 2 diabetes.

Rewards Turn to Consequences

With different vendors now marketing it almost strictly as a weight loss drug, the viral run on Ozempic has led to worldwide shortages, over-prescription by doctors, and a range of legal issues. The online blowup of Ozempic was at least in part fueled by people who wanted to lose weight but had no medical reason to take it. The scarcity of Ozempic, coupled with the high demand, poses a threat to the health of individuals with type 2 diabetes who depend on the medication. In response, Novo Nordisk paused advertisements for Ozempic in May of 2023. However, most of the ads on social media were not coming from the drugmaker; they were coming from online pharmacies and smaller marketers, who target vulnerable users seeking a quick fix for weight loss. While pharmaceutical companies can be held liable if their advertisements are proven to be false and/or misleading, the social media platforms are not liable under Section 230.

Users were not walking to doctors begging for Ozempic; they were running, even users who were not overweight, let alone diabetic. It is very easy to get a prescription for Ozempic since only an online telehealth appointment is needed. Medicines and drugs that are approved for specific uses in the United States can be prescribed off-label for any use; off-label use is when doctors prescribe medications for purposes not approved by the Food and Drug Administration. Doctors were prescribing Ozempic for patients who did not have type 2 diabetes and did not need it. The FDA has not (yet) approved Ozempic for the sole purpose of weight loss, and doctors have gotten around this by prescribing other weight loss drugs such as Wegovy. Even though off-label use is not illegal, it still raises a slew of legal issues.

Off-Label Dangers and Legal Showdowns

To this day, there have not been adequate studies of how Ozempic works for people without diabetes, and there may not be enough evidence to support using the drug in non-diabetic patients. Off-label use of Ozempic can lead to serious side effects. In August of 2023, after being prescribed Ozempic for weight management, a Louisiana resident claimed to have developed gastroparesis, a condition that impairs the normal movement of the muscles in the stomach, and argued that Novo Nordisk failed in its duty to adequately warn about the drug’s potential adverse side effects. Less than a month after the suit was filed, the FDA and Novo Nordisk added a warning that Ozempic could cause intestinal blockage. The case is still in its early stages, but more and more people are coming forward and hiring attorneys over this condition in connection with taking Ozempic, and a class action or multi-district litigation is predicted.

Another potential legal implication of the off-label use of Ozempic going viral is medical malpractice, with the potential for mass claims against doctors and manufacturers for prescribing the drug for weight loss without proper medical justification. Social media users who see advertisements on these platforms and want to lose weight are not asking doctors to prescribe Ozempic; they are begging. Meanwhile, the drug manufacturers are not providing comprehensive information to patients about potential adverse reactions and are actively promoting the use of these drugs among individuals who may receive only minimal or no long-term benefit from them.

Predicting the Future of Ozempic

To better understand the Ozempic situation, it is valuable to draw parallels with the OxyContin opioid epidemic. OxyContin, first introduced in 1996, is a powerful narcotic designed for the management of severe pain. However, as a result of over-promotion and improper sales tactics, it was overprescribed and led to widespread abuse, addiction, overdose, and death. The similarities between the issues surrounding the two drugs include:

  • Over-prescription– In both cases, doctors and manufacturers have played a pivotal role in the over-prescription of the medications. OxyContin was prescribed for chronic pain, a use that went beyond its intended purpose, while Ozempic was prescribed off-label for weight loss.
  • Patient demand– In both cases, patient demand and pressure have played a significant role in prescription practices. Patients seeking quick and easy solutions are more likely to want, and to receive, medications that may not be appropriate for their condition and health.
  • Pharmaceutical company responsibility– Purdue Pharma, the maker of OxyContin, faced, and continues to face, lawsuits for aggressively marketing the drug. Although no marketing lawsuits have yet been filed over Ozempic, the responsibility of pharmaceutical companies in promoting medications beyond their FDA-approved uses is a common thread between the two drugs.

The one key difference between the OxyContin epidemic and the issues with Ozempic today is that in the early 2000s, social media sites were not nearly as prevalent. The advent of social media amplifies the speed and scale at which information, whether accurate or not, spreads. The contagious nature of user-generated content, testimonials, and before-and-after narratives on these platforms has the potential to magnify the off-label promotion of, and demand for, Ozempic as a weight loss solution. This can fuel an unwarranted surge in prescriptions without proper medical assessment, potentially leading to increased risks, adverse effects, and challenges in regulating the medication’s use. The ease with which information circulates on social media may intensify the scope and speed of the ‘Ozempic epidemic,’ raising concerns about patient safety and regulatory control.

Where Does the Liability Land?

The story of Ozempic’s transformation from a diabetes medication to a weight loss sensation driven by social media is a compelling example of how the digital age can shape public perception and generate a host of legal issues. If Section 230 is amended to set forth parameters under which social media sites can be held liable, could platforms be held accountable for the drug shortage, given social media’s contribution to Ozempic’s popularity? Could the platforms be responsible for the possible increase in body image issues and eating disorders associated with the pressure to be skinny?

Sharing is NOT Always Caring

Where There’s Good, There’s Bad

Social media’s vast growth over the past several years has attracted millions of users who use these platforms to share content, connect with others, conduct business, and spread news and information. However, social media is a double-edged sword: while it creates communities and bands people together, it erodes privacy in the process. All of the convenient features of social media that we know and love lead to significant exposure of personal information and related privacy risks. Social media companies retain massive amounts of sensitive information about users’ online behavior, including their interests, daily activities, and political views. Algorithms are embedded within these features to promote the companies’ goals, such as user engagement and targeted advertising, and the means of achieving those goals often conflict with consumers’ privacy interests.

Common Issues

In 2022, several U.S. state and federal agencies banned their employees from using TikTok on government-issued devices, fearing that foreign governments could acquire confidential information. While a lot of the information collected through these platforms is voluntarily shared by users, much of it is also tracked using “cookies,” and you can’t have these with a glass of milk! Tracking cookies allow information about users’ online browsing activity to be stored and used to target specific interests and personalize content tailored to those likings. Signing up for a social media account and agreeing to the platform’s terms permits companies to collect all of this data.
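As a rough illustration of the mechanics, assuming a generic web server rather than any particular platform, a tracking cookie works something like this: the server assigns the browser a persistent identifier, and every later request carries it back, letting activity be linked across visits and sites. The names below are invented for the example.

```python
import uuid
from typing import Optional

def set_tracking_cookie(response_headers: dict) -> str:
    """Assign the browser a long-lived identifier via the Set-Cookie header."""
    visitor_id = str(uuid.uuid4())
    response_headers["Set-Cookie"] = (
        f"visitor_id={visitor_id}; Max-Age=31536000; Path=/; SameSite=None; Secure"
    )
    return visitor_id

def read_tracking_cookie(request_headers: dict) -> Optional[str]:
    """On later requests, read the identifier back out of the Cookie header."""
    cookie_header = request_headers.get("Cookie", "")
    for part in cookie_header.split(";"):
        name, _, value = part.strip().partition("=")
        if name == "visitor_id":
            return value  # the same ID across visits lets activity be linked over time
    return None

# Example round trip: the ID set on the first response comes back on the next request.
headers_out: dict = {}
visitor = set_tracking_cookie(headers_out)
headers_in = {"Cookie": f"visitor_id={visitor}"}
print(read_tracking_cookie(headers_in) == visitor)  # True
```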

Social media users leave a “digital footprint” on the internet when they create and use their accounts. Unfortunately, switching to a “private” account does not solve the problem, because data is still collected in other ways. For example, likes, shares, comments, purchase history, and status updates all increase the likelihood that a user’s privacy will be intruded upon.

Two of the most notorious privacy issues on social media are data breaches and data mining. Data breaches occur when individuals with unauthorized access steal private or confidential information from a network or computer system. Data mining on social media is the process by which user information is analyzed to identify specific tendencies, which are then used to inform research and advertising.

Other privacy problems arise from loopholes around the preventive measures already in place. For example, if an individual maintains a private account but shares something with a friend, others connected to that friend can view the post. Moreover, a person’s location can still be determined even when location settings are turned off, because public Wi-Fi networks and websites can track users’ locations through other means.

Despite all of these prevailing issues, only a small amount of information is actually protected under federal law. Financial and healthcare transactions, as well as details regarding children, are among the classes of information that receive heightened protection. Most other data gathered through social media can be collected, stored, and used. Social media platforms remain largely unregulated with respect to data privacy and consumer data protection. The United States does have a few laws in place to safeguard privacy on social media, but more stringent ones exist abroad.

Social media platforms are required to implement certain procedures to comply with privacy laws, including obtaining user consent, protecting and securing data, honoring user rights and transparency obligations, and providing data breach notifications. Platforms typically ask users to agree to their Terms and Conditions to obtain consent and authorization for processing personal data. However, most users accept without actually reading these terms so that they can quickly get to using the app.

Share & Beware: The Law

Privacy laws are put in place to regulate how social media companies can act on all of the information users share, or don’t share. These laws aim to ensure that users’ privacy rights are protected.

There are two prominent social media laws in the United States. The first is the Communications Decency Act (CDA), which regulates indecency that occurs over computer networks. Nevertheless, Section 230 of the CDA provides broad immunity from any cause of action that would make internet providers, including social media platforms, legally liable for information posted by other users. Accountability for common issues on social media, like data breaches and data misuse, is therefore limited under the CDA. The second is the Children’s Online Privacy Protection Act (COPPA). COPPA protects privacy on websites and other online services for children under the age of thirteen. The law prevents social media sites from gathering personal information without first providing written notice of disclosure practices and obtaining parental consent. The challenge remains in actually knowing whether a user is underage, because it is so easy to misrepresent oneself when signing up for an account.

By contrast, the European Union’s General Data Protection Regulation (GDPR) grants users certain control over when and how their data is processed. The GDPR contains a set of guidelines that restrict personal data from being disseminated on social media platforms, and it gives internet users a long list of rights where their data is shared and processed. Some of these rights include the ability to withdraw previously given consent, to access the information collected about them, and to delete or restrict personal data in certain situations. The closest domestic analogue to the GDPR is the California Consumer Privacy Act (CCPA), enacted in 2020. The CCPA regulates what kinds of information social media companies can collect, giving platforms like Google and Facebook much less freedom in harvesting user data. The goal of the CCPA is to make data collection transparent and understandable to users.

Laws at the state level are lacking, and many lawsuits have resulted from this deficiency. A class action was brought in response to the collection of users’ information by Nick.com. The plaintiffs, all children under the age of thirteen, sued Viacom and Google for violating privacy laws, arguing that the data collected by the website, together with Google’s stored data about its users, was personally identifiable information.

A separate lawsuit was brought against Facebook for tracking users when they visited third-party websites. The individuals who brought suit claimed that Facebook was able to personally identify and track them through shares and likes when they visited certain healthcare websites, collecting sensitive healthcare information as they browsed, without their consent. However, the court held that users did in fact consent to these actions when they agreed to Facebook’s data tracking and data collection policies. The court also held that the data was not subject to the stricter requirements the plaintiffs claimed applied, because it was all available on publicly accessible websites. In other words, when it comes to third-party sites, public information is fair game for Facebook and many other social media platforms.

In contrast to these two failed lawsuits, earlier this year TikTok agreed to pay a $92 million settlement to resolve twenty-one consolidated lawsuits over privacy violations. The claims were substantial, including allegations that the app analyzed users’ faces and collected private data from users’ devices without obtaining their permission.

We are living in a new social media era, one so advanced that it is difficult to fully comprehend. With that said, data privacy is a major concern for users who spend large amounts of time sharing personal information, whether they realize it or not. Laws are put in place to regulate content and protect users; however, keeping up with the growing presence of social media is not an easy task. Sharing is inevitable, and so are privacy risks.

To share or not to share? That is the question. Will you think twice before using social media?
