Fame, Free Speech, and Fantasy: Why It’s Time for Federal Action

The rise of fantasy sports has transformed how fans engage with professional athletics, blurring the boundaries between data, identity, and commerce. But while online fantasy sports platforms have evolved into a mature, multi-billion-dollar industry, the balance between the publicity rights of the professional athletes featured on those platforms and the First Amendment rights of online intermediaries remains on delicate footing.

As these platforms continue to advance, they expose a fundamental tension in our legal system: the fragmented and inconsistent nature of publicity rights across states. Professional athletes argue that the unlicensed or unapproved use of their names and statistics constitutes a commercial exploitation of their identities, while fantasy sports enterprises contend that such data are publicly available facts protected by the First Amendment. This unresolved conflict underscores the urgent need for a federal right of publicity statute: one that harmonizes legal standards, reduces costly litigation, and provides a coherent framework for balancing economic innovation with individual rights in the digital era.

What is the Right of Publicity?

The right of publicity protects against the unauthorized commercial use of a person's name, image, or likeness. Most states have a publicity rights statute; however, a statute is not a prerequisite to enforcing one's right of publicity, and many courts arrive at the same outcome under state common law.

In New York, the state's publicity rights statute was interpreted by the Court of Appeals in Stephano v. News Group Publications, Inc. The plaintiff, a fashion model, sued the defendant publisher for using his photograph for commercial purposes without his consent, alleging a violation of his statutory right of publicity. Although the court ultimately found that the particular use was newsworthy and ruled against him, its reading of the statute's scope remains instructive.

The court reasoned that the statute is not limited to situations where the defendant's conduct has caused distress or harm to a person who wishes to lead a "private life free of all commercial publicity." Rather, by its plain language, the statute applies to any use of a person's image for commercial purposes whenever the defendant has not obtained the person's written consent. It follows that, regardless of a person's celebrity status (a professional athlete or your average Joe), he or she is covered by the statute.

What’s in a Name?  

So what does the court's decision in Stephano have to do with the world of online fantasy sports and professional athletes? Athletes argue that the use of their names, images, and statistics constitutes a commercial appropriation of their identities, allowing private companies to profit from those identities without consent or compensation. Fantasy sports platforms, on the other hand, maintain that players' statistical data are publicly available facts, not proprietary information, and thus their use is protected under the First Amendment. This disagreement has placed courts in a difficult position, as fantasy sports platforms do not fit neatly into either category of commercial exploitation or pure free speech.

Conflict in the Courts

The unpredictable nature of the right of publicity is best illustrated through the inconsistent outcomes in key cases involving fantasy sports platforms. In C.B.C. Distribution & Marketing, Inc. v. Major League Baseball Advanced Media, the Eighth Circuit confronted whether the use of player names and statistics in fantasy baseball products violated players’ rights of publicity. While acknowledging that a violation technically existed, the court held that the First Amendment interests in disseminating factual data outweighed those rights.

The District of Minnesota reached a similar conclusion in CBS Interactive v. NFL Players' Association, extending the Eighth Circuit's reasoning to fantasy football and reaffirming that the publication of player statistics constitutes constitutionally protected expression. Conversely, in Gridiron.com, Inc. v. NFL, the court took the opposite approach, rejecting the First Amendment defense and finding that the online platform's use of player images and information constituted commercial exploitation in violation of the NFL Players Association's exclusive licensing rights.

High Stakes Moving Forward

The latest test of this legal imbalance is now before the U.S. District Court for the Eastern District of Pennsylvania in MLB Players Inc. v. DraftKings & Bet365. In its complaint, MLB Players, Inc. (the MLB Players Association's group licensing subsidiary) accuses the online fantasy and gambling platforms of misappropriating the images and likenesses of numerous MLB players on their online and mobile platforms. Plaintiff emphasizes it is not suing "to protect MLB players' personal privacy interest, but rather the commercial value of their NIL rights." The case remains pending in federal court, and the ruling could set a new precedent after 71 years of sports-related litigation over professional athletes' publicity rights.

Without a federal publicity rights statute and with no uniformity across jurisdictions, the ultimate burden falls on the litigants. The troubling fact is that identical conduct may be lawful in one jurisdiction and unlawful in another. The end result? A legal patchwork that breeds uncertainty, invites forum shopping, and imposes significant litigation costs on all parties involved.

Regulating the Scroll: How Lawmakers Are Redefining Social Media for Minors

In today’s digital world, the question is no longer if minors use social media but how they use it. 

Social media platforms don't just host young users; they shape their experiences through algorithmic feeds and "addictive" design features that keep kids scrolling long after bedtime. As the mental health toll becomes increasingly clear, lawmakers are stepping in to limit how much control these platforms have over young minds.

What is an “addictive” feed and why target it? 

Algorithms don't just show content; they promote it. By tracking what users click, watch, or like, these feeds are designed to keep attention flowing. For minors, that means endless scrolling and constant engagement, typically at the expense of sleep, focus, and self-esteem.

Under New York’s Stop Addictive Feeds Exploitation (SAFE) for Kids Act, lawmakers found that:

 “social media companies have created feeds designed to keep minors scrolling for dangerously long periods of time.”

The Act defines an “addictive feed” as one that recommends or prioritizes content based on data linked to the user or their device.

The harms aren't hypothetical. Studies link heavy social media use among teens with higher rates of depression, anxiety, and sleep disruption. Platforms often push notifications late at night or during school hours, times when young users are most vulnerable.

Features like autoplay, "For You" pages, endless "you may also like" suggestions, and quick likes or comments can trap kids in an endless scroll. What begins as fun, harmless entertainment soon becomes a routine they struggle to escape.

 

Key Developments in Legislation 

It's no surprise that minors' exposure to social media algorithms sits at the center of today's policy debates. Over the past two years, state and federal lawmakers have introduced laws seeking to rein in the "addictive" design features of online platforms. While many of these measures face ongoing rulemaking or constitutional challenges, together they signal a national shift toward stronger regulation of social media's impact on youth.

Let’s take a closer look at some of the major legal developments shaping this issue.

New York’s SAFE for Kids Act

New York's Stop Addictive Feeds Exploitation (SAFE) for Kids Act represents one of the nation's most ambitious efforts to regulate algorithmic feeds. The law prohibits platforms from providing "addictive feeds" to users under 18 unless the platform obtains verifiable parental consent or reasonably determines that the user is not a minor. It also bans push notifications and advertisements tied to those feeds between 12 a.m. and 6 a.m. unless parents explicitly consent. The rulemaking process remains ongoing, and enforcement will likely begin once these standards are finalized.

The Kids Off Social Media Act (KOSMA)

At the federal level, the Kids Off Social Media Act (KOSMA) seeks to create national baselines for youth protections online. Reintroduced to Congress, the bill would:

  • Ban social media accounts for children under 13.
  • Prohibit algorithmic recommendation systems for users under 17.
  • Restrict social media access in schools during instructional hours.

Supporters argue the bill is necessary to counteract the addictive nature of social media design. Critics, including digital rights advocates, question whether such sweeping restrictions could survive First Amendment scrutiny or prove enforceable at scale. 

KOSMA remains pending in Congress but continues to shape the national conversation about youth and online safety.

California’s SB 976 

California’s Protecting Our Kids from Social Media Addiction Act (SB 976) reflects a growing trend of regulating design features rather than content. The law requires platforms to:

  • Obtain parental consent before delivering addictive feeds to minors.
  • Mute notifications for minors between midnight and 6 a.m. and during school hours unless parents opt in.

The statute is currently under legal challenge for potential First Amendment violations; however, the Ninth Circuit allowed enforcement of key provisions to proceed, suggesting that narrowly tailored design regulations aimed at protecting minors may survive early constitutional scrutiny.

Other State Efforts

Other states are following suit. According to the National Conference of State Legislatures (NCSL), at least 13 states have passed or proposed laws requiring age verification, parental consent, or restrictions on algorithmic recommendations for minors. Mississippi’s HB 1126, for example, requires both age verification and parental consent, and the U.S. Supreme Court allowed the law to remain in effect while litigation continues. 

Final Thoughts

We are at a pivotal moment. The era when children's digital consumption went largely unregulated is coming to an end. The question now isn't if regulation is on the horizon; it's how it will take shape, and whether it can strike the right balance between safety, free expression, and innovation.

As lawmakers, parents, and platforms navigate this evolving landscape, one challenge remains constant: ensuring that efforts to protect minors from harmful algorithmic design do not come at the expense of their ability to connect, learn, and express themselves online.

What do you think is the right balance between protecting minors from harmful algorithmic exposure and preserving their access to social media as a space for connection and expression?

From Cute to Concerning: The Legal and Emotional Costs of Sharenting

After a long day at work, most people now sit down for a nice relaxing…scroll. That's right, most people have social media and enjoy going through the latest posts to wind down or pass the time. Whether it's on Instagram, Facebook, or TikTok, someone is looking at a post made by a parent displaying their child doing something adorable or funny, documenting a family trip, or marking a milestone like the first day of school. What seems like an innocent post can be something much darker.

What is Sharenting?

As social media gained traction in recent years, so did sharenting. Sharenting is when a parent overshares or excessively posts information, pictures, stories, or updates about their child's life.

A proud parent could post the smiling face of their child at a sporting event on their private account, thinking only family and friends will see it. Some parents even post daily vlogs involving their children, making money by sharing their day-to-day lives with strangers. Most parents engage in sharenting because they are proud of their child. Some want to build a digital archive or connect with loved ones. Others are trying to build camaraderie with other parents, or even to help others. Most parents do this with the purest motives in mind; however, their content is not always received as it is intended.

The Risks of Sharenting

Legal Risks

As established in Troxel v. Granville, parents have a fundamental right to raise their children as they see fit. This includes education, religion, and even social media. Parents have a First Amendment right to speech just as much as a child does when it comes to posting online. Parents are protected under the First Amendment when posting videos and pictures of their children; however, this right is not unlimited. Restrictions apply in certain circumstances, such as under child exploitation laws or where other compelling state interests exist.

Children also have a right to privacy that conflicts with their parents' First Amendment right of speech and expression in the context of posting them online. The Children's Online Privacy Protection Act (COPPA) established significant protections for children's online privacy. COPPA imposes certain requirements on operators of websites or online services directed to children under 13 years of age, and on operators of other websites or online services that have actual knowledge that they are collecting personal information online from a child under 13 years of age.

COPPA, however, only protects children's data, not the children themselves from the risks of being online.

Psychological Risks

In addition to the legal risks of sharenting, there are also many psychological risks. What happens when a parent posts that one picture that comes back to haunt their child later on? These videos and images can be used by other students to bully the child down the road. Children can have a harder time developing their own image and identity when they are assigned an online persona by their parents through their posts.

Even with pure motives, a survey of parents discussed by Dr. Albers of the Cleveland Clinic found that:

74% of parents using social media knew another parent engaging in sharenting behavior.

56% said the parents shared embarrassing information about their kid.

51% said the parent provided details that revealed their child’s location.

27% said the parent circulated inappropriate photos.

The impact of these posts, which once made are out there forever, can be detrimental to a child's mental health. Social media, according to the Mayo Clinic, already amplifies adolescents' anxiety and depression. Parents can add to this by sharenting.

Other Risks

These seemingly innocent posts can often lead to greater risks for children than most parents realize. Beyond the negative psychological impacts, sharenting can endanger a child's physical safety. Sharenting is a window directly into a child's life, one which a predator can abuse. Images can be taken from parents' accounts and shared to sites for pedophiles.

These images can also enable identity theft, harassment, bullying, exploitation, and even violence.

Parents who have become famous from posting their kids, like the LaBrant family and the Fishers, have increased their children's risk of being subject to one of these crimes by constantly posting them online.

Sharenting can blur the line between a fun post and advertising your child to strangers, in extreme situations creating dangerous environments for internet-famous children.

Parents are also contributing to their child's digital identity, which could impact the child's future educational and employment prospects. It can also lead to embarrassment, since once the content is shared, the child cannot get rid of it.

How Can Parents Protect Their Kids?

As social media continues to grow and be a part of our daily lives, parents can take action to protect their children going forward. One way parents can do this is by blurring or covering their child’s face with an emoji. Parents can still have the excitement of posting their child’s achievements or milestones without exposing their identity to the internet.

Parents can think before they post.

If you’re trying to decide whether a post counts as sharenting, ask yourself these questions:

What’s the content?

Why am I posting it?

Who’s my intended audience? Have I set my permissions accordingly?

Is my child old enough to understand the concept of a digital footprint? If they are, did I ask their consent? If not, do I think they’d be happy to see this online when they’re older?

Sharenting is not going to stop, but it can evolve to be done in a way that protects a parent’s right to post and their child’s safety.

 

End The Loop

Have you ever found yourself stuck in an endless loop of viewing social media posts as time flies by? It's likely. On average, people spend about 2 hours and 24 minutes, or 144 minutes, on social media daily. It is time for users to take back control of their daily lives. But how? Well, Ethan Zuckerman is at the forefront of empowering users to control their social media algorithms.

 


Unfollow Everything 2.0

When a Facebook user's friend request to another person is accepted, they automatically "follow" that person. This means they will see all of that person's posts on their Home Page. Following every page, friend, or group you are involved with is what creates the infinite loop of posts users get sucked into. Right now, there is no extension or tool that gives users the ability to combat infinite scrolling on social media platforms.

Ethan Zuckerman is in the process of creating a browser extension that lets Facebook users unfollow all of their friends, groups, and pages with the click of a button. 

Here's how it works: When a user activates the browser extension, Unfollow Everything 2.0 causes the user's browser to retrieve their list of friends, groups, and pages from Facebook. The tool would then comb through the "followed" list, causing the browser to ask Facebook to unfollow each friend, group, or page on the user's list. The tool would allow the user to select friends, groups, and pages to refollow, or give them the option to keep their news feed blank and view only content that they seek out. It would also encrypt the user's "followed" list and save it locally on the user's device, which would allow the user to keep the list private while still being able to automatically reverse the unfollowing process. By unfollowing everything, users can eliminate their entire News Feed. This leaves them free to use Facebook without the feed, or to curate it more actively by refollowing only those friends and groups whose posts they really want to see.
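
To make the mechanics concrete, here is a minimal sketch of that loop written in TypeScript. It is an illustration only, not the extension's actual code: the data shape and the two helper functions are assumptions standing in for whatever requests the real tool would cause the browser to send to Facebook.

```typescript
// Hypothetical illustration only -- not Unfollow Everything 2.0's actual code.
// The two helpers below are placeholders for the requests the real tool
// would cause the user's browser to send to Facebook.

interface FollowedItem {
  id: string;
  kind: "friend" | "group" | "page";
  name: string;
}

// Placeholder: in the real tool, this would read the followed list from Facebook.
async function fetchFollowedList(): Promise<FollowedItem[]> {
  return [
    { id: "1", kind: "friend", name: "Example Friend" },
    { id: "2", kind: "group", name: "Example Group" },
  ];
}

// Placeholder: in the real tool, this would ask Facebook to unfollow the item.
async function requestUnfollow(item: FollowedItem): Promise<void> {
  console.log(`unfollow requested for ${item.kind}: ${item.name}`);
}

// The core loop: retrieve the followed list, unfollow each entry, and return
// the list so it can be encrypted and stored locally, letting the user
// reverse the process (refollow) later.
async function unfollowEverything(): Promise<FollowedItem[]> {
  const followed = await fetchFollowedList();
  for (const item of followed) {
    await requestUnfollow(item);
  }
  return followed;
}

unfollowEverything().then((list) =>
  console.log(`${list.length} items unfollowed; list saved for later refollowing`)
);
```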

Note that this isn’t the same as unfriending. By unfollowing their friends, groups, and pages, users remain connected to them and can look up their profiles at their convenience.

Tools like Unfollow Everything 2.0 can help users have better and safer online experiences by allowing them to gain control of their feeds without the involvement of government regulation.

 

Photo Credits:

 

 

Unfollow Everything 1.0

The original version of the tool, Unfollow Everything 1.0, was created by British developer Louis Barclay in 2021. Barclay believed that unfollowing everything, while remaining friends with everyone on the app and staying in all the user-joined groups, forced users to use Facebook deliberately rather than as an endless time-suck. As he put it: "I still remember the feeling of unfollowing everything for the first time. It was near-miraculous. I had lost nothing, since I could still see my favorite friends and groups by going to them directly. But I had gained a staggering amount of control. I was no longer tempted to scroll down an infinite feed of content. The time I spent on Facebook decreased dramatically. Overnight, my Facebook addiction became manageable."

Barclay eventually received a cease and desist letter and was permanently banned from using the Facebook platform. Meta claims he violated their terms of service.

Meta’s Current Model

Currently, there is no way for users to avoid automatically following every friend, page, and group on Facebook that they have liked or befriended, which creates the endless feed of posts users see on their timelines.

Meta's steps to unfollow everything involve manually going through each friend, group, or business and clicking the unfollow button. This task can take hours, as users tend to have hundreds of connections, which likely deters users from going through the extensive process of regaining control over their social media algorithm.

Meta's directions to unfollow someone's profile:

  • Go to that profile by typing their profile name into the search bar at the top of Facebook.
  • Click at the top of their profile.
  • Click Unfriend/Unfollow, then Confirm

 

 


Making a Change:

Zuckerman, on behalf of the Unfollow Everything 2.0 project, filed a preemptive lawsuit asking the court to determine whether Facebook users' news feeds contain objectionable material that users should be able to filter out in order to enjoy the platform. He argues that Unfollow Everything 2.0 is the type of tool Section 230(c)(2) intended to encourage, giving users more control over their online experiences and an adequate ability to filter out content they do not want.

Zuckerman explains that users currently have little to no control over how they use social media networks: "We basically get whatever controls Facebook wants. And that's actually pretty different from how the internet has worked historically."

Meta, in its defense against Unfollow Everything 2.0 (Ethan Zuckerman), is pushing the court to rule that a platform such as Facebook can circumvent Section 230(c)(2) through its terms of service.

Section 230

Section 230 is known for providing immunity to online computer services for third-party content that users generate. Section 230(c)(1) provides: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." While Section 230(c)(1) has been a commonly litigated topic, Section 230(c)(2) has rarely been discussed in front of the courts.

So what is Section 230(c)(2)? Section 230(c)(2) was adopted to allow users to regulate their online experiences through technological means, including tools like Unfollow Everything 2.0. Force v. Facebook (2019) recognized that Section 230(c)(2)(B) provides immunity from claims based on actions that "enable or make available to . . . others the technical means to restrict access to" the same categories of "objectionable" material. Essentially, Section 230(c)(2)(B) empowers people to control their online experiences by providing immunity to the third-party developers of extensions and tools that users can use with social networking platforms such as Facebook.

 


Timeline of Litigation

May 1, 2024: Zuckerman filed a lawsuit asking the court to recognize that Section 230 protects the development of tools that empower social media users to control what they see online.

July 15, 2024: Meta filed a motion to dismiss, arguing that Zuckerman lacked standing at that time.

August 29, 2024: Zuckerman filed an opposition to Meta’s motion to dismiss.

November 7, 2024: The case was dismissed. However, because the tool was not complete at the time of the suit, Zuckerman could file again at a later date; once the tool is developed, it will likely test the law.

Why social media companies do not want this:

Companies like Meta want to prevent these third-party extensions as much as possible because it is in their best interest to keep users continuously engaged. Keeping users on the platform allows Meta to display more advertisements, and advertising is its primary source of revenue. Meta's massive user base gives advertisers an excellent opportunity to reach a broad audience. For example, in 2023, Meta generated $134 billion in revenue, 98% of which came from advertising. By making it difficult for users to control their feeds, Meta can make more money. If Unfollow Everything were released to the public, Meta would likely need to shift its prioritization model.

The potential future of Section 230:

What's next? Even if the court rules in favor of Zuckerman in a future case, giving users an expanded ability to control their social media, it likely isn't the end of the problem. Social media platforms have previously changed their algorithms to prevent third-party tools from being used on their platforms. For example, X (then Twitter) put an end to Block Party's user tool by changing its API (Application Programming Interface) pricing.

Lawmakers will need to step in to fortify users' control over their social media algorithms. It is unrealistic to expect the massive media conglomerates to willingly give up control in a way that would negatively affect their ability to generate revenue.

For now, if users wish to take the initiative and control their social media usage, Android and Apple allow their consumers to regulate specific app usage in their phone settings.

Due Process vs. Public Backlash: Is it Time to Cancel Cancel Culture?

Throughout history, people have often challenged and criticized each other's ideas and opinions. But with the rise of internet accessibility, especially social media, the way these interactions unfold has changed. Now, it's easy for anyone to call out someone else's behavior or words online, and the power of social media makes it simple to gather a large group of people to join in. What starts as a single person's post can quickly turn into a bigger movement, with others sharing the same views and adding their own criticism. This is cancel culture.

Cancel culture has become a highly relevant topic in today's digital world, especially because it often leads to serious public backlash and consequences for people or companies seen as saying or doing something offensive. The phrase "cancel culture" originated from the word cancel, meaning to cut ties with someone. In the abstract, this concept aims to demand accountability, but it also raises important legal questions. When does criticism go too far and become defamation? How does this "online backlash" affect a person's right to fair treatment? And what legal options are available for those who feel unfairly targeted by "cancel culture"?

 

What Is Cancel Culture?

Cancel culture is a collective online call-out and boycott of individuals, brands, or organizations accused of offensive behavior, often driven by social media. Critics argue that it can lead to mob justice, where people are judged and punished without proper due process. On the other hand, supporters believe it gives a voice to marginalized groups and holds powerful people accountable in ways that traditional systems often fail to. It’s a debate about how accountability should work in a digital age—whether it’s a tool for justice or a dangerous trend that threatens free speech and fairness.

The impact of cancel culture can be extensive, leading to reputational harm, financial losses, and social exclusion. When these outcomes affect a person’s livelihood or well-being, the legal implications become significant, because public accusations, whether true or false, can cause real damage.

In a Pew Research study from September 2020, 44% of Americans reported being familiar with the term “cancel culture,” with 22% saying they were very familiar. Familiarity with the term varies by age, with 64% of adults under 30 aware of it, compared to 46% of those ages 30-49 and only 34% of people 50 and older. Individuals with higher levels of education are also more likely to have heard of cancel culture. Political affiliation shows little difference in awareness, although more liberal Democrats and conservative Republicans tend to be more familiar with the term than their moderate counterparts.

 

Cancel Culture x Defamation Law

In a legal context, defamation law is essential in determining when online criticism crosses the line. Defamation generally involves a false statement presented as fact that causes reputational harm.

To succeed in a defamation lawsuit, plaintiffs must show:

  • a false statement purporting to be fact;
  • publication or communication of that statement to a third person;
  • fault amounting to at least negligence; and
  • damages, or some harm caused to the reputation of the person or entity who is the subject of the statement.

US Dominion, Inc. v. Fox News Network, Inc. is a defamation case highlighting how the media can impact reputations. Dominion sued Fox News for $1.6 billion, claiming the network falsely accused it of being involved in election fraud during the 2020 presidential election. Fox News defended itself by saying that it was simply reporting on claims made by others, even if those claims turned out to be false. The case was settled in April 2023 for $787.5 million, showing that media outlets can be held accountable when they spread information without regard for the truth. This is similar to how cancel culture works – individuals or companies can face backlash and reputational damage based on viral accusations that may not be fully verified. Ultimately, the case highlights how defamation law can provide legal recourse for those harmed by false public statements while emphasizing the balance between free speech and accountability in today's fast-paced digital environment.

 

Free Speech vs. Harm: The Tensions of Cancel Culture

Cancel culture brings to light the ongoing tension between free speech and reputational harm. On one hand, it provides a platform for people to criticize others and hold them accountable for their actions. However, the consequences of these public accusations can be severe, leading to job loss, emotional distress, and social isolation—sometimes even beyond what the law might consider fair.

While the First Amendment protects free speech, it doesn’t cover defamatory or harmful speech. This means people can face consequences for their words, especially when they cause harm. But in the realm of cancel culture, these consequences can sometimes feel disproportionate, where the public reaction can go beyond what might be considered reasonable or just. This raises concerns about fairness and justice – whether the punishment fits the crime, especially when the public can amplify the damage in ways that the legal system may not address.

In Cajune v. Indep. Sch. Dist. 194, the Eighth Circuit addressed a First Amendment issue regarding the display of "Black Lives Matter" (BLM) posters in classrooms. The case revolves around whether the school district's policies, which allow teachers to choose whether to display these posters, restrict or support free speech. The plaintiffs argue that this limitation on expression resembles the broader dynamics of cancel culture, where certain viewpoints can be suppressed or silenced. Much like cancel culture, where individuals or ideas are "canceled" for holding or expressing controversial views, this case touches on how institutions control public expression. If the district restricts messages like "All Lives Matter" or "Blue Lives Matter," it could be seen as institutional "canceling" of dissenting or unpopular opinions, illustrating how cancel culture can restrict diverse speech. This shows the clash between promoting free speech and managing controversial messages in public spaces.

 

New York’s Anti-SLAPP Law

New York’s Anti-SLAPP (Strategic Lawsuit Against Public Participation) law is also highly relevant in the context of cancel culture, especially for cases involving public figures. This statute protects defendants from lawsuits intended to silence free speech on matters of public interest. In 2020, New York amended the law to broaden protections, allowing it to cover speech on any issue of public concern.

In Gottwald v. Sebert (aka Kesha v. Dr. Luke), New York’s Court of Appeals upheld a high legal standard for defamation claims made by public figures, by requiring them to prove actual malice. This means Dr. Luke would need to show that Kesha knowingly made false statements or acted with reckless disregard. The court’s decision highlights the strong free speech protections that apply to public figures, making it difficult for them to win defamation cases unless they provide clear evidence of malice. This reflects how cancel culture incidents involving public figures are subject to stricter legal standards.

 

Social Media Platforms: Responsibility and Liability

Social media platforms like Twitter, Facebook, and Instagram play an important role in cancel culture by enabling public criticism and allowing rapid, widespread responses. Section 230 shields platforms from liability for user-generated content, so they typically aren't held liable if users post defamatory or harmful content. However, recent Supreme Court decisions leaving Section 230 protections intact highlight the tension between free speech and holding platforms accountable. These decisions have affirmed that platforms aren't liable for third-party content, which affects the spread of cancel culture by limiting individuals' ability to hold platforms accountable for hosting potentially defamatory or harmful content.

 

Legal Recourse for the Cancelled

For individuals targeted by cancel culture, legal options are limited but exist. Potential actions include:

  • Defamation lawsuits: If individuals can prove they were defamed, they may recover damages.
  • Privacy claims: If personal information is shared publicly without consent, the individual may have grounds for an invasion of privacy claim.
  • Wrongful termination suits: If cancel culture leads to job loss, employees may have grounds for legal action if the termination was discriminatory or violated their rights.

Pursuing legal action can be difficult, especially given New York's high standard for defamation and its expanded anti-SLAPP protections. In cases involving public figures, plaintiffs face many obstacles due to the requirement of proving actual malice.

 

Looking Ahead: Can the Law Catch Up with Cancel Culture?

As cancel culture continues to evolve, legislatures will continue to face challenges in determining how best to regulate it. Reforms in privacy laws, online harassment protections, and Section 230 could provide clearer boundaries, but any change will have to account for free speech protections. Cancel culture poses a unique legal and social challenge, as public opinion on accountability and consequences continues to evolve alongside new media platforms. Balancing free expression with protections against reputational harm will likely remain a major challenge for future legal developments.

When Social Media Brand Deals Sour: The Case for Promissory Estoppel in Influencer Agreements

In a world now driven by social media, the advertising industry has been taken over by influencer brand deals and paid product placement.  Businesses, both small and large, are utilizing, and sometimes relying solely on, influencers to promote their products.  Most of these brand deals are negotiated through formal agreements and contracts, clearly outlining the actions expected of each party.  One common way businesses engage in this marketing is by providing influencers with their products in exchange for exposure.  This typically involves the influencer posting a photo or video on social media that reviews or recommends the product to their audience.  Thus, a review or recommendation from an influencer with a bigger audience is far more valuable.  However, for smaller businesses that do not have prepared contracts for this type of exchange, reliance on an influencer's informal agreement to review a product can lead to misunderstandings.  This blog post explores a recent TikTok controversy where this type of scenario unfolded, involving beauty influencer Mikayla Nogueira and Matthew Stevens, the owner of Illusion Bronze, a custom self-tanning product.  Could promissory estoppel, a doctrine in contract law, provide a solution where there is only an informal agreement for a product review?

The Controversy: Mikayla Nogueira and Illusion Bronze

In 2022, Matthew Stevens, the owner of Illusion Bronze, reached out to beauty influencer Mikayla Nogueira via Instagram direct messages, seeking a video reviewing his custom sunless tanner line.  Following their interaction, Nogueira allegedly agreed to review the product “ASAP.”  Nogueira is known for her product reviews, and previously mentioned in one of her videos that one challenge with reviewing products from small, independent brands is their limited inventory.  These startup brands often struggle to handle the sudden surge in demand from her audience, leading to website crashes and quick sellouts, leaving her audience frustrated and feeling snubbed.


Relying on her promise to review the product "ASAP" and keeping Mikayla's concerns in mind, Stevens, financed by a loan through Shopify, purchased $10,000 worth of inventory, preparing for the surge in sales that typically accompanies a product review from a major influencer like Nogueira. Stevens waited some time with no review, then reached out to Mikayla for reassurance that she would stick to her promise (and even received it).  After a few months, Stevens posted a video explaining the situation, accusing Nogueira of failing to honor her promise and claiming she was to blame for financial harm to his business.


Nogueira responded by stating that there was no formal agreement obligating her to review the product and that Stevens' financial decision was his own.  The dispute escalated via public video responses, with Nogueira insisting that Stevens was trying to rely on her audience for his success, while Stevens felt that her promise was the only reason he took such costly steps. Despite there being no formal agreement requiring Nogueira to review the product within a certain time frame, this situation poses an interesting legal question: Could Stevens have a valid claim under promissory estoppel? Or is this just a risk of doing business, as some seasoned public figures have commented:

@bethennyfrankel on TikTok: "I'm team @Mikayla Nogueira ALL DAY errday"

An implied agreement between the two?

Promissory estoppel is a principle in contract law that enforces a promise even in the absence of a formal contract, provided certain conditions are met.  Under this doctrine, if one party makes a promise, and the other party reasonably relies on that promise to their detriment, the promisor is estopped from arguing that the promise is unenforceable due to a lack of formal contract.

To succeed in a promissory estoppel claim, the following elements must be met:

  1. A clear and definite promise. There must be a clear promise made by the promisor.
  2. Reasonable reliance. The promisee must have reasonably relied on the promise.
  3. Detriment. The promisee must have suffered a detriment due to their reliance on the promise.
  4. Injustice. Some remedy is necessary to avoid an injustice.

In this case, Nogueira's message indicating she would review the product "ASAP" might be considered a clear enough promise to satisfy the first requirement.  Nogueira had publicly expressed a valid concern about reviewing small businesses that are not capable of handling a large influx of orders.  Thus, Stevens' advance purchase of $10,000 worth of product might have been a reasonable step to take in reliance on her promise to review.  Since he was expecting a review from her, likely leading to a high influx of orders, he took steps to prepare his business for this scenario and avoid consumer frustration.  Lastly, Stevens' financial loss from the unsold inventory and any interest on the loan through Shopify may be considered a detriment resulting from his reliance.  With those first three elements met, there is a possibility that injustice could be avoided only by treating their exchange as a legally binding promise.

From a legal standpoint, Nogueira might defend her position by claiming that her statement was not a formal promise but merely an expression of intent.  This is especially plausible given that, in her responses, she claimed that she "was going to get to it," admitting that she took too long and should have made the video sooner. With that, there may be a valid argument that while there was some informal agreement, there was no urgency or deadline in place.  This might make it unreasonable to hold Nogueira liable for an implied contract that she had not technically breached (yet).  She might also argue that Stevens acted unreasonably by relying on her statement without securing a formal agreement or awaiting some notification from Nogueira that she had recorded the video and was preparing to post it.

This controversy raises important considerations about the relationship between influencers and brands, and how these types of marketing agreements should be arranged.  In traditional commercial settings, contracts mitigate the risk of situations like the Illusion Bronze controversy by ensuring that both parties understand their obligations.  However, social media interactions are far more casual.  The influencer economy may at times operate on less formal interactions, where DMs and verbal agreements form the basis of understanding between parties.

Implications for Influencers and Brands

For influencers, the takeaway is clear: avoid making promises unless you are prepared to fulfill them, or at least have a standard intake process for brand deals that clearly outlines obligations and timelines. This also serves as a reminder that businesses, especially small brands, might be making decisions based on these interactions, because influencers can serve as a direct liaison to their target audience.  Simple steps such as including disclaimers in communications and clarifying the existence (or absence) of obligations or guarantees at every step could go a long way toward avoiding miscommunication and misplaced reliance.

For upcoming independent brands, let this be a lesson to formalize agreements before making financial decisions.  While it may feel natural to seek out influencer marketing informally on social media, small businesses should prioritize retaining their capital no matter what these interactions sound like.  There are real economic stakes when it comes to making investments based on words.

Conclusion

The economy is clearly evolving with social media, and along with it evolve the business efforts and strategies of brands everywhere. However, the legal principles governing these interactions remain grounded in traditional doctrines such as promissory estoppel.  These doctrines and the law may not evolve as fast as e-commerce, which could make the difference in an influencer's liability to brands who seek exposure.  As influencer marketing becomes key in the online marketplace, trust and reputation are everything.  Therefore, both parties stand to benefit from clearer terms and understandings of their obligations to each other.

 

 

 

The Internet, Too Big for One Nation to Handle?

Social media is a powerful tool for individuals to exchange ideas and messages quickly. What makes social media so powerful? It allows individuals to spread and exchange information in an instant, and with anonymity. Social media platforms are built in a way that allows accounts to be created under which an individual or even an entity may disguise themselves and post (such platforms include Instagram, X, TikTok, etc.). Platforms are aware of the power of creating an account and spreading information, whether it is true or not. This power has spawned several issues to which the platforms must adjust. However, the liability and the need for the platforms to adjust depend on where they operate.

Here in the United States, we recognize the need for ethics and the responsibility of upholding a fiduciary duty, regardless of whether an individual is a lawyer or works in another profession that requires such a duty. In law, we have the Model Rules of Professional Conduct. We recognize that certain positions of power in business transactions can be abused to take advantage of the other party. However, there is a limit to liability, as the United States is a capitalist, free-market economy. Do we want individuals to enter into business deals without doing their due diligence, on the theory that the law could remedy losses caused by a business partner not upholding their duties? No; there would be no end to the number of cases to be heard, and it would be unfair to the courts and, at some point, to business partners. Bad business decisions are made all the time, just as bad business deals are made. Businesses must adjust and do the work required to ensure they do not make bad choices and find themselves suffering from them. Businesses are mature enough to face the consequences of a bad decision and work through it to get back into a profitable position.

Well, should these platforms owe a certain duty to the individuals on them? With Section 230 of the 1996 Communications Decency Act, the position this country has taken is that it is the user, not the provider of the service on which questionable activity is posted, who is liable. Therefore, liability for activity that can be deemed hateful, inciting violence, or harmful falls on the account or user who posted it, not on the individual or company that gives that user a platform (a place through which the user may spread such questionable activity to other users). According to PBS, the principle dates back to the 1950s, when bookstores were held liable for the content of the books they sold; the Supreme Court determined that holding someone liable for someone else's content "created a 'chilling effect.'"


EUROPE

The European Union (EU) takes a different view of platform liability. The EU, a union of European nations that come together in the pursuit of peace, believes consumer protection should be the priority when it comes to online "Digital Services".

The EU established the Digital Services Act, laying out certain requirements for digital service providers, online intermediaries, and platforms to ensure "user safety, protect fundamental rights, and create a fair and open online platform environment." It is fair to say that Europe sees the internet as a matter too big for any one nation to manage on its own. By February 2024, the Act applied to all types of platforms covered by the EU's terminology.

The EU acknowledges that the internet and its growth are unpredictable. Because of that unpredictability, it is best to provide users with some safety, as it is evident the internet is needed to complete everyday tasks. If the internet is something that individuals need in order to survive in the modern technological world, then it must be regulated. The most effective way to regulate global platforms is to have a group of nations, such as the EU, come together and decide on a unified set of regulations.

Russia makes its own attempts to regulate platforms in its own country. The way Russia has been handling Google is a prime example. According to Ty Roush of Forbes, “The Russian government is attempting to fine Google about $20 decillion, a figure with 34 zeros that’s exponentially larger than the world’s economy, over a decision by YouTube—owned by Google’s parent Alphabet—to block channels run by Russian state-run media, according to Russian officials.”

In the last few years, Russia has been holding Google and its Russian subsidiary accountable for the allegations that Russia has presented. In response, Google has decided to withdraw from Russia over time, to the point that its presence will eventually be nonexistent there. Google will not, and cannot, pay the fine that Russia is asking.

It seems that regulating a platform like Google, whose presence spans the world across many nations, will require more than one nation to come together and determine a standard for the platforms. The platforms have established that they are here to stay for a while. One singular nation cannot handle a platform of the scale and nature of Google. Google can decide to leave a large, profitable market like Russia because its established presence in most nations across the globe outweighs the benefits of staying in Russia and fighting its rules to keep a presence there.

North America/U.S.A


North America, home to large, influential nations such as Mexico, Canada, and the United States, does not have a unified body such as the European Union. Therefore, regulation is left to each individual nation to come up with its own set of rules, or to decide whether to adopt any at all. With Section 230, the United States has relieved the platforms of liability. However, we are aware that individuals' mental health worsened as the internet grew. It does not look like the nation's mental health is getting any better, and it will certainly not improve with platforms providing places for individuals, or even bots, to spread harmful activity.

It is time for nations across the globe to come together and acknowledge that the internet and its platforms are a global matter to which users are very susceptible. The only way to protect global citizens from the harm the platforms can enable is to establish a unified mindset on handling the internet. It is best to see just how effective the DSA proves to be for the EU; perhaps, one day, the United Nations may establish a treaty among nations under which platforms are regulated with users' safety as the priority.

AI in the Legal Field

What is AI? 


AI, or Artificial Intelligence, refers to a set of technologies that enables computers to simulate human intelligence and perform tasks that typically require human cognition. Examples of AI applications include ChatGPT, Harvey.AI, and Google Gemini. These systems are designed to think and learn like humans, continually improving as users interact with them. They are trained on data through algorithms, which allows them to enhance their performance over time without being explicitly programmed for every task. Unlike Google, which provides search results based on web queries, ChatGPT generates human-like answers to prompts by learning from examples.

Cost-Benefit Analysis of AI in the Legal Field 


The primary areas where AI is being applied in the legal field include: reviewing documents for discovery, generally referred to as technology-assisted review (TAR); legal research through automated searches of case law and statutes; contract and legal document analysis; proofreading; and document organization.

One of the main reasons AI is used in the legal field is that it saves time. By having AI conduct routine tasks, such as proofreading, attorneys' time is freed up to focus on more complex work. This increased efficiency may also enable law firms to reduce their staff headcount and save money. For example, without AI, proofreading a document can take hours, but with AI, it can be completed in less than a minute, identifying and correcting errors instantly. As they say, "time is money." AI is also valuable because it can produce high-quality work. Since AI doesn't get tired or become distracted, it can deliver consistent, polished results. Tasks like document review, proofreading, and legal research can be tedious, but AI handles the initial "heavy lifting," reducing stress and frustration for attorneys. As one saying goes, "No one said attorneys had to do everything themselves!"

While AI has the potential to save law firms money, I do not think the promised cost reduction always materializes in the way one might anticipate. It may not be worth it for a law firm to use AI because the initial investment in AI technology can be substantial. The cost can range from $5,000 for simple models to over $500,000 for complex models. After law firms purchase an AI system, they then have to train their staff to use it effectively and upgrade the software regularly. "These costs can be substantial and may take time to recoup." Law firms might consider doing a cost-benefit analysis before determining whether using AI is the right decision for them.
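
As a rough illustration of what such a cost-benefit analysis might look like, the short sketch below computes a break-even point. Every number in it is invented for the example (license cost, upkeep, hours saved, hourly value) and would need to be replaced with a firm's own figures.

```typescript
// Hypothetical break-even sketch; all figures below are assumptions made up
// for illustration, not data from this post.

const upfrontCost = 50_000;      // assumed license and setup cost, in dollars
const annualUpkeep = 10_000;     // assumed yearly training and upgrade cost
const hoursSavedPerMonth = 40;   // assumed attorney hours saved each month
const hourlyValue = 300;         // assumed value of one attorney hour, in dollars

const monthlySavings = hoursSavedPerMonth * hourlyValue;   // 12,000
const monthlyUpkeep = annualUpkeep / 12;                   // ~833
const netMonthlySavings = monthlySavings - monthlyUpkeep;  // ~11,167

// Months until the upfront investment is recouped under these assumptions.
const breakEvenMonths = upfrontCost / netMonthlySavings;

console.log(`Break-even after roughly ${breakEvenMonths.toFixed(1)} months`);
// Prints: Break-even after roughly 4.5 months (under these assumed numbers)
```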

Problems With AI in the Legal Field 

One issue with AI applications is that they can perform tasks, such as writing and problem-solving, in ways that closely mimic human work. This makes it difficult for others to determine whether the work was created by AI or a human. For example, drafting documents now requires less human input because AI can generate these documents automatically. This raises concerns about trust and reliability, as I and others may prefer to have a human complete the work rather than relying on AI, due to skepticism about AI's accuracy and dependability.

A major concern with the shift toward AI use is the potential spread of misinformation. Lawyers who rely on AI to draft documents without thoroughly reviewing what is produced may unknowingly present "hallucinations," which are made-up or inaccurate pieces of information. This can potentially lead to serious legal errors. Another critical issue is the risk of confidential client information being compromised. When lawyers put sensitive client data into AI systems to generate legal documents, they are potentially handing that data over to large technology companies. These companies usually prioritize their commercial interests, and without proper regulation, they could misuse client data for profit, potentially compromising client confidentiality, enabling fraud, and threatening the integrity of the judicial system.

A Case Where Lawyers Misused ChatGPT in Court 

As a law student who hopes to become a lawyer one day, it is concerning to see lawyers facing consequences for using AI. However, it is also understandable that if a lawyer does not use AI carefully, they will get sanctioned. Two of the first lawyers to use AI in court and encounter "hallucinations" were Steven Schwartz and Peter LoDuca. The lawyers were representing a client in a personal injury lawsuit against an airline company. Schwartz used ChatGPT to help prepare a filing, allegedly unaware that the AI had fabricated several case citations. Specifically, the AI cited at least six cases, including Varghese v. China Southern Airlines and Shaboon v. Egypt Air, that the court found did not exist. The court said these cases had "bogus judicial decisions with bogus quotes and bogus internal citations." As a result, the attorneys were fined $5,000. Judge P. Kevin Castel said he might not have punished the attorneys if they had come "clean" about using ChatGPT to find the purported cases the AI cited.

AI Limitations in Court


As of February 2024, about 2% of the more than 1,600 United States District and Magistrate judges had issued 23 standing orders addressing the use of AI. These standing orders mainly block the use of AI or place guidelines on it, owing to concerns about the technology's accuracy. Some legal scholars have raised concerns that these orders might discourage attorneys and self-represented litigants from using AI tools. I think that instead of completely banning the use of AI, one possible approach could be requiring attorneys to disclose to the court when they use AI in their work. For example, U.S. District Judge Leslie E. Kobayashi of Hawaii wrote in her order, "The court directs that any party, whether appearing pro se or through counsel, who utilizes any generative artificial intelligence (AI) tool in the preparation of any documents to be filed with the court, must disclose in the document that AI was used and the specific AI tool that was used. The unrepresented party or attorney must further certify in the document that the person has checked the accuracy of any portion of the document drafted by generative AI, including all citations and legal authority."

Ethicality of AI 

Judicial officers include judges, magistrates, and candidates for judicial office. Under Rule 2.5 of the Model Code of Judicial Conduct (MCJC), judicial officers have a responsibility to maintain competence, which includes staying up to date with technology. Similarly, Rule 1.1 of the Model Rules of Professional Conduct (MRPC) requires lawyers to provide competent representation to their clients, which includes technological competence.

The National Counterintelligence and Security Center (NCSC) emphasizes that both judicial officers and lawyers must have a basic understanding of AI and be aware of the risks of using AI for research and document drafting. Judicial officers must also uphold their duty of confidentiality. This means they should be cautious when they or their staff enter sensitive or confidential information into AI systems for legal research or document preparation, and they should ensure that the information is not retained or misused by the AI platform. I was surprised to learn that while the NCSC provides these guidelines, they are not legally binding; they are only strongly recommended.

Members of the legal field should also be aware that there may be additional state-specific rules and obligations depending on the state where they practice. For instance, in April 2024, the New York State Bar Association established a Task Force on AI and issued a Report and Recommendations. The New York guidance notes that "attorneys [have a duty] to understand the benefits, not just the risks, of AI in providing competent and ethical legal representation and allows the use of AI tools to be considered in the reasonableness of attorney fees." In New Jersey, "although lawyers do not have to tell a client every time they use AI, they may have an obligation to disclose the use of AI if the client cannot make an informed decision without knowing." I think lawyers and judicial officers should know their state's rules on AI and make sure they are not using it blindly.

Disclosing the Use of AI 

Some clients have explicitly asked their lawyers not to use AI tools in their representation. For clients who do not express their wishes, however, lawyers wrestle with whether they should disclose that they use AI in their case matters. While there is no clear answer, some lawyers have decided to discuss their planned use of AI with clients and obtain consent before proceeding, which seems like a sensible approach.

Rule 1.4(a)(2) of the American Bar Association (ABA) Model Rules of Professional Conduct addresses attorney-client communication. It provides that a lawyer must "reasonably consult with the client about the means by which the client's objectives are to be accomplished." This raises the question of whether the rule covers the use of AI and, if it does, how much AI assistance must be disclosed. For instance, should using ChatGPT to draft a brief require disclosure when assigning the same task to a law student does not? These are some of the ethical questions currently being debated in the legal field.

The New Face of Misinformation: Deepfakes and Their Legal Challenges

What if I told you that the video of a celebrity endorsing a product or a politician delivering a shocking confession wasn't real? That's the unsettling reality deepfakes have created, where seeing and hearing are no longer believing. Deepfakes are a double-edged sword: they can be used for entertainment and creative expression, but they also pose significant risks to privacy, spread misinformation, and enable manipulation. In a 2022 iProov study, 71% of global respondents said they did not know what a deepfake was, yet 57% believed they could distinguish a real video from a deepfake. This gap in knowledge is alarming, considering that deepfake technology can create entirely convincing, fictional representations of public figures.

What Are Deepfakes, and Why Are They Problematic?

A deepfake is a highly realistic image, audio clip, or video generated by artificial intelligence using deep learning models to make someone appear to say or do things they never did. Think face swaps, cloned voices, or AI-generated speeches. Deepfakes have been used to defame individuals, including public figures, by falsely attributing statements to them, and to create non-consensual explicit content, such as deepfake pornography.

A prime example came in January 2024, when sexually explicit deepfake images of Taylor Swift flooded social media, forcing platforms like X (formerly Twitter) to take drastic action by removing content and banning the accounts involved. The episode raised questions about social media companies' responsibility for controlling the spread of harmful AI-generated content.

Legal Ramifications: Are Deepfakes Protected by the First Amendment?

Deepfakes exist in a legal gray area. Defamation and copyright law can sometimes address the misuse of AI, but they do not fully account for the complexities of rapidly evolving AI-generated content. This creates tension with constitutional rights, particularly the First Amendment's protection of free speech. Courts now face difficult decisions: whether and how to punish deepfake creators, where to draw the line between free speech and harmful content, and how to address the legal ambiguity that has complicated federal regulation and prompted some states to take action.

The DEEPFAKES Accountability Act: A Federal Response

While no comprehensive federal regulation of deepfakes currently exists, the DEEPFAKES Accountability Act provides a framework for future legislation. Initially introduced in 2019 and reintroduced by Representative Yvette Clarke in 2023, the bill would require clear disclosures for AI-generated content, give victims the right to seek damages, and introduce criminal penalties for malicious uses of deepfakes. It specifically targets harmful applications such as spreading disinformation, manipulating elections, and creating non-consensual explicit content.

If enacted, the law would empower agencies like the FTC and DOJ to enforce these regulations. However, striking a balance between protecting victims and safeguarding free speech presents a challenge: courts would likely need to evaluate each case individually, carefully weighing the totality of the circumstances surrounding the use of deepfake technology. Deepfakes also pose significant challenges within legal practice itself, particularly regarding the authenticity of evidence.

The “Deepfake Defense” – How Deepfakes Impact the Legal System

Deepfakes present evidentiary challenges in courtrooms. Lawyers and defendants can now argue that incriminating videos are fabricated, and in recent high-profile cases defendants have cast doubt on the authenticity of video evidence by claiming it was AI-manipulated. Judge Herbert B. Dixon Jr. has coined this the "deepfake defense" and discusses its implications in his article, "The Deepfake Defense: An Evidentiary Conundrum." Judge Dixon notes that because deepfakes are videos created or altered with the aid of AI, courts increasingly face situations where it is difficult to determine whether the evidence before them is authentic.

This defense has already emerged in high-profile cases. Attorneys for defendants charged with storming the Capitol on January 6, 2021, argued that the jury could not trust video evidence showing their clients at the riot because there was no assurance the footage was real or unaltered. Experts have proposed amendments to the Federal Rules of Evidence to clarify the parties' responsibilities in authenticating evidence; without a consistent approach, the risk of misjudgment remains, underscoring the urgent need for legal systems to adapt to the realities of AI technology. These challenges also prolong trials, undermining judicial economy and wasting valuable resources. Courts are increasingly forced to rely on experts to authenticate digital evidence by analyzing factors such as eye movements, lighting, and speech patterns. As deepfakes become more advanced, however, even these experts struggle to detect them, raising significant concerns about how the judicial system will adapt.

What’s Next: A New Era of Regulation and Responsibility?

Deepfakes force us to rethink the role of social media platforms in preventing the spread of misinformation. Platforms like Instagram, X, and Facebook have started labeling manipulated media, but should these warnings be mandatory? Should platforms be held liable for the spread of deepfakes? These are critical questions that legislators and tech companies must address. Moving forward, balancing regulation and innovation will be key. Policymakers must develop laws that protect victims without stifling technological progress. At the same time, raising public awareness about the risks of deepfakes is essential.

In a world where deepfake technology is becoming increasingly accessible, trust in digital media is more fragile than ever. Laws and policies are starting to catch up, but it will take time to fully address the legal challenges deepfakes present. Until then, the best defense is an informed public, aware of the power and pitfalls of artificial intelligence.

 

Navigating Torts in the Digital Age: When Social Media Conduct Leads to Legal Claims

Traditional tort law was developed in a world of face-to-face interactions, with the purpose of compensating harm, deterring wrongful conduct, and ensuring accountability for violations of individual rights. The digital age, however, has created scenarios that often do not fit neatly within existing legal frameworks. This blog post explores how conduct on social media, whether intentional or accidental, can lead to tort claims such as defamation, right of publicity, or even battery, and how courts might apply tort law, sometimes in unusual ways, to address these modern challenges.

Torts and Social Media: Where the Two Intersect

Some traditional tort claims, like defamation, may seem to extend naturally to social media. At the beginning of the social media age, however, courts struggled with how to address wrongful online conduct that harmed individuals, and applying existing law to the digital world required creative legal thinking.

  1. Battery in the Digital Space: Eichenwald v. Rivello

One of the most compelling cases pushing the boundaries of tort law is Eichenwald v. Rivello. Kurt Eichenwald, a journalist with epilepsy, had publicly disclosed his condition and was a frequent critic of certain political and social issues. John Rivello, a social media user likely motivated by animosity toward Eichenwald's public commentary, sent him a tweet containing a GIF of flashing strobe lights designed to trigger his epilepsy, accompanied by the message, "You deserve a seizure for your post." When Eichenwald opened his Twitter notifications, the GIF caused him to suffer a seizure. The case posed a novel legal issue at the time: can sending a harmful image online constitute physical contact?

Although battery traditionally requires physical contact, the court in Eichenwald held that Rivello's conduct satisfied the elements of battery: the strobing GIF made indirect contact with Eichenwald's cornea and undeniably caused him harm. To reach that result, the court had to stretch traditional tort principles to accommodate a claim arising from digital conduct.

  2. Defamation and the Viral Nature of Social Media

Another tort commonly seen in social media cases is defamation. Because statements can be shared quickly with a wide audience, defamation has become the claim most frequently arising out of social media interactions. One incident that can be analyzed under this claim is the "Central Park Karen" controversy. In 2020, Amy Cooper's altercation with an African American birdwatcher was recorded, shared online, and went viral. Following the incident, her employer, Franklin Templeton, made a public statement condemning racism, and Cooper was fired.

Cooper sued for defamation, arguing that the viral video and the public statements harmed her reputation. The court dismissed her claim, reasoning that the employer's statements were opinions, which are protected under the First Amendment. The controversy serves as a cautionary tale about not only online behavior but also conduct in public: what people do in public is now routinely recorded, and those recordings can spread like wildfire. Cooper herself writes that the video haunts her to this day.

As the dismissal of Cooper's case illustrates, the key to defamation claims is distinguishing false statements of fact from protected opinions, a distinction that matters especially on social media, where free-flowing ideas and opinions can cause significant reputational harm. In the social media age, analyzing defamation claims requires balancing free speech with the protection of individuals' reputations.

  3. Cancel Culture and Tortious Interference with Business Relations

Amy Cooper was, in effect, "canceled," though the consequences played out in the real world rather than only on social media. The rise of cancel culture poses a particular threat to influencers and public figures who rely on brand deals and partnerships for their livelihoods. In many controversies, the "cancellation" results from fair criticism of the public figure. But what happens when it results from false or harmful misinformation spread online? While defamation may be one avenue, tortious interference with business relations might also come into play.

An example fake tweet created using Tweetgen.
Disclaimer: This tweet is a fake example and was not actually posted by NASA. It is being used here purely for illustrative purposes.

Imagine an influencer who becomes the target of a viral campaign based on photoshopped, offensive tweets. As the "screenshots" circulate online, the influencer's followers drop, brand deals are canceled, and new partnerships become difficult to secure. Because the false information disrupted business relationships, this scenario may give rise to a claim for tortious interference, especially if the false content was created maliciously to target the influencer's success.

Tortious interference claims require showing that a third party intentionally caused harm to the plaintiff’s business relationships. In the context of social media, competitors or malicious individuals could spread misinformation that causes financial loss.

The Future of Torts and Social Media

As social media continues to shape how we communicate, courts face the challenge of adapting traditional tort law to address new types of harm in the digital age. Although social media is no longer considered a "new" concept, courts will have to similarly adapt old law to newer technologies, such as artificial intelligence. Cases like Eichenwald v. Rivello demonstrate how legal frameworks can be stretched to accommodate harm caused by online conduct, and claims such as defamation, tortious interference, and the right of publicity highlight the real consequences of social media scandals. As we navigate social media spaces, it is important for individuals, whether influencers, content creators, or casual users, to recognize when their actions cross the line into actionable torts. Understanding the potential legal consequences of behavior both online and in public is essential for avoiding disputes and protecting one's rights in this rapidly changing environment.

 

 
