Due Process vs. Public Backlash: Is it Time to Cancel Cancel Culture?

Throughout history, people have challenged and criticized each other’s ideas and opinions. But with the rise of internet access, and social media in particular, the way these interactions unfold has changed. Now it’s easy for anyone to call out someone else’s behavior or words online, and the reach of social media makes it simple to gather a large group of people to join in. What starts as a single person’s post can quickly turn into a bigger movement, with others sharing the same views and adding their own criticism. This is cancel culture.

Cancel culture has become a highly relevant topic in today’s digital world, especially because it often leads to serious public backlash and consequences for people or companies seen as saying or doing something offensive. The phrase “cancel culture” derives from the verb cancel, here meaning to cut ties with someone. In the abstract, the practice aims to demand accountability, but it also raises important legal questions. When does criticism go too far and become defamation? How does this online backlash affect a person’s right to fair treatment? And what legal options are available for those who feel unfairly targeted by “cancel culture”?

 

What Is Cancel Culture?

Cancel culture is a collective online call-out and boycott of individuals, brands, or organizations accused of offensive behavior, often driven by social media. Critics argue that it can lead to mob justice, where people are judged and punished without proper due process. On the other hand, supporters believe it gives a voice to marginalized groups and holds powerful people accountable in ways that traditional systems often fail to. It’s a debate about how accountability should work in a digital age—whether it’s a tool for justice or a dangerous trend that threatens free speech and fairness.

The impact of cancel culture can be extensive, leading to reputational harm, financial losses, and social exclusion. When these outcomes affect a person’s livelihood or well-being, the legal implications become significant, because public accusations, whether true or false, can cause real damage.

In a Pew Research study from September 2020, 44% of Americans said they had heard at least a fair amount about the term “cancel culture,” with 22% saying they had heard a great deal. Familiarity with the term varies by age, with 64% of adults under 30 aware of it, compared to 46% of those ages 30-49 and only 34% of people 50 and older. Individuals with higher levels of education are also more likely to have heard of cancel culture. Political affiliation shows little difference in awareness, although more liberal Democrats and conservative Republicans tend to be more familiar with the term than their moderate counterparts.

 

Cancel Culture x Defamation Law

In a legal context, defamation law is essential in determining when online criticism crosses the line. Defamation generally involves a false statement presented as fact that causes reputational harm.

To succeed in a defamation lawsuit, plaintiffs must show:

  • a false statement purporting to be fact;
  • publication or communication of that statement to a third person;
  • fault amounting to at least negligence; and
  • damages, or some harm caused to the reputation of the person or entity who is the subject of the statement.

US Dominion, Inc. v. Fox News Network, Inc. is a defamation case highlighting how the media can impact reputations. Dominion sued Fox News for $1.6 billion, claiming the network falsely accused it of being involved in election fraud during the 2020 presidential election. Fox News defended itself by arguing that it was simply reporting on claims made by others, even if those claims turned out to be false. The case settled in April 2023 for $787.5 million, showing that media outlets can be held accountable when they spread information without regard for the truth. This is similar to how cancel culture works – individuals or companies can face backlash and reputational damage based on viral accusations that may not be fully verified. Ultimately, the case highlights how defamation law can provide legal recourse for those harmed by false public statements while underscoring the balance between free speech and accountability in today’s fast-paced digital environment.

 

Free Speech vs. Harm: The Tensions of Cancel Culture

Cancel culture brings to light the ongoing tension between free speech and reputational harm. On the one hand, it gives people a platform to criticize others and hold them accountable for their actions. On the other, the consequences of these public accusations can be severe, leading to job loss, emotional distress, and social isolation, sometimes beyond what the law might consider fair.

While the First Amendment protects free speech, it does not protect defamation and certain other narrow categories of harmful speech. This means people can face legal consequences for their words when those words cause the kind of harm the law recognizes. But in the realm of cancel culture, the consequences can feel disproportionate: the public reaction can go beyond what might be considered reasonable or just. This raises concerns about fairness and justice – whether the punishment fits the offense – especially when the public can amplify the damage in ways the legal system may not address.

In Cajune v. Indep. Sch. Dist. 194, the Eighth Circuit addressed a First Amendment dispute over the display of “Black Lives Matter” (BLM) posters in classrooms. The case turned on whether the district’s poster policy, under which teachers could choose whether to display the BLM posters, restricted or supported free speech. The plaintiffs argued that this limitation on expression resembles the broader dynamics of cancel culture, where certain viewpoints can be suppressed or silenced. Much like cancel culture, where individuals or ideas are “canceled” for holding or expressing controversial views, the case touches on how institutions control public expression. If the district restricts messages like “All Lives Matter” or “Blue Lives Matter,” that could be seen as institutional “canceling” of dissenting or unpopular opinions, illustrating how cancel culture can narrow the range of speech. The case shows the clash between promoting free speech and managing controversial messages in public spaces.

 

New York’s Anti-SLAPP Law

New York’s Anti-SLAPP (Strategic Lawsuit Against Public Participation) law is also highly relevant in the context of cancel culture, especially for cases involving public figures. This statute protects defendants from lawsuits intended to silence free speech on matters of public interest. In 2020, New York amended the law to broaden protections, allowing it to cover speech on any issue of public concern.

In Gottwald v. Sebert (aka Kesha v. Dr. Luke), New York’s Court of Appeals upheld a high legal standard for defamation claims brought by public figures, requiring them to prove actual malice. This means Dr. Luke would need to show that Kesha knowingly made false statements or acted with reckless disregard for the truth. The court’s decision highlights the strong free speech protections that apply to speech about public figures, making it difficult for them to win defamation cases unless they provide clear evidence of malice. It also reflects how cancel culture incidents involving public figures are subject to stricter legal standards.

 

Social Media Platforms: Responsibility and Liability

Social media platforms like Twitter, Facebook, and Instagram play an important role in cancel culture by enabling public criticism and rapid, widespread responses. Section 230 of the Communications Decency Act shields platforms from liability for user-generated content, so they typically aren’t held liable when users post defamatory or harmful material. Recent Supreme Court decisions declining to narrow Section 230’s protections highlight the tension between free speech and platform accountability: by leaving intact the rule that platforms aren’t liable for third-party content, they limit individuals’ ability to hold platforms accountable for hosting potentially defamatory or harmful content, which in turn shapes how cancel culture spreads.

 

Legal Recourse for the Cancelled

For individuals targeted by cancel culture, legal options are limited but exist. Potential actions include:

  • Defamation lawsuits: If individuals can prove they were defamed, they may recover damages.
  • Privacy claims: Those whose personal information is shared publicly without consent may bring invasion-of-privacy claims.
  • Wrongful termination suits: Employees who lose their jobs to cancel culture may have grounds for legal action if the termination was discriminatory or otherwise violated their rights.

Pursuing legal action can be difficult, however, especially given New York’s high standard for defamation and its expanded anti-SLAPP protections. Public-figure plaintiffs face additional obstacles because of the requirement of proving actual malice.

 

Looking Ahead: Can the Law Catch Up with Cancel Culture?

As cancel culture continues to evolve, legislatures will continue to face challenges in determining how best to regulate it. Reforms to privacy laws, online harassment protections, and Section 230 could provide clearer boundaries, but any change will have to account for free speech protections. Cancel culture poses a unique legal and social challenge, as public opinion on accountability and consequences continues to evolve alongside new media platforms. Balancing free expression with protections against reputational harm will likely remain a major challenge for future legal developments.

The New Face of Misinformation: Deepfakes and Their Legal Challenges

What if I told you that the video of a celebrity endorsing a product, or of a politician delivering a shocking confession, wasn’t real? That’s the unsettling reality deepfakes have created, where seeing and hearing are no longer believing. Deepfakes are a double-edged sword: they can be used for entertainment and creative expression, but they also pose significant risks to privacy, spread misinformation, and enable manipulation. In a 2022 iProov study, 71% of global respondents were unaware of what deepfakes were, yet 57% believed they could distinguish between real videos and deepfakes. This gap in knowledge is alarming, considering deepfake technology can create entirely convincing yet fictional representations of public figures.

What Are Deepfakes, and Why Are They Problematic?

A deepfake is a highly realistic image, audio clip, or video generated by artificial intelligence, using deep learning models, that makes someone appear to say or do things they never did. Think face swaps, cloned voices, or AI-generated speeches. Deepfakes have been used to defame individuals, including public figures, by falsely attributing statements to them, and to create non-consensual explicit content, such as deepfake pornography.
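For readers curious about the mechanics, below is a minimal sketch of the shared-encoder, per-identity-decoder design behind classic face-swap deepfakes. It is an illustrative toy in PyTorch, not the code of any real deepfake tool; the image size, layer shapes, and names are assumptions chosen for brevity.

```python
# Toy version of the classic face-swap architecture: one shared encoder,
# one decoder per identity. Swapping = encode person A, decode with B's decoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: compresses a 64x64 RGB face crop into a latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: reconstructs one person's face from the latent."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person

# Training (omitted): reconstruct person A's faces through decoder_a and
# person B's through decoder_b, sharing the encoder across both identities.
face_a = torch.rand(1, 3, 64, 64)     # stand-in for a real face crop of person A
swapped = decoder_b(encoder(face_a))  # A's pose and expression, B's face
print(swapped.shape)                  # torch.Size([1, 3, 64, 64])
```

The trick is the weight sharing: because faces of both people pass through the same encoder, the latent vector tends to capture pose and expression rather than identity, and each decoder repaints those features with its own person’s likeness.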

A prime example came in January 2024, when sexually explicit deepfake images of Taylor Swift flooded social media, forcing platforms like X (formerly Twitter) to take drastic action by removing content and banning the accounts involved. The episode raised questions about social media companies’ responsibility for controlling the spread of harmful AI-generated content.

Legal Ramifications: Are Deepfakes Protected by the First Amendment?

Deepfakes exist in a legal gray area. Defamation and copyright law can sometimes address the misuse of AI, but they don’t fully account for the complexities of AI-generated content, which evolves faster than the doctrines built to police it. This creates tension with constitutional rights, particularly the First Amendment’s protection of free speech. Courts now face difficult questions: whether and how to punish deepfake creators, where to draw the line between free speech and harmful content, and how to resolve the legal ambiguity that has complicated federal regulation and prompted some states to act on their own.

The DEEPFAKES Accountability Act: A Federal Response

While no comprehensive federal regulation of deepfakes currently exists, the DEEPFAKES Accountability Act provides a framework for future legislation. Initially introduced in 2019 and reintroduced by Representative Yvette Clarke in 2023, the bill would require clear disclosures identifying AI-generated content, give victims the right to seek damages, and introduce criminal penalties for malicious uses of deepfakes. It specifically targets harmful applications such as spreading disinformation, manipulating elections, and creating non-consensual explicit content.

If enacted, the law would empower agencies like the FTC and DOJ to enforce its provisions. Striking a balance between protecting victims and safeguarding free speech rights presents a challenge, however: courts would likely need to evaluate each case individually, carefully weighing the totality of the circumstances surrounding the use of deepfake technology. That technology also poses significant challenges within legal practice itself, particularly regarding the authenticity of evidence.

The “Deepfake Defense” – How Deepfakes Impact the Legal System

Deepfakes present evidentiary challenges in courtrooms. Lawyers and defendants can now argue that incriminating videos are fabricated. In fact, recent high-profile cases have seen defendants cast doubt on the authenticity of video evidence by claiming it was AI-manipulated. This concept has been coined the “deepfake defense” by Judge Herbert B. Dixon Jr., who discusses its implications in his article, “The Deepfake Defense: An Evidentiary Conundrum.” Judge Dixon notes that deepfakes are videos created or altered with the aid of AI, leading to situations where it becomes difficult for courts to determine the authenticity of evidence presented.

Attorneys for defendants charged with storming the Capitol on January 6, 2021, for example, argued that the jury could not trust video evidence showing their clients at the riot, since there was no assurance the footage was real or unaltered. Experts have proposed amendments to the Federal Rules of Evidence to clarify the parties’ responsibilities in authenticating evidence. Without a consistent approach, the risk of misjudgment remains, and the disputes these challenges generate prolong trials, undermining judicial economy and wasting valuable resources. Courts are increasingly forced to rely on experts to authenticate digital evidence, analyzing factors such as eye movements, lighting, and speech patterns. Yet as deepfakes become more advanced, even these experts struggle to detect them, raising significant concerns about how the judicial system will adapt.
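As a concrete, if simplified, example of one eye-movement signal examiners have used: early deepfakes blinked unnaturally rarely, and blink rate can be estimated from the eye aspect ratio (EAR) of Soukupová and Čech (2016), which collapses toward zero whenever the eye closes. The sketch below is a toy illustration rather than forensic-grade code; the 0.21 threshold is a common heuristic, not a standard, and all coordinates and frame values here are made up.

```python
# Toy blink-rate check: count dips of the eye aspect ratio (EAR) below a
# threshold. An abnormally low blink rate was a red flag in early deepfakes.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmarks around one eye, in the standard order."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # first vertical distance
    v2 = np.linalg.norm(eye[2] - eye[4])  # second vertical distance
    h = np.linalg.norm(eye[0] - eye[3])   # corner-to-corner horizontal distance
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, threshold=0.21, min_frames=2):
    """Count dips of the EAR below `threshold` lasting at least `min_frames`."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks

# Made-up landmarks for an open eye; a real pipeline would extract these per
# frame with a face-landmark model (e.g., dlib or MediaPipe).
open_eye = np.array([(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)], dtype=float)
print(f"open-eye EAR: {eye_aspect_ratio(open_eye):.2f}")  # ~0.67; a blink nears 0

# Synthetic 10-second clip at 30 fps containing only two brief blinks.
ears = [0.30] * 300
ears[50:54] = [0.12] * 4
ears[200:204] = [0.12] * 4
rate = count_blinks(ears) / 10 * 60
print(f"{rate:.0f} blinks/min")  # ~12; people typically blink ~15-20 times/min
```

Modern generators have largely learned to blink, which is exactly the arms-race dynamic described above: each detection cue works only until the next generation of models erases it.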

What’s Next: A New Era of Regulation and Responsibility?

Deepfakes force us to rethink the role of social media platforms in preventing the spread of misinformation. Platforms like Instagram, X, and Facebook have started labeling manipulated media, but should these warnings be mandatory? Should platforms be held liable for the spread of deepfakes? These are critical questions that legislators and tech companies must address. Moving forward, balancing regulation and innovation will be key. Policymakers must develop laws that protect victims without stifling technological progress. At the same time, raising public awareness about the risks of deepfakes is essential.

In a world where deepfake technology is becoming increasingly accessible, trust in digital media is more fragile than ever. Laws and policies are starting to catch up, but it will take time to fully address the legal challenges deepfakes present. Until then, the best defense is an informed public, aware of the power and pitfalls of artificial intelligence.

 
