The New Face of Misinformation: Deepfakes and Their Legal Challenges


What if I told you that the video of a celebrity endorsing a product or a politician delivering a shocking confession wasn’t real? That’s the unsettling reality deepfakes have created, where seeing and hearing are no longer believing. Deepfakes are a double-edged sword: while they can be used for entertainment and creative expression, they also threaten privacy, spread misinformation, and enable manipulation. In a 2022 iProov study, 71% of global respondents were unaware of what deepfakes were, yet 57% believed they could distinguish between real videos and deepfakes. This knowledge gap is alarming, considering deepfake technology can create entirely convincing, entirely fictional representations of public figures.

What Are Deepfakes, and Why Are They Problematic?

A deepfake is a highly realistic image, audio clip, or video generated by artificial intelligence using deep learning models to make someone appear to say or do things they never did. Think face swaps, cloned voices, or AI-generated speeches. Deepfakes have been used to defame individuals, including public figures, by falsely attributing statements to them, and to create non-consensual explicit content, such as deepfake pornography.

A prime example came in January 2024, when sexually explicit deepfake images of Taylor Swift flooded social media, forcing platforms like X (formerly Twitter) to take drastic action by removing content and banning the accounts involved. The incident raised questions about social media companies’ responsibility for controlling the spread of harmful AI-generated content.

Legal Ramifications: Are Deepfakes Protected by the First Amendment?

Deepfakes exist in a legal gray area. Defamation and copyright law can sometimes address misuse of AI, but they do not fully account for the complexities of rapidly evolving AI-generated content. This creates tension with constitutional rights, particularly the First Amendment’s protection of free speech. Courts now face difficult decisions: whether and how to punish deepfake creators, where to draw the line between free speech and harmful content, and how to address the legal ambiguity that has complicated federal regulation and prompted some states to take action.

The DEEPFAKES Accountability Act: A Federal Response

While no comprehensive federal regulation of deepfakes currently exists, the DEEPFAKES Accountability Act provides a framework for future legislation. Initially introduced in 2019 and reintroduced by Representative Yvette Clarke in 2023, the bill would require clear disclosures on AI-generated content to inform viewers, give victims the right to seek damages, and introduce criminal penalties for malicious uses of deepfakes. It specifically targets harmful applications such as spreading disinformation, manipulating elections, and creating non-consensual explicit content.

If enacted, this law would empower agencies like the FTC and DOJ to enforce these regulations. However, achieving a balance between protecting victims and safeguarding free speech rights presents a challenge. Courts would likely need to evaluate each case individually, carefully weighing the totality of the circumstances surrounding the use of deepfake technology. Deepfakes also pose significant challenges within legal practice itself, particularly regarding the authenticity of evidence.

The “Deepfake Defense” – How Deepfakes Impact the Legal System

Deepfakes present evidentiary challenges in courtrooms: lawyers and defendants can now argue that incriminating videos are fabricated. In fact, recent high-profile cases have seen defendants cast doubt on the authenticity of video evidence by claiming it was AI-manipulated. Judge Herbert B. Dixon Jr. has coined this the “deepfake defense,” and he discusses its implications in his article “The Deepfake Defense: An Evidentiary Conundrum.” Judge Dixon notes that because deepfakes are videos created or altered with the aid of AI, courts can find it difficult to determine the authenticity of the evidence presented.

This defense has already surfaced in high-profile cases. Attorneys for defendants charged with storming the Capitol on January 6, 2021, argued that the jury could not trust video evidence showing their clients at the riot because there was no assurance the footage was real or unaltered. Courts are increasingly forced to rely on experts to authenticate digital evidence, analyzing factors such as eye movements, lighting, and speech patterns; yet as deepfakes become more advanced, even these experts struggle to detect them. Experts have therefore proposed amendments to the Federal Rules of Evidence to clarify the parties’ responsibilities in authenticating evidence. Without a consistent approach, the risk of misjudgment remains, and these challenges prolong trials, undermining judicial economy and wasting valuable resources. All of this highlights the urgent need for the judicial system to adapt to the modern realities of AI technology.

What’s Next: A New Era of Regulation and Responsibility?

Deepfakes force us to rethink the role of social media platforms in preventing the spread of misinformation. Platforms like Instagram, X, and Facebook have started labeling manipulated media, but should these warnings be mandatory? Should platforms be held liable for the spread of deepfakes? These are critical questions that legislators and tech companies must address. Moving forward, balancing regulation and innovation will be key. Policymakers must develop laws that protect victims without stifling technological progress. At the same time, raising public awareness about the risks of deepfakes is essential.

In a world where deepfake technology is becoming increasingly accessible, trust in digital media is more fragile than ever. Laws and policies are starting to catch up, but it will take time to fully address the legal challenges deepfakes present. Until then, the best defense is an informed public, aware of the power and pitfalls of artificial intelligence.


Navigating Torts in the Digital Age: When Social Media Conduct Leads to Legal Claims


Traditional tort law was developed in a world of face-to-face interactions, with the purpose of compensating harm, deterring wrongful conduct, and ensuring accountability for violations of individual rights. The digital age, however, has created new scenarios that often do not fit neatly within existing legal frameworks. This blog post explores how conduct on social media, whether intentional or accidental, can lead to tort claims such as defamation, right of publicity, or even battery, and how courts have applied tort law, sometimes unusually, to address these modern challenges.

Torts and Social Media: Where the Two Intersect

Some traditional tort claims, like defamation, may seem to extend naturally to social media. At the beginning of the social media age, however, courts struggled with how to address wrongful online conduct that harmed individuals, requiring creative legal thinking to apply existing laws to the digital world.

  1. Battery in the Digital Space: Eichenwald v. Rivello

One of the most compelling cases pushing the boundaries of tort law is Eichenwald v. Rivello. Kurt Eichenwald is a journalist with epilepsy who had publicly disclosed his condition and was a frequent critic of certain political and social issues; John Rivello is a social media user. Rivello, likely motivated by animosity over Eichenwald’s public commentary, sent Eichenwald a tweet with a GIF containing flashing strobe lights designed to trigger his epilepsy, with the accompanying message, “You deserve a seizure for your post.” When Eichenwald opened his Twitter notifications, the GIF triggered a seizure. The case posed a novel issue of law at the time: can sending a harmful image online constitute physical contact?

[Embedded article: “Trolls try to trigger seizures - is it assault?” (BBC News)]

Although battery traditionally requires physical contact, the Court in Eichenwald held that Rivello’s conduct met the elements of battery: the strobing GIF made indirect contact with Eichenwald’s cornea, undeniably causing him harm. In this case, the Court had to stretch traditional tort principles to accommodate a claim arising from digital conduct.

  2. Defamation and the Viral Nature of Social Media

Another tort commonly seen in social media cases is defamation. Because statements can be shared quickly with a wide audience, defamation has become the claim most frequently arising out of social media interactions. One situation we can analyze under this tort is the ‘Central Park Karen’ incident. In 2020, a bystander recorded Amy Cooper’s altercation with an African American birdwatcher and shared it online, where it went viral. Following the incident, her employer, Franklin Templeton, made a public statement condemning racism, and Cooper was fired.


Cooper sued for defamation, arguing that the viral video and public statements harmed her reputation. Unfortunately for her, the Court dismissed her claim, reasoning that the employer’s statements were opinions, which are protected under the First Amendment. The controversy serves as a cautionary tale about behavior not only online but also in public: conduct in public is now subject to recordings that can spread like wildfire. Cooper herself writes that the video still haunts her to this day.

As the dismissal of Cooper’s case exemplifies, the key to defamation claims is distinguishing false statements of fact from protected opinions, especially on social media, where free-flowing ideas and opinions can cause significant reputational harm. In the social media age, analyzing defamation claims requires balancing free speech against the protection of individuals’ reputations.

  3. Cancel Culture and Tortious Interference with Business Relations

Amy Cooper was, in a sense, “canceled,” though for conduct in the real world rather than on social media. The rise of cancel culture poses a threat to influencers and public figures who often rely on brand deals and partnerships for their livelihoods. In many controversies, the “cancellation” results from fair criticism of the public figure. But what happens when it results from false or harmful misinformation spread online? While defamation may be one avenue, tortious interference with business relations might also come into play.

[Image: an example fake tweet created using Tweetgen. Disclaimer: this tweet is a fake example and was not actually posted by NASA; it is used purely for illustrative purposes.]

Imagine an influencer who becomes the target of a viral campaign based on photoshopped offensive tweets. As the “screenshots” roam the internet, the influencer’s followers drop, brand deals are canceled, and new partnerships become difficult to secure. Because the false information disrupted business relationships, this scenario may give rise to a claim for tortious interference, especially if the false information was created maliciously to target the influencer’s success.

Tortious interference claims require showing that a third party intentionally caused harm to the plaintiff’s business relationships. In the context of social media, competitors or malicious individuals could spread misinformation that causes financial loss.

The Future of Torts and Social Media

As social media continues to shape how we communicate, courts face the challenge of adapting traditional tort law to new types of harm in the digital age. While many no longer consider social media a “new” concept, one can expect courts to similarly adapt old law to newer technologies, such as artificial intelligence. Cases like Eichenwald v. Rivello demonstrate how legal frameworks can be stretched to accommodate harm caused by online conduct, while defamation, tortious interference, and right of publicity claims highlight the real consequences of social media scandals. As we navigate social media spaces, it is important for individuals, whether influencers, content creators, or casual users, to recognize when their actions cross the line into actionable torts. Understanding the potential legal consequences of behavior both online and in public is essential for avoiding disputes and protecting rights in this rapidly changing environment.


Sports Leagues’ Regulation of Legal Matters on Social Media

The internet is becoming more accessible to individuals throughout the world, and with that access comes a growing population on social media platforms such as Facebook (Meta), X (formerly Twitter), Snapchat, and YouTube. These platforms provide an opportunity for engagement between consumers and producers.


Leagues such as MLB, the NFL, and La Liga have created accounts, establishing a presence in the social media world where they can interact with their fans (consumers) and their athletes (employees).

Why Social Media Matters in Sports.

As presence on social media platforms continues to grow, so does the need for businesses to market themselves there. Leagues such as MLB have therefore created policies for their employees and athletes to follow. Although MLB operates across the United States, it is a private organization; sports leagues are usually private organizations headquartered in a specific state, and MLB’s New York headquarters is where employees handle league matters. These organizations may create their own policies or guidelines and enforce them internally. Even so, organizations such as MLB must still abide by federal and state labor, corporate, criminal, and other bodies of law. The policies these leagues adopt help ensure that they comply with the laws necessary to operate on a national, and at times international, scale.

MLB’s Management of Social Media. 

MLB’s social media policy is prefaced by a paragraph explaining who within the league establishes it: “Consistent with the authority vested in the Commissioner by the Major League Constitution (“MLC”) and the Major League Baseball Interactive Media Rights Agreement (“IMRA”), the Commissioner has implemented the following policy regarding the use of social media by individuals affiliated with Major League Baseball and the 30 Clubs. Nothing contained in this policy is intended to restrict or otherwise alter any of the rights otherwise granted by the IMRA.” To assert its regulatory power over social media, the league relies on its Interactive Media Rights Agreement and its Commissioner. Such organizations generally have an elected commissioner who serves the organization and helps with executive managerial decisions.

The policy lists 10 explicit types of social media conduct that MLB prohibits (a few rules that stand out are listed below):

1. Displaying or transmitting Content via Social Media in a manner that reasonably could be construed as an official public communication of any MLB Entity or attributed to any MLB Entity.

2. Using an MLB Entity’s logo, mark, or written, photographic, video, or audio property in any way that might indicate an MLB Entity’s approval of Content, create confusion as to attribution, or jeopardize an MLB Entity’s legal rights concerning a logo or mark.

3. Linking to the website of any MLB Entity on any Social Media outlet in any way that might indicate an MLB Entity’s approval of Content or create confusion as to attribution.

NOTE: Only Covered Individuals who are authorized by the Senior Vice President, Public Relations of the Commissioner’s Office to use Social Media on behalf of an MLB Entity and display Content on Social Media in that capacity are exempt from Sections 1, 2 and 3 of this policy.

5. Displaying or transmitting Content that reasonably could be construed as condoning the use of any substance prohibited by the Major or Minor League Drug Programs, or the Commissioner’s Drug Program.

7. Displaying or transmitting Content that reasonably could be viewed as discriminatory, bullying, and/or harassing based on race, color, ancestry, sex, sexual orientation, national origin, age, disability, religion, or other categories protected by law and/or which would not be permitted in the workplace, including, but not limited to, Content that could contribute to a hostile work environment (e.g., slurs, obscenities, stereotypes) or reasonably could be viewed as retaliatory.

10. Displaying or transmitting Content that violates applicable local, state or federal law or regulations.


Notice that these policies apply to the organization as a whole, with exceptions for individuals whose role for the league involves social media. Those authorized workers are not bound by rules 1 through 3, but employees and athletes such as Shohei Ohtani remain bound.

Mizuhara/Ohtani Gambling Situation.

One of the biggest MLB stories this year was the illegal gambling situation involving Shohei Ohtani and his interpreter, Ippei Mizuhara. MLB’s policies strictly prohibit such gambling regardless of whether it is legal in the state where the athlete resides.

California has yet to legalize sports betting. To place a bet there, one would have to go through a bookie and bookkeeper rather than an application such as FanDuel, or go to a tribal location where gambling is lawfully administered.

Per the Commissioner’s orders, MLB launched an internal investigation, as the situation involved violations of league policies and even criminal acts. MLB may impose whatever punishment it deems fit at the end of the investigation. However, MLB’s Department of Investigations (DOI) can only do so much with the limited resources the league provides it to conduct investigations.

A federal investigation was also launched, and Ohtani was found to be a victim. The complaint lists multiple counts of bank fraud allegations. Investigators conducted a forensic review of Mizuhara’s phone and text messages, as well as those of the suspected bookkeepers. The evidence showed the individuals discussing ways to bet, how to earn and pay off debts, and the wiring of excessive amounts of money from banks.

What Does This All Mean?

The law and its administration are beginning to adapt to and acknowledge the presence of the internet. Phones and internet communications are now commonly seized as evidence in cases. The internet has become essential to daily life, so as a society we must decide whether we want limits placed on something we are effectively required to use, and whether we want limits on speech that depend on one’s employment.

Parents Using Their Children for Clicks on YouTube to Make Money

With the rise of social media, an increasing number of people have turned to these platforms to earn money. A report from Goldman Sachs reveals that 50 million individuals are making a living as influencers, and this number is expected to grow by 10% to 20% annually through 2028. Alarmingly, some creators are exploiting their children in the process by not giving them fair compensation.


How Do YouTubers Make Money? 

You might wonder how YouTubers make money from their videos. YouTube pays creators for views through ads that appear in their content: the more views they get, the more money they make. Advertisers pay YouTube a set rate for every 1,000 ad views; YouTube keeps 45% of the revenue while creators receive the remaining 55%. To earn money from ads, creators must be eligible for the YouTube Partner Program (YPP), which allows revenue sharing from ads played on the influencer’s content. On average, a YouTuber earns about $0.018 per view, which totals approximately $18 for every 1,000 views. As of September 30, 2024, the average annual salary for a YouTube channel in the United States is $68,714, with well-known YouTubers earning between $48,500 and $70,500 and top earners making around $89,000. Some successful YouTubers even make millions annually.
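To make that arithmetic concrete, here is a minimal sketch of the revenue model as described above. The per-view rate and the 45/55 split are this post’s cited estimates, not official YouTube figures, and the function names are purely illustrative.

```python
# Back-of-the-envelope model of YouTube ad revenue using the figures
# cited above (estimates from this post, not official YouTube rates).

CREATOR_SHARE = 0.55        # creators keep 55%; YouTube keeps 45%
EARNINGS_PER_VIEW = 0.018   # ~$0.018 earned by the creator per view

def creator_earnings(views: int) -> float:
    """Estimate what the creator takes home for a given number of views."""
    return views * EARNINGS_PER_VIEW

def implied_ad_spend(views: int) -> float:
    """Gross advertiser spend implied by the creator's 55% share."""
    return creator_earnings(views) / CREATOR_SHARE

for views in (1_000, 100_000, 1_000_000):
    print(f"{views:>9,} views: creator ~${creator_earnings(views):,.2f}, "
          f"gross ad spend ~${implied_ad_spend(views):,.2f}")
```

At 1,000 views this reproduces the roughly $18 creator payout cited above.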

Ad revenue is paid out through sources like AdSense, which also pays an average of $18 per 1,000 ad views; however, only about 15% of total video views typically include the 30 seconds of watch time required for an ad view to qualify for payment. Many YouTubers also sell merchandise such as t-shirts, sweatshirts, hats, and phone cases, and channels with over 1 million subscribers often have greater opportunities for sponsorships and endorsements. Given the profit potential, parents may be motivated to create YouTube videos that attract significant views. Popular genres featuring kids include unboxing and reviewing new toys, demonstrating how certain toys work, participating in challenges or dares, creating funny or trick videos, and performing trending TikTok dances.
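If only about 15% of raw views yield a qualifying ad view, the effective rate per 1,000 raw views is far lower than $18. A quick sketch under that assumption (both numbers are the post’s estimates):

```python
# Effective earnings per 1,000 raw video views if only ~15% of views
# include a qualifying (30+ second) ad view, at $18 per 1,000 ad views.
# Both figures are the estimates cited in this post.

RATE_PER_1000_AD_VIEWS = 18.00
QUALIFYING_FRACTION = 0.15

raw_views = 1_000
ad_views = raw_views * QUALIFYING_FRACTION              # 150 qualifying ad views
earnings = ad_views / 1_000 * RATE_PER_1000_AD_VIEWS    # -> $2.70
print(f"~${earnings:.2f} per {raw_views:,} raw views")
```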


Child Labor Laws Relating to Social Media 

Only a few states have established labor laws specifically for child content creators, with California and Illinois being noteworthy examples. Illinois was one of the first states to implement such regulations, spurred by 16-year-old Shreya Nallamothu, who brought the issue of parents profiting from their children’s appearances in their content to the attention of Governor J.B. Pritzker. Shreya noted that she “kept seeing cases of exploitation” during her research and felt compelled to act. In a local interview, she explained that her motivation for the change was triggered by “…very young children who may not understand what talking to a camera means, they can’t grasp what a million viewers look like. They don’t comprehend what they’re putting on the internet for profit, nor that it won’t just disappear, and their parents are making money off it.”

As a result, Illinois passed Illinois Law SB 1782, which took effect on July 1, 2024. This law mandates that parent influencers compensate their children for appearing in their content. It amends the state’s Child Labor Law to include children featured in their parents’ or caregivers’ social media. Minors 16 years old and under must be paid 15% of the influencer’s gross earnings if they appear in at least 30% of monetized content. Additionally, they are entitled to 50% of the profits based on the time they are featured. The adult responsible for creating the videos is required to set aside the gross earnings in a trust account within 30 days for the child to access when they turn 18. The law also grants children the right to request the deletion of content featuring them. This part of the legislation is a significant step in ensuring that children have some control over the content that follows them into adulthood. If the adult fails to comply, the minor can sue for damages once they become adults. Generally, children who are not residents of Illinois can bring an action under this law as long as the alleged violation occurred within Illinois, the law applies to the case, and the court has jurisdiction over the parent (defendant).
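As a rough illustration, the appearance threshold and set-aside share described above can be modeled in a few lines. This is a simplified sketch of the post’s summary of SB 1782, not the statute’s exact formula or legal advice; it models only the 15% gross-earnings rule, not the separate time-based profit share, and the function name is hypothetical.

```python
# Illustrative only: the appearance-threshold rule as summarized above
# (featured in >= 30% of monetized content -> 15% of gross earnings
# set aside in trust within 30 days). Not the statute's exact formula.

APPEARANCE_THRESHOLD = 0.30
TRUST_SHARE = 0.15

def trust_set_aside(gross_earnings: float, featured_videos: int,
                    monetized_videos: int) -> float:
    """Amount owed to the minor's trust, or 0 if the threshold isn't met."""
    if featured_videos / monetized_videos >= APPEARANCE_THRESHOLD:
        return gross_earnings * TRUST_SHARE
    return 0.0

# Example: a child appears in 40 of 100 monetized videos on a channel
# grossing $68,714 (the average salary cited earlier).
print(f"${trust_set_aside(68_714, 40, 100):,.2f}")  # -> ~$10,307.10
```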

California was the second state to pass a law on this. The California Content Creator Rights Act, authored by Senator Steve Padilla (D-San Diego), passed in August 2024. It requires influencers who feature minors in at least 30% of their videos to set aside a proportional percentage of their earnings in a trust for the minor to access upon reaching adulthood. The California bill is broader than Illinois’s, but both aim to ensure that minors featured in content receive fair financial benefit from the use of their image.
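For a side-by-side view of the two set-aside rules as this post describes them, here is a simplified comparison of Illinois’s flat share against California’s proportional share; neither function reproduces the statutes’ exact terms.

```python
# Simplified comparison of the two trust set-aside rules as described
# in this post; illustrative only, not the statutes' actual formulas.

def illinois_set_aside(gross: float, appearance_rate: float) -> float:
    # Flat 15% of gross earnings once the 30% appearance threshold is met.
    return gross * 0.15 if appearance_rate >= 0.30 else 0.0

def california_set_aside(gross: float, appearance_rate: float) -> float:
    # A percentage proportional to how much of the content features the minor.
    return gross * appearance_rate if appearance_rate >= 0.30 else 0.0

for rate in (0.30, 0.50, 0.80):
    print(f"featured in {rate:.0%} of videos: "
          f"IL ${illinois_set_aside(100_000, rate):,.0f} vs "
          f"CA ${california_set_aside(100_000, rate):,.0f}")
```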

There is hope that other states will follow Illinois and California in passing laws that give child influencers fair financial benefits for the use of their image in their parents’ videos. Parents should not exploit their kids by making a profit off of them.


Can Social Media Platforms Be Held Legally Responsible If Parents Do Not Pay Their Children? 

Social media platforms will probably not be held liable because of Section 230 of the Communications Decency Act of 1996. This law protects social media platforms from being held accountable for users’ actions and instead holds the user who made the post responsible for their own words and actions. For example, if a user posts defamatory content on Instagram, the responsibility lies with the user, not Instagram.  

Currently, the only states that have requirements for parent influencers to compensate their children featured on their social media accounts are Illinois and California. If a parent in these states fails to set aside money for their child as required by law, most likely only the parent will be held liable. It is unlikely that social media platforms will be held responsible for violations by the parent because of Section 230.
