The New Border: Immigration Law in the Age of Social Media Monitoring

In today’s digital world, where much of public discourse takes place online, the intersection between social media and immigration law has become increasingly critical. From viral debates over “migrant bashing” posts to visa revocations tied to online activism, social media now serves both as a platform for immigrant voices and as a frontier for government surveillance.

Social Media Monitoring & Immigration

Recent policy developments confirm that U.S. immigration authorities are not only observing social media activity but actively using it to inform decisions.

On April 9, 2025, U.S. Citizenship and Immigration Services (USCIS) announced that it would begin considering antisemitic activity on social media platforms when evaluating immigration benefit applications. This policy immediately affected green card applicants, international students, and others seeking immigration benefits.

“USCIS will consider social media content that indicates an alien endorsing, espousing, promoting, or supporting antisemitic terrorism, antisemitic terrorist organizations, or other antisemitic activity as a negative factor in any USCIS discretionary analysis when adjudicating immigration benefit requests.”

This marks a significant shift: adjudications that once turned on traditional factors like criminal history or fraud now also assess online speech and ideology. It reflects a growing willingness to treat moral or political expression, once considered private and protected, as a legitimate basis for immigration decisions.

These “discretionary analyses” primarily affect benefit applications such as adjustment of status, asylum, and visa renewals, where officers have broad authority to evaluate an applicant’s moral character and other subjective factors.

ICE and Algorithmic Surveillance

Meanwhile, U.S. Immigration and Customs Enforcement (ICE) continues to expand its social media surveillance capabilities. ICE contracts with private technology companies to build AI-driven systems that scrape and analyze public posts, images, and online networks across multiple languages. These systems search for “threat indicators” or potential immigration violations, flagging accounts through pattern recognition and linguistic analysis.
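
The mechanics are easier to see in miniature. Below is a deliberately toy sketch, in Python, of how a keyword-weighting screener might flag accounts for review; the actual vendor systems are proprietary, and every pattern, weight, and threshold here is invented for illustration.

```python
import re

# Hypothetical "threat indicator" patterns and weights -- invented for
# illustration; real vendor systems are proprietary and far more complex.
INDICATOR_PATTERNS = {
    r"\bprotest\b": 1.0,
    r"\bboycott\b": 1.5,
    r"\bcross the border\b": 3.0,
}
FLAG_THRESHOLD = 2.5  # arbitrary cutoff for escalating to human review

def score_post(text: str) -> float:
    """Sum the weights of every indicator pattern found in one post."""
    lowered = text.lower()
    return sum(weight for pattern, weight in INDICATOR_PATTERNS.items()
               if re.search(pattern, lowered))

def flag_account(posts: list[str]) -> bool:
    """Flag the account if any single post crosses the threshold."""
    return any(score_post(post) >= FLAG_THRESHOLD for post in posts)

# A sarcastic joke and a sincere call to action score identically --
# the matcher has no access to context, tone, or translation nuance.
print(flag_account(["Join the protest! Boycott the bookstore's prices."]))  # True
```

Even this toy version makes the accountability problem concrete: an account can be flagged by a single mis-weighted pattern, and unless the patterns and weights are disclosed, the person flagged has nothing to contest.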

ICE’s Open Source Intelligence program relies on vendors such as Palantir and ShadowDragon to automate the collection and analysis of social media data for enforcement leads. Because these algorithms are secretive and often shielded from public records laws like the Freedom of Information Act (FOIA), immigrants frequently have no way to learn what online data was used against them or to challenge errors.

Observers describe this trend as part of a broader “tech-powered enforcement” model, in which digital footprints shape immigration outcomes. In effect, a digital border has emerged, one that exists not at airports or checkpoints but within the virtual spaces people inhabit every day.

Speech and Expanding Risk

The implications are profound. A noncitizen’s tweets, Facebook posts, or even tagged photos can be scrutinized and used as evidence in visa adjudications or deportation proceedings.

This pervasive monitoring encourages self-censorship. Immigrants and lawful permanent residents may delete posts, avoid political discussion, or disengage from activism online out of fear that a misunderstood comment could threaten their status. What once felt like ordinary self-expression now carries real legal risk.

As the Brennan Center for Justice warns, vague or discretionary standards create chilling effects on speech by making it impossible to predict how officials will interpret online expression.

“the April 9 notice is likely to quell speech, discouraging immigrants and non-immigrants who are lawfully seeking a variety of immigration benefits … from taking part in a wide range of constitutionally protected activity for fear of retaliation. And its smorgasbord of vague terms, many with no legally recognized meaning, enables USCIS officers to exercise nearly unchecked discretion in determining when to reject an otherwise unobjectionable application for a benefit …”

The First Amendment and Ideological Vetting

This new surveillance landscape raises pressing First Amendment concerns. Although noncitizens do not enjoy the full range of constitutional protections, courts have long held that the government may not condition immigration benefits on ideological conformity. Social media vetting, however, blurs that line, turning online expression into a proxy for moral or political loyalty tests.

Courts have long struggled to balance the executive’s plenary power over immigration with First Amendment concerns raised by ideological exclusions. In Kleindienst v. Mandel (1972), the Supreme Court upheld the government’s exclusion of a Belgian Marxist scholar, deferring to the executive’s authority over immigration even when the denial indirectly burdened U.S. citizens’ right to receive information and ideas. Decades later, in American Academy of Religion v. Napolitano (2009), the Second Circuit reaffirmed that while the executive retains broad power, it cannot rely on secret or arbitrary rationales for ideological exclusions. Together, these cases highlight the unresolved tension between immigration control and free speech protections.

Case Study: Mahmoud Khalil

The collision of social media, political activism, and immigration enforcement is sharply illustrated in the case of Mahmoud Khalil.

Mahmoud Khalil, a lawful permanent resident and recent Columbia University graduate, was arrested by ICE in New York in March 2025 after participating in pro-Palestinian demonstrations. He was detained in Louisiana for over three months pending removal proceedings.

The government cited Immigration and Nationality Act (INA) § 237(a)(4)(C)(i), a rarely used provision allowing deportation of a noncitizen whose “presence or activities” are deemed to have “potentially serious adverse foreign policy consequences.” The evidence reportedly consisted of a brief, undated letter referencing Khalil’s activism and supposed foreign policy concerns.

Khalil’s attorneys argued that he was targeted not for any criminal conduct but for his speech, association, and protest activity, both on campus and online, raising serious First Amendment and due process issues.

 In May 2025, a federal judge found the statute likely unconstitutional as applied, and Khalil was released after 104 days in detention. 

The Future of the Digital Border

As immigration enforcement integrates algorithmic surveillance, the border is no longer confined to geography. It exists everywhere a user logs in. This new reality challenges long-standing principles of due process, privacy, and free expression.

Whether justified under national security, anti-hate policies, or fraud prevention, social media vetting transforms immigration law into a form of ideological policing. The challenge for policymakers is to balance legitimate screening needs with fundamental rights in an age when one tweet can determine a person’s future.

Cases like Mahmoud Khalil’s reveal how online activism can trigger enforcement actions that test the limits of constitutional and civil liberties protections. Legal scholars and advocates have urged Congress and the Department of Homeland Security (DHS) to establish clearer rules ensuring transparency in algorithms, limiting ideology-based denials, and mandating bias audits of surveillance tools.

Future litigation will test how the First Amendment and due process doctrines evolve in an age where immigration enforcement operates through data analytics rather than physical checkpoints.

Ultimately, the key questions we must ask ourselves are:

To what extent can authorities treat social media activism as a legitimate factor in visa or green card adjudications?

Does using immigration law to penalize online speech amount to viewpoint discrimination?

The answers will shape not only the future of immigration law but the very boundaries of free speech in the digital age.

From Record Stores to FYPs: Social Media’s Impact on the Music Industry

Who remembers having to go out and buy a record, an 8-track, or a cassette tape? How about a CD, or asking their parents if they could buy the newest songs on iTunes? I sure do, but today many kids turn to TikTok and other social media platforms to hear the latest songs. But what happens to the music that is used in these viral dances or over a post? Is it free to use just because everything is now digitized, or are there still protections for artists and their music once it hits social media?

Since its inception, social media has played a role in musicians finding their big break online. Starting with Myspace in the early 2000s, huge stars like Calvin Harris, Adele, and even Sean Kingston used the platform to their advantage. They grew their fanbases, contacted record labels, and put their music out for the world to hear. One of the most well-known internet success stories of this generation is Justin Bieber and his discovery on YouTube. A cover of a Chris Brown song, recorded when he was just 13 years old, caught the attention of a music executive, and the rest was history. Justin Bieber is one of the biggest household names of this generation, named the 8th Greatest Pop Star of the 21st Century by Billboard Canada in 2024. Justin, however, wasn’t the only success story. Ed Sheeran, 5 Seconds of Summer, Charlie Puth, Tate McRae, and so many other artists found their success by posting covers, originals, and other content on YouTube in the hopes of getting discovered like Justin Bieber had.

Following and alongside YouTube success came the wave of artists discovered on the hit platform Vine. Vine, unlike YouTube, did not host full-length videos. In 2012 it took the world by storm with six-second videos that played on a loop, so if you blinked…don’t worry, it would play again. In 2013 many young aspiring stars again took to posting on the platform with hopes of landing that one perfect video, but now they had only six seconds to impress. Shawn Mendes began posting on the app near its inception, sharing clips of himself singing covers while playing guitar.

“On Vine, Mendes posted a video of himself playing guitar while singing the hook to Bieber’s song “As Long As You Love Me” and received 10,000 likes overnight. He followed up with covers of Bruno Mars and other pop singers, and, by the spring, when Island and Massey came calling, he had already amassed over 2.5 million followers on the service.”

Mendes went on to record a hit song with Justin Bieber, “Monster,” in which the two showed off their different styles and told a story about the hardships that come with fame.

After Vine was shut down, artists turned back to other social media platforms to put out their music. Then the COVID-19 pandemic hit, and TikTok entered the scene. Like Vine, TikTok had short videos that played on a loop, though this time they ran about 15 to 30 seconds when the app first gained traction in the US. Artists could post videos of viral dances, cover music, or even daily get-ready-with-me videos.

Again, TikTok produced up-and-coming stars we know today, such as Olivia Rodrigo, Lil Nas X, and Alex Warren, whose careers exploded once their songs became part of a viral trend or were picked from the platform’s “Trending” sounds in the sound library.

This is great, right!? All of these people using what is right at their fingertips to put themselves out there and make their dreams come true. But what happens when these viral songs are used without the proper licensing, or when they infringe on copyright law? The issue has been on the rise as companies, schools, and influencers make ever-greater use of social media videos for promotion. So, let’s talk about it.

First, what is copyright law?

“Copyright is a type of intellectual property that protects original works of authorship as soon as an author fixes the work in a tangible form of expression.”

This includes paintings, photographs, illustrations, musical compositions, sound recordings, computer programs, books, poems, blog posts, movies, architectural works and so much more!

So, what if you want to use a copyrighted work? Don’t panic! The fair use doctrine provides that certain uses of these works are allowed.

“Fair use is a legal doctrine that promotes freedom of expression by permitting the unlicensed use of copyright-protected works in certain circumstances. Section 107 of the Copyright Act provides the statutory framework for determining whether something is a fair use and identifies certain types of uses—such as criticism, comment, news reporting, teaching, scholarship, and research—as examples of activities that may qualify as fair use.”

Section 107 calls for consideration of the following four factors in evaluating a question of fair use:

  1. the purpose and character of the use, including whether the use is of a commercial nature or is for nonprofit educational purposes;
  2. the nature of the copyrighted work;
  3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
  4. the effect of the use upon the potential market for or value of the copyrighted work.

However, even with these laws in place, recent cases keep arising of music being used in commercials and TikTok videos without proper licensing agreements in place. It is not only big companies that face copyright infringement suits, but also the influencers posting the content on behalf of the brands.

There have been several major cases in recent years. Here are a few.

Sony Music Entertainment v. Marriott. In this case, Sony alleged that Marriott’s social media pages featured hundreds of videos using Sony recordings without a license. Sony sought to hold Marriott liable for its own posts as well as posts made by influencers and Marriott-franchised hotels. Sony claimed that it was entitled to more than $139,000,000 in statutory damages, as well as an injunction. The case was eventually dismissed with prejudice.

Sony Music Entertainment v. Gymshark. Sony claimed unauthorized use of 297 works in online advertisements posted by Gymshark and its influencers, including music by Harry Styles, Beyoncé, and Britney Spears in Instagram and TikTok posts. This case was also dismissed with prejudice.

Music Publishers v. NBA.

“In July of 2024, Kobalt Music Publishing America, Inc. and other music companies filed suit against 14 NBA teams in the US District Court for the Southern District of New York, in the latest ongoing battle between music publishers and organizations that allegedly use copyrighted material without proper authorization. These [teams] engaged in unauthorized use of copyrighted music in social media postings on Instagram, TikTok, X, YouTube, and Facebook; [the publishers] are seeking to protect their intellectual property rights and ensure that their works are not exploited without due compensation.”

Sony Music Entertainment v. USC. Sony had previously warned the university about its use of unauthorized music in its posts. These posts were gaining major traction, helping the school promote games and events on campus.

“The lawsuit … cited 283 videos with songs from musicians including Michael Jackson, Britney Spears and AC/DC that USC’s sports teams supposedly used in TikTok and Instagram posts without licenses. Sony Music asked for statutory copyright damages of $150,000 per song, amounting to tens of millions of dollars in damages.” This case is still ongoing.

Warner Music Group v. DSW. This case again involves a company’s use of music in its ads and on social media, and by its influencers, without the proper licensing in place. Warner said that the musical works allegedly infringed by DSW were “some of the most popular sound recordings and musical compositions in the world.”

Although influencer marketing has helped many companies grow on social media through the years, without the proper licensing it leaves these companies and influencers vulnerable to copyright infringement claims. Notably, Universal Music Group, one of the world’s largest record labels, pulled all of its music from TikTok due to licensing issues with the platform. This affected videos featuring songs by Billie Eilish, Drake, Taylor Swift, and other big-name artists. UMG and TikTok eventually struck a deal, but while they worked things out, TikTok went silent on those sounds for nearly three months. So, what can influencers and apps do to limit their liability and risk of infringement?

First, social media companies can update their terms of service, which TikTok has done, to help users avoid suits. Influencers posting promotional content, such as an advertisement, usually need two different kinds of licenses: a synchronization license and a master use license.

A synchronization, or sync, license is “required to pair a musical composition (i.e. the song) with visual content. It must be obtained from the copyright holder, which is usually the music publisher… To make things more complicated, a commercial song can often be co-owned by multiple copyright holders, which is why brands often partner with specialist music clearance agencies to obtain the necessary rights.”

A master use license is “needed if the brand wishes to use a specific recording of a song. It must be obtained from the owner of the recording – usually, a record label.”

By obtaining the proper licensing before posting, influencers and brands can post freely without risking copyright infringement, a takedown of their post, or even a lawsuit. Platforms like TikTok license music from record labels so that properly licensed songs can be used through the platform’s sound library.

So while social media is where many incredible artists have found their fame, once those artists have recorded their hit albums, a platform must properly license their music from the record labels or risk being taken to court for copyright infringement, with consequences not only for the platform but also for its users, the artists, and the labels.

Regulating the Scroll: How Lawmakers Are Redefining Social Media for Minors

In today’s digital world, the question is no longer if minors use social media but how they use it. 

Social media platforms don’t just host young users; they shape their experiences through algorithmic feeds and “addictive” design features that keep kids scrolling long after bedtime. As the mental health toll becomes increasingly clear, lawmakers are stepping in to limit how much control these platforms have over young minds.

What is an “addictive” feed and why target it? 

Algorithms don’t just show content; they promote it. By tracking what users click, watch, or like, these feeds are designed to keep attention flowing. For minors, that means endless scrolling and constant engagement, typically at the expense of sleep, focus, and self-esteem.

Under New York’s Stop Addictive Feeds Exploitation (SAFE) for Kids Act, lawmakers found that:

 “social media companies have created feeds designed to keep minors scrolling for dangerously long periods of time.”

The Act defines an “addictive feed” as one that recommends or prioritizes content based on data linked to the user or their device.
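
In rough code terms, the statutory line looks something like the sketch below: a feed becomes “addictive” under the Act’s definition the moment ranking depends on data linked to the user, while the permitted default simply sorts by recency. This is an illustrative sketch under those assumptions, not any platform’s actual system.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    timestamp: float  # seconds since epoch
    topic: str

@dataclass
class UserProfile:
    # Data "linked to the user" in the Act's sense: inferred interests.
    topic_affinity: dict[str, float] = field(default_factory=dict)

def addictive_feed(posts: list[Post], user: UserProfile) -> list[Post]:
    """Recommends content based on user-linked data -- the regulated feed."""
    return sorted(posts,
                  key=lambda p: user.topic_affinity.get(p.topic, 0.0),
                  reverse=True)

def chronological_feed(posts: list[Post]) -> list[Post]:
    """Sorts by recency only; no user data enters the ranking."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)
```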

The harms aren’t hypothetical. Studies link heavy social media use among teens with higher rates of depression, anxiety, and sleep disruption. Platforms often push notifications late at night or during school hours, times when young users are most vulnerable.

Features like autoplay, “For You” pages, endless “you may also like” suggestions, and quick likes or comments can trap kids in an endless scroll. What begins as fun, harmless entertainment soon becomes a routine they struggle to escape.

Key Developments in Legislation 

It’s no surprise that minors’ exposure to social media algorithms sits at the center of today’s policy debates. Over the past two years, state and federal lawmakers have introduced laws seeking to rein in the “addictive” design features of online platforms. While many of these measures face ongoing rulemaking or constitutional challenges, together they signal a national shift toward stronger regulation of social media’s impact on youth.

Let’s take a closer look at some of the major legal developments shaping this issue.

New York’s SAFE for Kids Act

New York’s Stop Addictive Feeds Exploitation (SAFE) for Kids Act represents one of the nation’s most ambitious efforts to regulate algorithmic feeds. The law prohibits platforms from providing “addictive feeds” to users under 18 unless the platform obtains verifiable parental consent or reasonably determines that the user is not a minor. It also bans push notifications and advertisements tied to those feeds between 12 a.m. and 6 a.m. unless parents explicitly consent. The rulemaking process remains ongoing, and enforcement will likely begin once these standards are finalized.

The Kids Off Social Media Act (KOSMA)

At the federal level, the Kids Off Social Media Act (KOSMA) seeks to create national baselines for youth protections online. Reintroduced in Congress, the bill would:

  • Ban social media accounts for children under 13.
  • Prohibit algorithmic recommendation systems for users under 17.
  • Restrict social media access in schools during instructional hours.

Supporters argue the bill is necessary to counteract the addictive nature of social media design. Critics, including digital rights advocates, question whether such sweeping restrictions could survive First Amendment scrutiny or prove enforceable at scale. 

KOSMA remains pending in Congress but continues to shape the national conversation about youth and online safety.

California’s SB 976 

California’s Protecting Our Kids from Social Media Addiction Act (SB 976) reflects a growing trend of regulating design features rather than content. The law requires platforms to:

  • Obtain parental consent before delivering addictive feeds to minors.
  • Mute notifications for minors between midnight and 6 a.m. and during school hours unless parents opt in.

The statute is currently under legal challenge for potential First Amendment violations; however, the Ninth Circuit allowed enforcement of key provisions to proceed, suggesting that narrowly tailored design regulations aimed at protecting minors may survive early constitutional scrutiny.

Other State Efforts

Other states are following suit. According to the National Conference of State Legislatures (NCSL), at least 13 states have passed or proposed laws requiring age verification, parental consent, or restrictions on algorithmic recommendations for minors. Mississippi’s HB 1126, for example, requires both age verification and parental consent, and the U.S. Supreme Court allowed the law to remain in effect while litigation continues. 

Final Thoughts

We are at a pivotal moment. The era when children’s digital consumption went largely unregulated is coming to an end. The question now isn’t whether regulation is on the horizon; it’s how it will take shape, and whether it can strike the right balance between safety, free expression, and innovation.

As lawmakers, parents, and platforms navigate this evolving landscape, one challenge remains constant: ensuring that efforts to protect minors from harmful algorithmic design do not come at the expense of their ability to connect, learn, and express themselves online.

What do you think is the right balance between protecting minors from harmful algorithmic exposure and preserving their access to social media as a space for connection and expression?

From Cute to Concerning: The Legal and Emotional Costs of Sharenting

After a long day at work, most people now sit down for a nice relaxing…scroll. That’s right, most people have social media and enjoy going through the latest posts to wind down or pass the time. Whether it’s on Instagram, Facebook, or TikTok, someone is looking at a post made by a parent displaying their child doing something adorable or funny, documenting a family trip, or marking a milestone like the first day of school. What seems like an innocent post can be something much darker.

What is Sharenting?

As social media gained traction in recent years, so did sharenting. Sharenting is when a parent overshares or excessively posts information, pictures, stories, or updates about their child’s life.

A proud parent could post the smiling face of their child at a sporting event on a private account, thinking only family and friends will see it. Some parents even post daily vlogs involving their children, making money by filming their day-to-day lives for strangers. Most parents engage in sharenting because they are proud of their child. Some want to build a digital archive or connect with loved ones. Others are trying to build camaraderie with other parents, or even to help others. Most parents do this with the purest motives in mind; however, their content is not always received as it is intended.

The Risks of Sharenting

Legal Risks

As established in Troxel v. Granville, parents have a fundamental right to raise their children as they see fit. This includes education, religion, and even social media. Parents have a First Amendment right to speech just as much as a child does when it comes to posting online. Parents’ posting of videos and pictures of their children is protected under the First Amendment; however, this right is not unlimited. Restrictions apply in certain circumstances, such as under child exploitation laws or other compelling state interests.

Children also have a right to privacy that conflicts with their parents’ First Amendment right of speech and expression in the context of posting them online. The Children’s Online Privacy Protection Act (COPPA) established significant protections for children’s online privacy. COPPA imposes certain requirements on operators of websites or online services directed to children under 13 years of age, and on operators of other websites or online services that have actual knowledge that they are collecting personal information online from a child under 13 years of age.

COPPA, however, only protects children’s data, not the child themselves from the risks of being online.

Psychological Risks

In addition to the legal risks of sharenting, there are also many psychological risks. What happens when a parent posts that one picture that comes back to haunt their child later on? These videos and images can be used by other students to bully the child down the road. Children can have a harder time developing their own image and identity when they are prescribed an online persona by their parents through their posts.

Even with pure motives, a survey of parents discussed by Dr. Albers of the Cleveland Clinic found that:

74% of parents using social media knew another parent engaging in sharenting behavior.

56% said the parents shared embarrassing information about their kid.

51% said the parent provided details that revealed their child’s location.

27% said the parent circulated inappropriate photos.

These posts, once made, are out there forever, and their impact can be detrimental to a child’s mental health. Social media, according to the Mayo Clinic, already amplifies adolescents’ anxiety and depression. Parents can add to this by sharenting.

Other Risks

These seemingly innocent posts can often pose greater risks to children than most parents realize. Beyond negative psychological impacts, sharenting can endanger a child’s physical safety. Sharenting is a window directly into a child’s life, one which a predator can abuse. Images can be taken from parents’ accounts and shared to sites for pedophiles.

Stolen images can also enable identity theft, harassment, bullying, exploitation, and even violence.

Parents who have become famous from posting their kids, like the LaBrant family and the Fishers, have increased their kids’ risk of being subjected to one of these crimes by constantly posting them online.

Sharenting can blur the line between a fun post and advertising your child to strangers, in extreme situations creating dangerous environments for internet-famous children.

Parents are also contributing to their child’s digital identity, which could impact future educational and employment prospects. It could also lead to embarrassment over content that was shared and cannot be removed.

How Can Parents Protect Their Kids?

As social media continues to grow and be a part of our daily lives, parents can take action to protect their children going forward. One way parents can do this is by blurring or covering their child’s face with an emoji. Parents can still have the excitement of posting their child’s achievements or milestones without exposing their identity to the internet.

Parents can think before they post.

If you’re trying to decide whether a post counts as sharenting, ask yourself these questions:

What’s the content?

Why am I posting it?

Who’s my intended audience? Have I set my permissions accordingly?

Is my child old enough to understand the concept of a digital footprint? If they are, did I ask their consent? If not, do I think they’d be happy to see this online when they’re older?

Sharenting is not going to stop, but it can evolve to be done in a way that protects a parent’s right to post and their child’s safety.

Privacy Please: Privacy Law, Social Media Regulation and the Evolving Privacy Landscape in the US

Social media regulation is a touchy subject in the United States. Congress and the White House have proposed, advocated, and voted on various bills aimed at protecting people from data misuse and misappropriation, misinformation, harms suffered by children, and the implications of vast data collection. Some of the most potent concerns about social media stem from the platforms’ use and misuse of information, from the method of collection to notice of collection and use of collected information. Efforts to pass a bill regulating social media have been frustrated, primarily by the First Amendment right to free speech. Congress has thus far failed to enact meaningful regulation of social media platforms.

The way forward may well be through privacy law. Privacy laws give people some right to control their own personhood, including their data, their right to be left alone, and how and when others see and view them. Privacy law originated in its current form in the late 1800s, with the impetus being freedom from constant surveillance by paparazzi and reporters and the right to control one’s own personal information. As technology evolved, our understanding of privacy rights grew to encompass rights in our likeness, our reputation, and our data. Current US privacy laws do not directly address social media, and a struggle is currently playing out between the vast data collection practices of the platforms, immunity for platforms under Section 230, and private rights of privacy for users.

There is very little federal privacy law, and what does exist is narrowly tailored to specific purposes and circumstances in the form of specific bills. Some states have enacted their own privacy law schemes, California at the forefront, with Virginia, Colorado, Connecticut, and Utah following in its footsteps. In the absence of a comprehensive federal scheme, privacy law is often judge-made and offers several private rights of action for a person whose right to be left alone has been invaded in some way. These are tort actions available for one person to bring against another for a violation of their right to privacy.

Privacy Law Introduction

Privacy law policy in the United States is premised on three fundamental personal rights to privacy:

  1. Physical right to privacy – the right to control your own information.
  2. Privacy of decisions – such as decisions about sexuality, health, and child-rearing. These are the constitutional rights to privacy, typically not about information but about an act that flows from the decision.
  3. Proprietary privacy – the ability to protect your information from being misused by others in a proprietary sense.

Privacy Torts

Privacy law, as it concerns the individual, gives rise to four separate tort causes of action for invasion of privacy:

  1. Intrusion upon seclusion – Privacy law provides a tort cause of action for intrusion upon seclusion when someone intentionally intrudes upon the reasonable expectation of seclusion of another, physically or otherwise, and the intrusion is objectively highly offensive.
  2. Publication of private facts – One gives publicity to a matter concerning the private life of another that is not of legitimate concern to the public, and the matter publicized would be objectively highly offensive. The First Amendment provides a strong defense for publication of truthful matters when they are considered newsworthy.
  3. False light – One who gives publicity to a matter concerning another places the other before the public in a false light when the false light would be objectively highly offensive and the actor had knowledge of, or acted in reckless disregard as to, the falsity of the publicized matter and the false light in which the other would be placed.
  4. Appropriation of name and likeness – Appropriation of one’s name or likeness to the defendant’s own use or benefit. There is no appropriation when a person’s picture is used to illustrate a non-commercial, newsworthy article. This tort is usually commercial in nature but need not be; the appropriation could be of “identity.” It need not be misappropriation of a name; it could be of the reputation, prestige, social or commercial standing, public interest, or other value in the plaintiff’s likeness.

These private rights of action are currently unavailable for use against social media platforms because of Section 230 of the Communications Decency Act, which provides broad immunity to online providers for posts on their platforms. Section 230 prevents any of the privacy torts from being raised against social media platforms.

The Federal Trade Commission (FTC) and Social Media

Privacy law can implicate social media platforms when their practices become unfair or deceptive to the public, through investigation by the Federal Trade Commission (FTC). The FTC is the only federal agency with both consumer protection and competition jurisdiction in broad sectors of the economy, and it investigates business practices that are unfair or deceptive. Section 5 of the FTC Act, 15 U.S.C. § 45, prohibits “unfair or deceptive acts or practices in or affecting commerce” and grants the FTC broad jurisdiction over the privacy practices of businesses. A trade practice is unfair if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers themselves and is not outweighed by countervailing benefits to consumers or competition. A deceptive act or practice is a material representation, omission, or practice that is likely to mislead a consumer acting reasonably in the circumstances, to the consumer’s detriment.

Critically, there is no private right of action under the FTC Act. The FTC cannot impose fines for Section 5 violations but can obtain injunctive relief. By design, the FTC has very limited rulemaking authority and looks to consent decrees and procedural, long-lasting relief as an ideal remedy. The FTC pursues several types of misleading or deceptive policies and practices that implicate social media platforms: notice and choice paradigms, broken promises, retroactive policy changes, inadequate notice, and inadequate security measures. Its primary objective is to negotiate a settlement in which the company submits to certain measures of FTC oversight for a period of time. Violations of these agreements can yield additional consequences, including steep fines and vulnerability to class action lawsuits.

Relating to social media platforms, the FTC has investigated misleading terms and conditions and violations of platforms’ own policies. In In re Snapchat, the platform claimed that users’ posted information disappeared completely after a certain period of time; however, through third-party apps and manipulation of users’ posts off the platform, posts could be retained. The FTC and Snapchat settled through a consent decree subjecting Snapchat to FTC oversight for 20 years.

The FTC has also investigated Facebook for violating its own privacy policy. Facebook was ordered to pay a $5 billion penalty and to submit to new restrictions and a modified corporate structure that will hold the company accountable for the decisions it makes about its users’ privacy, settling FTC charges that it violated a 2012 agreement with the agency.

Unfortunately, none of these measures directly gives individuals more power over their own privacy. Nor do these policies and processes give individuals any right to hold platforms responsible for misleading them through algorithms that use their data, or for intruding on their privacy by collecting data without allowing an opt-out.

Some of the most harmful social media practices today relate to personal privacy. Examples include the collection of personal data; the selling and dissemination of data through algorithms designed to subtly manipulate our pocketbooks and tastes; the collection and use of data belonging to children; and the design of social media sites to be more addictive, all in service of the goal of commercializing data.

No comprehensive federal privacy scheme currently exists. Previous privacy bills have been few and narrowly tailored to relatively specific circumstances and topics: healthcare and medical data protection under HIPAA, protection of video rental data under the Video Privacy Protection Act, and narrow protection of children’s data under the Children’s Online Privacy Protection Act. All of these schemes are outdated and fall short of the immediate need for broad protection of the data social media widely collects and uses.

Current Bills on Privacy

Upon request from some of the biggest platforms, outcry from the public, and the White House’s call for federal privacy regulation, Congress appears poised to act. The 118th Congress has made privacy law a priority this term by introducing several bills related to social media privacy. At least ten bills are currently pending between the House and the Senate, addressing a variety of issues and concerns, from children’s data privacy to minimum ages for use and the designation of a new agency to monitor some aspects of privacy.

S. 744 – The Data Care Act of 2023 aims to protect social media users’ data privacy by imposing fiduciary duties on the platforms. The original iteration of the bill was introduced in 2021 and failed to receive a vote. It was reintroduced in March of 2023 and is currently pending. Under the Act, social media platforms would have duties to reasonably secure users’ data from access, to refrain from using the data in a way that could foreseeably “benefit the online service provider to the detriment of the end user,” and to prevent disclosure of users’ data unless the receiving party is also bound by these duties. The bill authorizes the FTC and certain state officials to take enforcement action upon breach of those duties. States would be permitted to take their own legal action against companies for privacy violations, and the FTC could intervene in enforcement efforts by imposing fines for violations.

H.R. 2701 – Perhaps the most comprehensive piece of legislation on the House floor is the Online Privacy Act. In 2023, the bill was reintroduced by Democrat Anna Eshoo after an earlier version of the bill failed to receive a vote and died in Congress. The Online Privacy Act aims to protect users by providing individual rights relating to the privacy of personal information, along with privacy and security requirements for the treatment of personal information. To accomplish this, the bill would establish a new agency, the Digital Privacy Agency, responsible for enforcement of these rights and requirements. The new individual privacy rights are broad and include the rights of access, correction, deletion, human review of automated decisions, individual autonomy, the right to be informed, and the right to impermanence, among others. This would be the most comprehensive plan to date. The establishment of a new agency tasked specifically with administering and enforcing privacy laws would be incredibly powerful, and the creation of this agency would be valuable irrespective of whether the bill is passed.

H.R. 821 – The Social Media Child Protection Act is a sister bill to one by a similar name that originated in the Senate. This bill aims to protect children from the harms of social media by limiting children’s access to it. Under the bill, social media platforms would be required to verify the age of every user before they access the platform, via submission of a valid identity document or another reasonable verification method. Platforms would be prohibited from allowing users under the age of 16 to access the platform. The bill also requires platforms to establish and maintain reasonable procedures to protect personal data collected from users. The bill provides for a private right of action as well as state and FTC enforcement.

S. 1291 – The Protecting Kids on Social Media Act is similar to its counterpart in the House, with slightly less tenacity. It similarly aims to protect children from social media’s harms. Under the bill, platforms must verify users’ ages, must not allow users to use the service unless their age has been verified, and must limit access to the platform for children under 12. The bill also prohibits retention and use of information collected during the age verification process. Platforms must take reasonable steps to require affirmative consent from the parent or guardian of a minor who is at least 13 years old for the creation of a minor account, and must reasonably allow the parent to later revoke that consent. The bill also prohibits the use of data collected from minors for algorithmic recommendations. It would require the Department of Commerce to establish a voluntary program for secure digital age verification for social media platforms. Enforcement would be through the FTC or state action.

S. 1409 – The Kids Online Safety Act, proposed by Senator Blumenthal of Connecticut, also aims to protect minors from online harms. This bill, as does the Online Safety Bill, establishes fiduciary duties for social media platforms regarding children using their sites. The bill requires that platforms act in the best interests of minors using their services, including by mitigating harms that may arise from use, sweeping in online bullying and sexual exploitation. Social media sites would be required to establish and provide access to safeguards, such as settings that restrict access to minors’ personal data, and to grant parents tools to supervise and monitor minors’ use of the platforms. Critically, the bill establishes a duty for social media platforms to create and maintain research portals for non-commercial purposes to study the effect that corporations like the platforms have on society.

Overall, these bills indicate Congress’s creative thinking and commitment to broad privacy protection for users from social media harms. I believe the establishment of a separate governing body, other than the FTC, which lacks the powers needed to compel compliance, is a necessary step. Recourse for violations on par with the EU’s new regulatory scheme, mainly fines in the billions, could help.

Many of the bills, for myriad aims, establish new fiduciary duties for platforms in preventing unauthorized use and harms to children. There is real promise in this scheme: establishing duties of loyalty, diligence, and care for one party has a sound basis in many areas of law and would be more easily understood in implementation.

The notion that platforms would need to be vigilant in knowing their content, studying its effects, and reporting those effects may do the most to create a stable future for social media.

Making platforms legally responsible for policing and enforcing their own policies and terms and conditions is another opportunity to further incentivize platforms. The FTC currently investigates policies that are misleading or unfair, sweeping in the social media sites, but there could be an opportunity to make the platforms legally responsible for enforcing their own policies regarding age, hate speech, and inappropriate content, for example.

What would you like to see considered in Privacy law innovation for social media regulation?

Social Media, Minors, and Algorithms, Oh My!

What is an algorithm and why does it matter?

Social media algorithms are intricately designed data organization systems aimed at maximizing user engagement by sorting and delivering content tailored to individual preferences. At their core, these algorithms collect and use extensive user data, employing machine learning techniques to understand and predict user behavior. They note and analyze hundreds of thousands of data points, including past interactions, likes, shares, content preferences, time spent viewing content, and social connections, to curate a personalized feed for each user. Algorithms are designed this way to keep users on the site, giving the site more time to put advertisements on the user’s feed and drive more profit. The fundamental objective is to capture and maintain user attention, expose the user to an optimal amount of advertising, and use the resulting data to keep the feed engaging for longer.
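
To make that scoring step concrete, here is a minimal sketch assuming a simple linear model with hand-set weights; production systems use large learned models, but the principle, many behavioral signals in and one engagement score out, is the same. Every feature name and weight below is invented for illustration.

```python
# Hypothetical behavioral signals and hand-set weights, for illustration only.
FEATURE_WEIGHTS = {
    "past_clicks_on_author": 0.4,   # how often the user clicks this author
    "watch_time_on_topic": 0.3,     # minutes spent on similar content
    "friend_interactions": 0.2,     # engagement from the user's connections
    "recency": 0.1,                 # newer posts get a small boost
}

def engagement_score(post_features: dict[str, float]) -> float:
    """Collapse a post's behavioral signals into one predicted-engagement score."""
    return sum(weight * post_features.get(name, 0.0)
               for name, weight in FEATURE_WEIGHTS.items())

def rank_feed(candidates: list[dict[str, float]]) -> list[dict[str, float]]:
    """Order candidate posts so the most 'engaging' ones surface first."""
    return sorted(candidates, key=engagement_score, reverse=True)
```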

Addiction comes in many forms

One key element contributing to the addictiveness of social media is the concept of variable rewards. Algorithms strategically present a mix of content, varying in type and engagement level, to keep users interested in their feed. This unpredictability taps into the psychological principle of operant conditioning, where intermittent reinforcement, such as receiving likes, comments, or discovering new content, reinforces habitual platform use. Every time a user sees an entertaining post or receives a positive notification, the brain releases dopamine, the main chemical associated with addiction and addictive behaviors. The constant stream of notifications and updates, fueled by algorithmic insights and carefully tailored content suggestions, creates a sense of anticipation for the next dopamine fix, encouraging users to frequently update and scan their feeds to receive the next ‘reward’ on their timeline. The algorithmic, numbers-driven emphasis on engagement metrics, such as the number of likes, comments, and shares on a post, further intensifies the competitive and social nature of social media platforms, promoting frequent use.

Algorithms know you too well

Furthermore, algorithms continuously adapt to user behavior through real-time machine learning. As users engage with content, algorithms analyze and refine their predictions, ensuring that the content remains compelling and relevant over time. This iterative feedback loop deepens the platform’s understanding of individual users, creating a specially curated and highly addictive feed that the user can always turn to for a boost of dopamine. This heightened social aspect, coupled with the algorithms’ ability to surface content that resonates deeply with the user, enhances the emotional connection users feel to the platform and their specific feed, which keeps them coming back time after time. Whether it is seeing a new, dopamine-producing post or posting a status that receives many likes and shares, every time one opens a social media app or website it can produce seemingly endless new content, further reinforcing regular, and often unhealthy, use.
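
That feedback loop fits in a few lines. The sketch below (illustrative only; real systems train far richer models) shows how each interaction nudges a stored preference weight, which in turn biases what is surfaced next, which invites the next interaction:

```python
def update_affinity(affinity: dict[str, float], topic: str,
                    engaged: bool, learning_rate: float = 0.1) -> None:
    """Nudge a stored topic preference toward the behavior just observed.

    Every scroll becomes training data: engagement pulls the weight toward
    1.0, so more of that topic is shown, inviting still more engagement.
    """
    current = affinity.get(topic, 0.0)
    target = 1.0 if engaged else 0.0
    affinity[topic] = current + learning_rate * (target - current)

# After a few watched videos, "gaming" dominates this user's feed ranking.
profile: dict[str, float] = {}
for _ in range(5):
    update_affinity(profile, "gaming", engaged=True)
print(profile)  # roughly {'gaming': 0.41}
```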

A fine line to tread

As explained above, social media algorithms are key to user engagement. They provide seemingly endless bouts of personalized content and maintain users’ undivided attention by understanding each user’s preferences in content. This pervasive influence extends to children, who are increasingly immersed in digital environments from an early age. Social media algorithms can offer constructive experiences for children by promoting educational content discovery, creativity, and social connectivity that would otherwise be impossible. Some platforms, like YouTube Kids, leverage algorithms to recommend age-appropriate content tailored to a child’s developmental stage. This personalized curation of interest-based content can enhance learning outcomes and produce a beneficial online experience for children. However, while age-appropriate content may not itself harm child viewers, it can still cause problems related to content addiction.

‘Protected Development’

Children are generally known to be naïve and impressionable, meaning full access to the internet can be harmful for their development, as they may take anything they see at face value. The American Psychological Association has said that, “[d]uring adolescent development, brain regions associated with the desire for attention, feedback, and reinforcement from peers become more sensitive. Meanwhile, the brain regions involved in self-control have not fully matured.” Social media algorithms play a pivotal role in shaping the content children can encounter by prioritizing engagement metrics such as likes, comments, and shares. In doing this, social media sites create an almost gamified experience that encourages frequent and prolonged use amongst children. Children also have a tendency to intensely fixate on certain activities, interests, or characters during their early development, further increasing the chances of being addicted to their feed.

Additionally, the addictive nature of social media algorithms poses significant risks to children’s physical and mental well-being. The constant stream of personalized content, notifications, and variable rewards can contribute to excessive screen time, impacting sleep patterns and physical health. Likewise, the competitive nature of engagement metrics may result in a sense of inadequacy or social pressure among young users, leading to issues such as cyberbullying, depression, low self-esteem, and anxiety.

Stop Addictive Feeds Exploitation (SAFE) for Kids

The New York legislature has spotted the anemic state of internet protection for children, identified the rising mental health issues relating to social media among youth, and announced its intention to pass laws to better protect kids online. The Stop Addictive Feeds Exploitation (SAFE) for Kids Act is aimed explicitly at social media companies and their feed-bolstering algorithms. The SAFE for Kids Act is intended to “protect the mental health of children from addictive feeds used by social media platforms, and from disrupted sleep due to night-time use of social media.”

Section 1501 of the Act would essentially prohibit operators of social media sites from providing addictive, algorithm-based feeds to minors without first obtaining parental permission. Instead, the default feed would be a chronologically sorted main timeline, the kind more popular in the infancy of social media. Section 1502 of the Act would require social media platforms to obtain parental consent before sending notifications between the hours of 12:00 AM and 6:00 AM and would create an avenue for opting out of access to the platform during those same hours. The Act would also provide a limit on the overall number of hours a minor can spend on a social media platform. Additionally, it would authorize the Office of the Attorney General to bring a legal action to enjoin violations or seek damages/civil penalties of up to $5,000 per violation, and would allow any parent or guardian of a covered minor to sue for damages of up to $5,000 per user per incident, or actual damages, whichever is greater.
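
A compliance sketch makes the two provisions concrete. This is my own illustrative reading in code, not statutory text; the field names and the engagement ranking are assumptions:

```python
from datetime import time

QUIET_START, QUIET_END = time(0, 0), time(6, 0)  # 12:00 AM to 6:00 AM

def select_feed(is_minor: bool, parental_consent: bool,
                posts: list[dict]) -> list[dict]:
    """Sec. 1501 sketch: minors without parental consent get chronology."""
    if is_minor and not parental_consent:
        # Newest first, using no data linked to the user or their device.
        return sorted(posts, key=lambda p: p["timestamp"], reverse=True)
    # Otherwise the platform may serve its usual engagement-ranked feed.
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

def may_notify(is_minor: bool, parental_consent: bool, now: time) -> bool:
    """Sec. 1502 sketch: suppress overnight notifications absent consent."""
    in_quiet_hours = QUIET_START <= now < QUIET_END
    return not (is_minor and not parental_consent and in_quiet_hours)
```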

A sign of the times

The Act accurately represents the growing concerns of the public in its justification section, where it details many of the above-referenced problems with social media algorithms and the State’s role in curtailing the well-known negative effects they can have on a protected class. The New York legislature has identified the problems that social media addiction can present and has taken necessary steps in an attempt to curtail them.

Social media algorithms will always play an integral role in shaping user experiences. However, their addictive nature should rightfully subject them to scrutiny, especially in their effects on children. While social media algorithms offer personalized content and can produce constructive experiences, their addictive nature poses significant risks, prompting legislative responses like the Stop Addictive Feeds Exploitation (SAFE) for Kids Act. Considering the profound impact of these algorithms on young users’ physical and mental well-being, a critical question arises: how can we effectively balance the benefits of algorithm-driven engagement with the importance of protecting children from potential harm in the ever-evolving digital landscape? The SAFE for Kids Act is a step in the right direction, inspiring critical reflection on the broader responsibility of parents and regulatory bodies to cultivate a digital environment that nurtures healthy online experiences for the next generation.

New York is Protecting Your Privacy

TAKING A STANCE ON DATA PRIVACY LAW

The digital age has brought with it unprecedented complexity surrounding personal data and the need for comprehensive data legislation. Recognizing this gap in legislative protection, New York has introduced the New York Privacy Act (NYPA) and the Stop Addictive Feeds Exploitation (SAFE) For Kids Act, two comprehensive initiatives designed to better document and safeguard personal data from the consumer side of data collection transactions. New York is taking a stand to protect consumers and children from the harms of data harvesting.

Currently under consideration in the Standing Committee on Consumer Affairs and Protection, chaired by Assemblywoman Nily Rozic, the New York Privacy Act was introduced as “An Act to amend the general business law, in relation to the management and oversight of personal data.” The NYPA was sponsored by State Senator Kevin Thomas and closely resembles the California Consumer Privacy Act (CCPA), which was finalized in 2019. By passing the NYPA, New York would become just the 12th state to adopt a comprehensive data privacy law protecting state residents.

DOING IT FOR THE DOLLAR

Companies buy and sell millions of users’ sensitive personal data records in the pursuit of boosting profits. By purchasing personal user data from social media sites, web browsers, and other applications, advertising companies can predict and drive trends that will increase product sales among different target groups.

Social media companies are notorious for selling user data to data collection companies: things such as your name, phone number, payment information, email address, stored videos and photos, photo and file metadata, IP address, networks and connections, messages, videos watched, advertisement interactions, and sensor data, as well as the time, frequency, and duration of your activity on the site. The NYPA targets businesses like these by regulating legal persons that conduct business in the state of New York or that produce products and services aimed at New York residents. To be regulated, an entity must meet at least one of the following thresholds (sketched in code after the list):

  • (a) have annual gross revenue of twenty-five million dollars or more;
  • (b) control or process personal data of fifty thousand consumers or more;
  • or (c) derive over fifty percent of gross revenue from the sale of personal data.
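
Read as a checklist, the coverage test is disjunctive: meeting any one prong brings an entity under the Act. A rough sketch of the thresholds (my paraphrase in code; the statutory text controls):

```python
def nypa_applies(annual_gross_revenue: float,
                 consumers_whose_data_is_processed: int,
                 revenue_from_personal_data_sales: float) -> bool:
    """Return True if any one of the NYPA's three prongs is met."""
    prong_a = annual_gross_revenue >= 25_000_000
    prong_b = consumers_whose_data_is_processed >= 50_000
    prong_c = (annual_gross_revenue > 0 and
               revenue_from_personal_data_sales / annual_gross_revenue > 0.50)
    return prong_a or prong_b or prong_c

# A small firm with $5M revenue is still covered once it processes
# the personal data of 50,000 or more consumers (prong b).
print(nypa_applies(5_000_000, 60_000, 0.0))  # True
```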

The NYPA does more for residents of New York because it places the consumer first: the Act is not restricted to regulating businesses operating within New York but covers every New York State resident who may be subject to targeted data collection, an immense step forward in giving consumers control over their digital footprint.

MORE RIGHTS, LESS FRIGHT

The NYPA works by granting all New Yorkers additional rights regarding how their data is maintained by controllers to which the Act applies. The comprehensive rights granted to New York consumers include the rights to notice, opt-out, consent, portability, correction, and deletion of personal information. The right to notice requires that each controller provide a conspicuous and readily available notice statement describing the consumer’s rights, indicating the categories of personal data the controller will collect, where it is collected from, and what it may be used for. The right to opt out allows consumers to opt out of the processing of their personal data for targeted advertising, the sale of their personal data, and profiling. This gives the consumer an advantage when browsing sites and using apps, as they will be duly informed of exactly what information they are giving up online.

While all the rights included in the NYPA are groundbreaking for the New York consumer, the right to consent to sensitive data collection and the right to delete data cannot be overstated. The right to consent requires controllers to conspicuously ask for express consent before collecting sensitive personal data. It also contains a zero-discrimination clause to protect consumers who do not give controllers express consent to use their personal data. The right to delete requires controllers to delete any or all of a consumer’s personal data upon request, and to do so within 45 days of receiving the request. These two clauses alone can do more for New Yorkers’ digital privacy rights than ever before, allowing complete control over who may access and keep sensitive personal data.
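
As a small worked example of that 45-day window, here is a sketch computing the latest compliance date for a hypothetical deletion request; the helper name is illustrative, and the statute’s exact counting rules may differ.

```python
from datetime import date, timedelta

def deletion_deadline(request_received: date) -> date:
    """Latest date by which a controller must complete a deletion request (sketch)."""
    return request_received + timedelta(days=45)

print(deletion_deadline(date(2024, 3, 1)))  # 2024-04-15
```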

BUILDING A SAFER FUTURE

Following the early success of the NYPA, New York announced its comprehensive plan to better protect children from the harms of social media algorithms, which are among the main drivers of personal data collection. Governor Kathy Hochul, State Senator Andrew Gounardes, and Assemblywoman Nily Rozic recently proposed the Stop Addictive Feeds Exploitation (SAFE) For Kids Act, directly targeting social media sites and their algorithms. It has long been suspected that social media usage contributes to worsening mental health conditions in the United States, especially among youths. The SAFE For Kids Act seeks to require parental consent before children can access social media feeds that use algorithms to boost usage.

On top of selling user data, social media sites like Facebook, YouTube, and X/Twitter also use carefully constructed algorithms to push content that the user has expressed interest in, usually based on the profiles they click on or the posts they ‘like’. Social media sites feed user data to algorithms they’ve designed to promote content that will keep the user engaged for longer, which exposes the user to more advertisements and produces more revenue.

Children, however, are particularly susceptible to these algorithms and, depending on the posts they view, can be exposed to harmful images or content with serious consequences for their mental health. Social media algorithms can show children things they were never meant to see; children’s naiveté and blind trust are traits ill-suited to internet use. Distressing posts or controversial images could be plastered across children’s feeds if the algorithm determines that putting them there would drive better engagement. Under the SAFE For Kids Act, without parental consent, children on social media sites would see their feed in chronological order and only see posts from users they ‘follow’ on the platform, as sketched below. This change would completely alter the way platforms treat accounts associated with children, ensuring they are not exposed to content they do not seek out themselves. This legislation would build upon the foundations established by the NYPA, opening the door to further regulations that could increase protections for the average consumer and, more importantly, for the average child online.
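
To make the contemplated change concrete, here is a minimal Python sketch contrasting an engagement-ranked feed with the chronological, followed-accounts-only default the Act would impose for minors absent parental consent. The data model and scoring are simplified assumptions for illustration, not any platform’s actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int           # e.g., seconds since epoch
    engagement_score: float  # hypothetical proxy for predicted engagement

def algorithmic_feed(posts: list[Post]) -> list[Post]:
    """Default feed: rank by predicted engagement, from any author."""
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)

def minor_feed(posts: list[Post], followed: set[str],
               parental_consent: bool) -> list[Post]:
    """Feed for a minor's account under the SAFE For Kids Act (sketch).

    Without parental consent: only posts from followed accounts,
    newest first. With consent, the engagement-ranked default applies.
    """
    if parental_consent:
        return algorithmic_feed(posts)
    followed_only = [p for p in posts if p.author in followed]
    return sorted(followed_only, key=lambda p: p.timestamp, reverse=True)
```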

New Yorkers: If you have ever spent time on the internet, your personal data is out there, but now you have the power to protect it.

Destroying Defamation

The explosion of Fake News across social media sites is destroying a plaintiff’s ability to succeed in a defamation action. The recent proliferation of rushed journalism, online conspiracy theories, and the belief that most stories are, in fact, “Fake News” has created a desert of veracity. Widespread public skepticism about even the most mainstream social media reporting means plaintiffs struggle to convince jurors that third parties believed any reported statement to be true. Such proof is necessary for a plaintiff to establish the elements of defamation.

Fake News Today

Fake News is any journalistic story that knowingly and intentionally includes untrue factual statements. Today, many speak of Fake News as a noun in its own right. There is no shortage of examples of Fake News and its impact.

      • Pizzagate: During the 2016 presidential election, Edgar Madison Welch, 28, read a story on Facebook claiming that Hillary Clinton was running a child trafficking ring out of the basement of a pizzeria. Welch, a self-described vigilante, shot open a locked door of the pizzeria with his AR-15.
      • A study by three MIT scholars found that false news stories spread faster on Twitter than true stories, with the former being 70% more likely to be retweeted than the latter.
      • During the defamation trial of Amber Heard and Johnny Depp, a considerable number of “Fake News” reports circulated across social media platforms, particularly TikTok, Twitter, and YouTube, attacking Ms. Heard at a disproportionately higher rate than Mr. Depp.


What is Defamation?

To establish defamation, a plaintiff must show the defendant published a false assertion of fact that damages the plaintiff’s reputation. Hyperbolic language or other indications that a statement was not meant to be taken seriously are not actionable. Today’s understanding that everything on the Internet is susceptible to manipulation destroys defamation.

Because the factuality of a statement is a question of law, a plaintiff must first convince a judge that the offending statement is fact and not opinion. Courts often find that Internet and social media statements are hyperbole or opinion. If a plaintiff succeeds in persuading the judge, the issue of whether the statement defamed the plaintiff heads to the jury. A jury faced with defamation must determine whether the statement of fact harmed the plaintiff’s reputation or livelihood to the extent that the plaintiff incurred damages. The prevalence of Fake News creates another layer of difficulty for the Internet plaintiff, who must convince the jury that third parties believed the false statement to be true.

Defamation’s Slow and Steady Erosion

Since the 1960s, the judiciary has limited plaintiffs’ ability to succeed in defamation claims. The decisions in New York Times Co. v. Sullivan and Gertz v. Robert Welch, Inc. increased the difficulty for public figures, and those with limited public-figure status, to succeed by requiring them to prove actual malice by the defendant, a standard higher than the mere negligence standard that applies to individuals who are not of community interest.

The rise of Internet use, mainly social media, presents plaintiffs with yet another hurdle. Plaintiffs can only succeed if the challenged statement is fact, not opinion. However, judges frequently find that statements made on the Internet are opinions, not facts. The combined effect of Supreme Court limitations on proof and the growing belief that social media posts are mostly opinion has limited the plaintiff’s ability to succeed in a defamation claim.

Destroying Defamation

If the Supreme Court and social media have eroded defamation, Fake News has destroyed it. Today, convincing a jury that a false statement purporting to be fact has defamed a plaintiff is difficult given the dual problems of society’s pervasive mistrust of the media and the understanding that information on the Internet is generally opinion, not fact. Fake News sows confusion and makes it almost impossible for jurors to believe any statement has the credibility necessary to cause harm.

To be clear, in some instances, Fake News is so intolerable that a jury will find for the plaintiffs. A Connecticut jury found conspiracy theorist Alex Jones liable for defamation based on his assertion that the government had faked the Sandy Hook shootings.

But often, plaintiffs are unsuccessful even where the challenged language is laced with untruths. Fox News successfully defended itself against a lawsuit claiming that it had aired false and deceptive content about the coronavirus, even though its reporting was, in fact, untrue.

Similarly, a federal judge dismissed a defamation case against Fox News over Tucker Carlson’s report that the plaintiff had extorted then-President Donald Trump. In reaching that conclusion, the judge observed that Carlson’s comments were rhetorical hyperbole and that the reasonable viewer “arrive[s] with the appropriate amount of skepticism.” Reports of media success in defending against defamation claims further fuel media mistrust.

The current polarization caused by identity politics is furthering the tendency for Americans to mistrust the media. Sarah Palin announced that the goal of her recent defamation case against The New York Times was to reveal that the “lamestream media” publishes “fake news.”

If jurors believe that no reasonable person could credit a challenged statement as accurate, they cannot find that the allegedly defamatory statement caused harm. An essential element of defamation is that the defendant’s remarks damaged the plaintiff’s reputation. The large number of people who believe the news is fake, the media’s rush to publish, and external attacks on credible journalism have made truth itself contested. The potential for defamatory harm is minimal when every news story is questionable. Ultimately, the presence of Fake News is a blight on the tort of defamation and, like the credibility of present-day news organizations, will erode it to the point of irrelevance.

Is there any hope for a world without Fake News?


Update Required: An Analysis of the Conflict Between Copyright Holders and Social Media Users

Opening

For anyone who is as chronically online as yours truly, in one way or another we have seen our favorite social media influencers, artists, commentators, and content creators complain about their problems with the current US Intellectual Property (IP) system. Be it that their posts are deleted without explanation or that portions of their video files are muted, the combination of factors leading to copyright issues on social media is endless. This, in turn, has a markedly negative impact on free and fair expression on the internet, especially within the context of our contemporary online culture. For better or worse, interaction in society today is intertwined with the services of social media sites. Conflict arises when the interests of copyright holders clash with this reality. Those holders are empowered by byzantine and unrealistic laws that hamper our ability to exist online as freely as we do in real life. While copyright holders do have legitimate and fundamental rights that need protection, those rights must be balanced against desperately needed reform. People’s interaction with society and culture must not be hampered, for that is one of the many foundations of a healthy and thriving society. To understand this, I venture to analyze the current legal infrastructure we find ourselves in.

Current Relevant Law

The current controlling laws for copyright issues on social media are the Copyright Act of 1976 and the Digital Millennium Copyright Act (DMCA). The DMCA is most relevant to our analysis; it gives copyright holders relatively unrestrained power to demand removal of their property from the internet and to punish those using illegal methods to get ahold of it. This broad law, of course, impacted social media sites. Title II of the law added 17 U.S. Code § 512 to the Copyright Act of 1976, creating several safe harbor provisions for online service providers (OSPs), such as social media sites, when hosting content posted by third parties. The most relevant of these safe harbors is 17 U.S. Code § 512(c), which states that an OSP cannot be liable for monetary damages if it meets several requirements and provides copyright holders a quick and easy way to claim their property. The mechanism, known as a “notice and takedown” procedure, varies by social media service and is outlined in each service’s terms and conditions (YouTube, Twitter, Instagram, TikTok, Facebook/Meta). Regardless, they all have a complaint form or application that follows the rules of the DMCA and usually will rapidly strike objectionable social media posts. 17 U.S. Code § 512(g) does provide the user some leeway with an appeal process, and § 512(f) imposes liability on those who send unjustified takedowns. Nevertheless, a perfect balance of rights is not achieved.
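
As a rough illustration of how the § 512(c) notice-and-takedown mechanism and the § 512(g) counter-notice process fit together, the Python sketch below models the procedural states a post can move through. The class and method names are hypothetical, and the restoration window noted in the comments is an approximation of the statute’s timing, not legal advice.

```python
from enum import Enum, auto

class Status(Enum):
    LIVE = auto()
    REMOVED = auto()          # taken down after a 512(c) notice
    COUNTER_NOTICED = auto()  # user filed a 512(g) counter-notice
    RESTORED = auto()

class HostedPost:
    """Sketch of an OSP's handling of one post under the DMCA safe harbor."""

    def __init__(self) -> None:
        self.status = Status.LIVE

    def receive_takedown_notice(self) -> None:
        # To keep its 512(c) safe harbor, the OSP removes the material
        # expeditiously upon receiving a compliant notice.
        if self.status == Status.LIVE:
            self.status = Status.REMOVED

    def receive_counter_notice(self) -> None:
        # 512(g) lets the user contest the removal.
        if self.status == Status.REMOVED:
            self.status = Status.COUNTER_NOTICED

    def resolve(self, claimant_filed_suit: bool) -> None:
        # If the claimant does not sue within the statutory window
        # (roughly 10-14 business days), the OSP may restore the post.
        if self.status == Status.COUNTER_NOTICED and not claimant_filed_suit:
            self.status = Status.RESTORED
```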

The doctrine of fair use, codified at 17 U.S. Code § 107 via the Copyright Act of 1976, also plays a massive role here. It established a legal pathway for the use of copyrighted material for “purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research” without having to acquire rights to said IP from the owner. This legal safety valve has been a blessing for social media users, especially with recent victories like Hosseinzadeh v. Klein, which protected reaction content from DMCA takedowns. Cases like Lenz v. Universal Music Corp. further established that copyright holders must consider fair use before preparing takedowns. Nevertheless, holders still fail to consider those rights, and sites are quick to react to DMCA complaints. Furthermore, the flawed reporting systems of social media sites invite abuse by unscrupulous actors who fake ownership. On top of that, such legal actions can be psychologically and financially intimidating, especially when facing off with a major IP holder, adding to the unbalanced power dynamic between the holder and the poster.

The Telecommunications Act of 1996, which focuses primarily on cellular and landline carriers, is also particularly relevant to social media companies in this conflict. At the time of its passing, the internet was still in its infancy, so the Act does not reflect the cultural paradigm we now find ourselves in. Specifically, the contentious Section 230 of the Communications Decency Act (Title V of the 1996 Act) works against social media companies here, incorporating a broad and draconian rule on copyright infringement. 47 U.S. Code § 230(e)(2) states in no uncertain terms that “nothing in this section shall be construed to limit or expand any law pertaining to intellectual property.” Courts, as in Perfect 10, Inc. v. CCBill LLC, have read this to mean that Section 230 does not shield such companies from liability for user copyright infringement. This gap in the protective armor of Section 230 is a great concern to such companies; therefore, they react strongly to such issues.

What is To Be Done?

Arguably, fixing the issues around copyright on social media is far beyond the capacity of current legal mechanisms. With billions of posts made each day across various sites, comprehensive policing by copyright holders and platforms is beyond reason. It will take serious reform in the socio-cultural, technological, and legal arenas before a true balance of liberty and justice can be established. Perhaps we can start with an understanding by copyright holders not to overreact when their property is posted online. Popularity is key to success in business, so shouldn’t you value the free marketing that comes with your copyrighted property being shared honestly within the cultural sphere of social media? Social media sites can also expand their DMCA case management teams or create tools that let users credit, and even share revenue with, the copyright holder when the poster is an influencer or content creator. Finally, congressional action is desperately needed, as we have entered a new era that requires new laws. That being said, achieving a balance between the free exchange of ideas and creations and the rights of copyright holders must be the cornerstone of the government’s approach to socio-cultural expression on social media. That is the only way we can progress as an ever more online society.


Image: Freepik.com (Image by pikisuperstar: https://www.freepik.com/free-vector/flat-design-intellectual-property-concept-with-woman-laptop_10491685.htm#query=intellectual%20property&position=2&from_view=keyword)
