Privacy Please: Privacy Law, Social Media Regulation and the Evolving Privacy Landscape in the US

Social media regulation is a touchy subject in the United States. Congress and the White House have proposed, advocated, and voted on various bills aimed at protecting people from data misuse and misappropriation, misinformation, harms to children, and the implications of vast data collection. Some of the most potent concerns about social media stem from the platforms' use and misuse of information, from the method of collection, to notice of collection, to the use of collected information. Efforts to pass a bill regulating social media have been frustrated, primarily by the First Amendment right to free speech. Congress has thus far failed to enact meaningful regulation of social media platforms.

The way forward may well be through privacy law. Privacy laws give people some right to control their own personhood, including their data, their right to be left alone, and how and when others see and view them. Privacy law originated in its current form in the late 1800s, with the impetus being freedom from constant surveillance by paparazzi and reporters and the right to control one's own personal information. As technology evolved, our understanding of privacy rights grew to encompass rights in our likeness, our reputation, and our data. Current US privacy laws do not directly address social media, and a struggle is currently playing out among the vast data collection practices of the platforms, immunity for platforms under Section 230, and private rights of privacy for users.

There is very little federal privacy law, and that which does exist is narrowly tailored to specific purposes and circumstances in the form of specific bills. Some states have enacted their own privacy law schemes, with California at the forefront and Virginia, Colorado, Connecticut, and Utah following in its footsteps. In the absence of a comprehensive federal scheme, privacy law is often judge-made, and it offers several private rights of action for a person whose right to be left alone has been invaded in some way. These are tort actions available for one person to bring against another for a violation of their right to privacy.

Privacy Law Introduction

Privacy law policy in the United States is premised on three fundamental personal rights to privacy:

  1. Physical right to privacy – the right to control your own information.
  2. Privacy of decisions – such as decisions about sexuality, health, and child-rearing. These are the constitutional rights to privacy; they are typically not about information, but about an act that flows from the decision.
  3. Proprietary privacy – the ability to protect your information from being misused by others in a proprietary sense.

Privacy Torts

Privacy law, as it concerns the individual, gives rise to four separate tort causes of action for invasion of privacy:

  1. Intrusion upon seclusion – One intentionally intrudes upon the reasonable expectation of seclusion of another, physically or otherwise, and the intrusion is objectively highly offensive.
  2. Publication of private facts – One gives publicity to a matter concerning the private life of another that is not of legitimate concern to the public, and the matter publicized would be objectively highly offensive. The First Amendment provides a strong defense for publication of truthful matters when they are considered newsworthy.
  3. False light – One gives publicity to a matter concerning another that places the other before the public in a false light, where the false light would be objectively highly offensive and the actor had knowledge of, or acted in reckless disregard as to, the falsity of the publicized matter and the false light in which the other would be placed.
  4. Appropriation of name and likeness – One appropriates another's name or likeness for the defendant's own use or benefit. There is no appropriation when a person's picture is used to illustrate a non-commercial, newsworthy article. The appropriation is usually commercial in nature but need not be, and it need not be of the name itself: it can be of the plaintiff's broader identity, including the reputation, prestige, social or commercial standing, public interest, or other value in the plaintiff's likeness.

These private rights of action are currently unavailable for use against social media platforms because of Section 230 of the Communications Decency Act, which provides broad immunity to online providers for content posted on their platforms by third parties. Section 230 has so far prevented any of the privacy torts from being raised against social media platforms.

The Federal Trade Commission (FTC) and Social Media

Privacy law can implicate social media platforms when their practices become unfair or deceptive to the public, through investigation by the Federal Trade Commission (FTC). The FTC is the only federal agency with both consumer protection and competition jurisdiction over broad sectors of the economy, and it investigates business practices that are unfair or deceptive. Section 5 of the FTC Act, 15 U.S.C. § 45, prohibits "unfair or deceptive acts or practices in or affecting commerce" and grants the FTC broad jurisdiction over the privacy practices of businesses. A trade practice is unfair if it causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and is not outweighed by countervailing benefits to consumers or competition. A deceptive act or practice is a material representation, omission, or practice that is likely to mislead a consumer acting reasonably in the circumstances, to the consumer's detriment.

Critically, there is no private right of action under the FTC Act; enforcement belongs to the FTC alone. The FTC cannot impose fines for initial Section 5 violations but can seek injunctive relief. By design, the FTC has very limited rulemaking authority, and it looks to consent decrees and procedural, long-lasting relief as its ideal remedy. The FTC pursues several types of misleading or deceptive policies and practices that implicate social media platforms: notice and choice paradigms, broken promises, retroactive policy changes, inadequate notice, and inadequate security measures. Its primary objective is to negotiate a settlement in which the company submits to certain measures of control and oversight by the FTC for a set period of time. Violations of those agreements can yield additional consequences, including steep fines and vulnerability to class action lawsuits.

Relating to social media platforms, the FTC has investigated misleading terms and conditions and violations of platforms' own policies. In In re Snapchat, the platform claimed that users' posted information disappeared completely after a certain period of time; however, through third-party apps and manipulation of users' posts off of the platform, posts could be retained. The FTC and Snapchat settled through a consent decree that subjects Snapchat to FTC oversight for 20 years.

The FTC has also investigated Facebook for violating its own privacy policy. Facebook was ordered to pay a $5 billion penalty and to submit to new restrictions and a modified corporate structure that holds the company accountable for the decisions it makes about its users' privacy, settling FTC charges that the company violated a 2012 agreement with the agency.

Unfortunately, none of these measures directly give individuals more power over their own privacy. Nor do these policies and processes give individuals any right to hold platforms responsible for being misled by algorithms using their data, or for intrusion into their privacy by collecting data without allowing an opt-out.

Some of the most harmful social media practices today relate to personal privacy. Examples include the collection of personal data, the selling and dissemination of data through algorithms designed to subtly manipulate our pocketbooks and tastes, the collection and use of data belonging to children, and the design of social media sites to be ever more addictive, all in service of the commercialization of data.

No comprehensive federal privacy scheme exists. Previous bills on privacy have been few and narrowly tailored to relatively specific circumstances and topics: healthcare and medical data protection under HIPAA, protection of video rental data under the Video Privacy Protection Act, and narrow protection for children's data under the Children's Online Privacy Protection Act. All of these schemes are outdated and fall short of meeting the immediate need for broad protection of the widely collected and broadly utilized data flowing from social media.

Current Bills on Privacy

Upon request from some of the biggest platforms, outcry from the public, and the White House's call for federal privacy regulation, Congress appears poised to act. The 118th Congress has made privacy law a priority this term by introducing several bills related to social media privacy. At least ten bills are currently pending between the House and the Senate, addressing a variety of issues and concerns, from children's data privacy to a minimum age for use and the designation of a new agency to monitor certain aspects of privacy.

S. 744 – The Data Care Act of 2023 aims to protect social media users' data privacy by imposing fiduciary duties on the platforms. The original iteration of the bill was introduced in 2021 and failed to receive a vote. It was re-introduced in March of 2023 and is currently pending. Under the act, social media platforms would have duties to reasonably secure users' data from unauthorized access, to refrain from using the data in any way that could foreseeably "benefit the online service provider to the detriment of the end user," and to prevent disclosure of users' data unless the receiving party is also bound by these duties. The bill authorizes the FTC and certain state officials to take enforcement actions upon breach of those duties: states would be permitted to take their own legal action against companies for privacy violations, and the FTC could intervene in those enforcement efforts by imposing fines for violations.

H.R. 2701 – Perhaps the most comprehensive piece of legislation on the House floor is the Online Privacy Act. In 2023, the bill was reintroduced by Democrat Anna Eshoo after an earlier version of the bill failed to receive a vote and died in Congress. The Online Privacy Act aims to protect users by providing individuals rights relating to the privacy of their personal information, along with privacy and security requirements for the treatment of personal information. To accomplish this, the bill would establish a new agency, the Digital Privacy Agency, responsible for enforcing those rights and requirements. The new individual privacy rights are broad and include the rights of access, correction, deletion, human review of automated decisions, individual autonomy, the right to be informed, and the right to impermanence, among others. This would be the most comprehensive plan to date. The establishment of a new agency tasked specifically with administering and enforcing privacy laws would be incredibly powerful, and the creation of such an agency would be valuable irrespective of whether this bill is passed.

H.R. 821 – The Social Media Child Protection Act is a sister bill to one by a similar name that originated in the Senate. This bill aims to protect children from the harms of social media by limiting children's access to it. Under the bill, social media platforms would be required to verify the age of every user before that user accesses the platform, either through submission of a valid identity document or by another reasonable verification method, and would be prohibited from allowing users under the age of 16 to access the platform. The bill also requires platforms to establish and maintain reasonable procedures to protect personal data collected from users, and it provides for a private right of action as well as state and FTC enforcement.

S. 1291 – The Protecting Kids on Social Media Act is similar to its counterpart in the House, with slightly less tenacity. It likewise aims to protect children from social media's harms. Under the bill, platforms must verify their users' ages, must not allow a user onto the service until their age has been verified, and must limit access to the platform for children under 13. The bill also prohibits the retention and use of information collected during the age verification process. Platforms must take reasonable steps to require affirmative consent from the parent or guardian of a minor who is at least 13 years old before a minor's account is created, and must reasonably allow the parent to later revoke that consent. The bill also prohibits the use of data collected from minors for algorithmic recommendations, and it would require the Department of Commerce to establish a voluntary program for secure digital age verification for social media platforms. Enforcement would be through the FTC or state action.

S. 1409 – The Kids Online Safety Act, proposed by Senator Blumenthal of Connecticut, also aims to protect minors from online harms. This bill, like the Online Safety Bill, establishes fiduciary duties for social media platforms with respect to the children using their sites. The bill requires that platforms act in the best interests of minors using their services, including by mitigating harms that may arise from use, such as online bullying and sexual exploitation. Social media sites would be required to establish and provide access to safeguards, such as settings that restrict access to minors' personal data, and to grant parents tools to supervise and monitor minors' use of the platforms. Critically, the bill establishes a duty for social media platforms to create and maintain research portals for non-commercial purposes, enabling study of the effects that corporations like the platforms have on society.

Overall, these bills reflect Congress's creative thinking and commitment to broad privacy protection for users against social media harms. I believe establishing a separate governing body, rather than relying on the FTC, which lacks the powers needed to compel compliance, is a necessary step. Recourse for violations on par with the EU's new regulatory scheme, mainly fines in the billions, could also help.

Many of the bills, for myriad aims, would establish new fiduciary duties for the platforms to prevent unauthorized use of data and harms to children. There is real promise in this scheme: establishing duties of loyalty, diligence, and care owed by one party to another has a sound basis in many areas of law and would be readily understood in implementation.

The notion that platforms would need to be vigilant in knowing their content, studying its effects, and reporting those effects may do the most to create a stable future for social media.

Legal responsibility for platforms to police and enforce their own policies and terms and conditions is another opportunity to further incentivize platforms. The FTC currently investigates policies that are misleading or unfair, which sweeps in the social media sites, but there is an opportunity to make the platforms legally responsible for enforcing their own policies regarding age, hate speech, and inappropriate content, for example.

What would you like to see considered in Privacy law innovation for social media regulation?

New York is Protecting Your Privacy

TAKING A STANCE ON DATA PRIVACY LAW

The digital age has brought with it unprecedented complexity surrounding personal data and the need for comprehensive data legislation. Recognizing this gap in legislative protection, New York has introduced the New York Privacy Act (NYPA), and the Stop Addictive Feeds Exploitation (SAFE) For Kids Act, two comprehensive initiatives designed to better document and safeguard personal data from the consumer side of data collection transactions. New York is taking a stand to protect consumers and children from the harms of data harvesting.

Currently under consideration in the Standing Committee on Consumer Affairs and Protection, chaired by Assemblywoman Nily Rozic, the New York Privacy Act was introduced as "An Act to amend the general business law, in relation to the management and oversight of personal data." The NYPA was sponsored by State Senator Kevin Thomas and closely resembles the California Consumer Privacy Act (CCPA), which was finalized in 2019. In passing the NYPA, New York would become just the 12th state to adopt a comprehensive data privacy law protecting state residents.

DOING IT FOR THE DOLLAR

Companies buy and sell millions of users' sensitive personal data in pursuit of boosting profits. By purchasing personal user data from social media sites, web browsers, and other applications, advertising companies can predict and drive trends that will increase product sales among different target groups.

Social media companies are notorious for selling user data to data collection companies: data such as your name, phone number, payment information, email address, stored videos and photos, photo and file metadata, IP address, networks and connections, messages, videos watched, advertisement interactions, and sensor data, as well as the time, frequency, and duration of your activity on the site. The NYPA targets businesses like these by regulating legal persons that conduct business in the state of New York or that produce products and services aimed at residents of New York. An entity is regulated if it meets any of the following thresholds:

  • (a) have annual gross revenue of twenty-five million dollars or more;
  • (b) control or process the personal data of fifty thousand consumers or more; or
  • (c) derive over fifty percent of gross revenue from the sale of personal data.

The NYPA does more for residents of New York because it places the consumer first: the Act is not restricted to regulating businesses operating within New York but covers every resident of New York State who may be subject to targeted data collection, an immense step forward in giving consumers control over their digital footprint.

MORE RIGHTS, LESS FRIGHT

The NYPA works by granting all New Yorkers additional rights regarding how their data is maintained by the controllers to which the Act applies. The comprehensive rights granted to New York consumers include the rights to notice, opt out, consent, portability, correction, and deletion of personal information. The right to notice requires that each controller provide a conspicuous and readily available notice statement describing the consumer's rights, indicating the categories of personal data the controller will be collecting, where it is collected from, and what it may be used for. The right to opt out allows consumers to opt out of the processing of their personal data for the purposes of targeted advertising, the sale of their personal data, and profiling. This gives the consumer an advantage when browsing sites and using apps, as they will be duly informed of exactly what information they are giving up when online.

While all the rights included in the NYPA are groundbreaking for the New York consumer, the importance of the right to consent to sensitive data collection and the right to delete data cannot be overstated. The right to consent requires controllers to conspicuously ask for express consent to collect sensitive personal data, and it contains a zero-discrimination clause to protect consumers who do not give controllers express consent to use their personal data. The right to delete requires controllers to delete any or all of a consumer's personal data upon request, within 45 days of receiving the request. These two clauses alone could do more for New Yorkers' digital privacy rights than ever before, allowing complete control over who may access and keep sensitive personal data.

BUILDING A SAFER FUTURE

Following the early success of the NYPA, New York announced its comprehensive plan to better protect children from the harms of social media algorithms, which are some of the main drivers of personal data collection. Governor Kathy Hochul, State Senator Andrew Gounardes, and Assemblywoman Nily Rozic recently proposed the Stop Addictive Feeds Exploitation (SAFE) For Kids Act, directly targeting social media sites and their algorithms. It has long been suspected that social media usage contributes to worsening mental health in the United States, especially among youths. The SAFE For Kids Act seeks to require parental consent for children to have access to social media feeds that use algorithms to boost usage.

On top of selling user data, social media sites like Facebook, YouTube, and X/Twitter also use carefully constructed algorithms to push content that the user has expressed interest in, usually based on the profiles they click on or the posts they ‘like’. Social media sites feed user data to algorithms they’ve designed to promote content that will keep the user engaged for longer, which exposes the user to more advertisements and produces more revenue.

Children, however, are particularly susceptible to these algorithms, and depending on the posts they view, they can be exposed to harmful images or content with serious consequences for their mental health. Social media algorithms can show children things they are not meant to see, preying on their naiveté and blind trust, traits that do not mix well with internet use. Distressing posts or controversial images could be plastered across children's feeds whenever the algorithm determines that putting them there would drive better engagement. Under the SAFE For Kids Act, without parental consent, children on social media sites would see their feeds in chronological order and only see posts from users they 'follow' on the platform. This change would completely alter the way platforms treat accounts associated with children, ensuring they are not exposed to content they do not seek out themselves. This legislation would build upon the foundations established by the NYPA, opening the door to further regulations that could increase protections for the average consumer and, more importantly, for the average child online.

New Yorkers: If you have ever spent time on the internet, your personal data is out there, but now you have the power to protect it.

Don’t Talk to Strangers! But if it’s Online, it’s Okay?

It is 2010.  You are in middle school, and your parents let your best friend come over on a Friday night.  You gossip, talk about crushes, and go on all the social media sites.  You decide to try the latest one, Omegle.  You are automatically paired with a stranger to talk to and video chat with.  You speak to a few random people, and then, with the next click, a stranger's genitalia are on your screen.

Stranger Danger

Omegle is a free video-chatting social media platform.  Its primary function has become meeting new people and arranging “online sexual rendezvous.”  Registration is not required.  Omegle randomly pairs users for one-on-one video sessions.  These sessions are anonymous, and you can skip to a new person at any time.  Although there is a large warning on the home screen saying “you must be 18 or older to use Omegle”, no parental controls are available through the platform.  Should you want to install any parental controls, you must use a separate commercial program.

While the platform’s community guidelines illustrate the “dos and don’ts” of the site, it seems questionable that the platform can monitor millions of users, especially when users are not required to sign up, or to agree to any of Omegle’s terms and conditions.  It, therefore, seems that this site could harbor online predators, raising quite a few issues.

One recent case surrounding Omegle involved a pre-teen who was sexually abused, harassed, and blackmailed into sending a sexual predator obscene content.  In A.M. v. Omegle.com LLC, the open nature of Omegle matched an 11-year-old girl with a sexual predator in his late thirties.  Preying on her susceptibility, he forced the 11-year-old to send pornographic images and videos of herself, to perform for him and other predators, and to recruit other minors.  The predator was able to continue this horrific abuse for three years by threatening to release the videos, pictures, and additional content publicly.  The 11-year-old plaintiff sued Omegle on two general claims of platform liability, and only one of them was able to break through Section 230's immunity.

Unlimited Immunity Cards!

Under 47 U.S.C. § 230 (Section 230), social media platforms are immune from liability for content posted by third parties.  As part of the Communications Decency Act of 1996, Section 230 provides almost full protection against lawsuits for social media companies, since no platform is treated as a publisher or speaker of user-generated content posted on the site.  Section 230 has gone so far as to shield Google and Twitter from liability for claims that their platforms were used to aid terrorist activities.  In May of 2023, these cases reached the Supreme Court.  The Court declined to resolve the Section 230 question in the Google case, but it ruled on the Twitter case.  Google therefore was not held liable on the claim that it stimulated the growth of ISIS through targeted recommendations and inspired an attack that killed an American student, and Twitter was not liable on the claim that the platform aided and abetted a terrorist group in raising funds and recruiting members for a terrorist attack.

Wiping the Slate

In February of 2023, the District Court of Oregon, Portland Division, found that Section 230 immunity did not apply to Omegle for a products liability claim, meaning the platform could be held liable for the predatory actions committed by the third party on the site.  By side-stepping the third-party speech issue that comes with Section 230 immunity for an online publisher, the district court allowed the plaintiff's products liability claim, which targeted the platform's defective design, defective warning, negligent design, and failure to warn, to proceed.

Three prongs must be established for Section 230 to shield a platform from liability:

  1. The defendant is a provider of an interactive computer service,
  2. Whom the claim seeks to treat as a publisher or speaker, and
  3. For information provided by a third party.

It is clear that Omegle is an interactive computer service that fits the definition provided by Section 230.  The issue then falls on the second and third prongs: whether the cause of action treated Omegle as the speaker of third-party content.  The platform's core function of randomly pairing strangers creates the foreseeable danger of pairing a minor with an adult. As shown in the present case, "the function occurs before the content occurs." Because the platform was designed negligently and with knowing disregard for the possibility of harm, the court ultimately concluded that liability for the platform's function does not turn on third-party published content; the claim targeted a specific function rather than users' speech on the platform.  Section 230 immunity therefore did not apply to this first claim, and Omegle could be held liable.

Not MY Speech

The plaintiff's other claim implicating Section 230 immunity was that Omegle negligently failed to apply reasonable precautions to provide a safe platform, given the foreseeable risk of harm in marketing the service to children and adults and randomly pairing them.  Unlike the products liability claim, the negligence claim was twofold: the function of matching people and the publishing of their communications to each other, both of which fall directly into Section 230's immunity domain.  The Oregon District Court drew a distinct line between the two claims: although Omegle was immune under Section 230 on the negligence claim, it remained exposed to liability on the products liability claim.

If You Cannot Get In Through the Front Door, Try the Back Door!

For almost 30 years, social media platforms have been nearly immune from liability on Section 230 issues.  In the last few years, with the growth of technology on these platforms, judges have been looking for loopholes in the law to hold companies liable.  A.M. v. Omegle has only moved through the district court level.  If appealed, it will be an interesting case to follow, to see whether the ruling stands or is overturned alongside the other cases that have been decided.

How do you think a higher court will rule on issues like these?

THE SCHEME BEHIND AN ILLEGAL STREAM

FOLLOW THE STREAM TOWARDS A FELONY

The Protecting Lawful Streaming Act makes it a felony to engage in large-scale streaming of copyrighted material. The law was introduced on December 10, 2020, in response to increased concern surrounding live audio and video streaming in recent years. Such streaming has transformed society, becoming one of the most influential ways people choose to enjoy various forms of content. Yet the growth of legitimate streaming services has continuously been accompanied and disturbed by unlawful streaming of copyrighted materials. Until the Protecting Lawful Streaming Act became law, the illegal streaming of copyrighted material was only a misdemeanor.

Under the Protecting Lawful Streaming Act, a person is subject to felony liability if they:

  1. Act willfully,
  2. For purposes of commercial advantage or private financial gain, and
  3. Offer or provide to the public a digital transmission service that streams copyrighted works.

ALL FOR ONE, ONE FOR ALL

The law's enactment subjects those who indulge in hosting illegal streams to severe criminal penalties. Accordingly, anyone who hosts an illegal stream that both infringes upon copyrighted material and obtains an economic benefit will now face felony charges. Many fail to recognize that while the individual responsible for hosting the illegal stream faces criminal charges, an individual who merely partakes in viewing the infringement does not technically violate any criminal law. Therefore, for illegal streams that host hundreds or even thousands of viewers, no criminal action can be taken, or even threatened, against the spectators. Instead, the focus is entirely on the host of the illegal stream.

PLATFORM ENGINEERING IS PERFECTLY IMPERFECT

The question then becomes: what role does social media play in illegal streaming? For starters, social media platforms serve as one of, if not the, most influential ways illegal streams reach society. Social media platforms are designed to spread information; they not only spread it but can take a piece of information worldwide within seconds. As such, these platforms' engineering does precisely what illegal streaming hosts want: it exposes these streams to millions of individuals who may indulge in and use copyrighted material for their own benefit. Hashtags, likes, shares, and other methods of expansion on social media allow hosts to capitalize on these platforms' designs for their own personal and financial gain.

NOT MY MESS, NOT MY PROBLEM

Social media platforms are not liable for exposure of copyrighted material on their platforms. Under the Digital Millennium Copyright Act, the only requirement is that platforms take prompt action when contacted by rights holders. The statistics thus far, however, show that social media platforms fail to take the initiative and are generally unwilling to address this ongoing concern. The argument on behalf of social media platforms is that the duty falls not on them but on the rights holders to report an infringement. Even so, social media platforms could take far greater initiative to address illegal streaming. While the platforms have at least some measures in place to help prevent infringement of owners' work, the system is flawed, with many unresolved areas of concern. Current measures by themselves fail to provide reassurance that they can protect the content of the actual owner from being exploited for the financial benefit of illegal streaming hosts around the world.

MORE MONEY, MORE PROBLEMS

The question then becomes: how do illegal streaming services impact people? Major entertainment networks such as the NFL, NBA, and UFC are just a few examples of businesses whose most critical revenue stream, television viewership, is threatened by illegal streaming. Movie and non-sports television programming is likewise reported to have lost billions of dollars to illegal streaming. Thus, by enacting the Protecting Lawful Streaming Act, the goal is to deter harmful criminal activity while simultaneously protecting the rights of creators and copyright owners.

Furthermore, the individuals people would least expect to be harmed by illegal streaming are also in jeopardy: the viewers themselves! Illegal streams carry various risks of malicious software that can infect one's device. This exposure puts individuals' personal information at risk and subjects them to several casualties, such as identity fraud, financial loss, and permanent damage to the devices used to watch these illegal streams.

WHAT’S MINE IS YOURS

Society must also recognize and address how individuals can counteract illegal streaming legally, yet unfairly. Consider an individual who legally purchases a pay-per-view event and then live streams it on their social media for others to spectate. Because the stream was lawfully bought and the sharer seeks no financial gain, such conduct may fall outside the Act's felony provisions, yet the same issue arises: the owners of the content are left with no resolution and lose out on potential revenue. Rather than each individual purchasing the content, one purchase is used as a sacrifice while the others reap the same benefit without spending a dime. The same scenario arises when individuals gather in one home to watch a pay-per-view event or a movie on demand. This conduct is not illegal, but it negates the potential revenue these industries might obtain. Such conduct was, is, and will consistently be recognized as legal activity.

AN ISSUE, BUT NOT AN ISSUE WORTH SOLVING

Even streaming platforms like Netflix fail to take meaningful measures against, not illegal streaming of their content, but the sharing of passwords for a single account. Although such conduct can give rise to civil liability for breach of contractual terms, or even criminal liability if fraud is found, these platforms decline to take proper measures against the behavior. Ultimately, moving against it would be too costly and could itself result in losing viewership.

Through these findings, it is clear that illegal streaming has taken, and continues to take, advantage of the actual copyright owners of this material. The Protecting Lawful Streaming Act is society's most recent attempt to minimize this ongoing issue by increasing the criminal penalty to deter such conduct. Yet, given the inability to identify and diminish these illegal streams on social media, many continue to get away with this behavior daily. The legal loopholes discussed above suggest that entertainment industries may never see the revenue stream they anticipate. Only time will tell how society responds to this predicament and whether some law will address it in the foreseeable future. If the law were to hold social media platforms to higher standards of accountability for this conduct, would it make a difference? Even so, would minimizing social media's influence on the spread of illegal streams have a lasting impact?

Destroying Defamation

The explosion of Fake News spread across social media sites is destroying plaintiffs' ability to succeed in defamation actions. The recent proliferation of rushed journalism, online conspiracy theories, and the belief that most stories are, in fact, "Fake News" has created a desert of veracity. Widespread public skepticism about even the most mainstream social media reporting means plaintiffs struggle to convince jurors that third parties believed any reported statement to be true. Such proof is necessary for a plaintiff to prove the elements of defamation.

Fake News Today

Fake News is any journalistic story that knowingly and intentionally includes untrue factual statements. Today, many speak of Fake News as a noun. There is no shortage of examples of Fake News and its impact.

      • Pizzagate: During the 2016 presidential election, Edgar Madison Welch, 28, read a story on Facebook claiming that Hillary Clinton was running a child trafficking ring out of the basement of a pizzeria. Welch, a self-described vigilante, shot open a locked door of the pizzeria with his AR-15.
      • A study by three MIT scholars found that false news stories spread faster on Twitter than true stories, with the former being 70% more likely to be retweeted than the latter.
      • During the defamation trial of Amber Heard and Johnny Depp, a considerable number of "Fake News" reports circulated across social media platforms, particularly TikTok, Twitter, and YouTube, attacking Ms. Heard at a disproportionately higher rate than Mr. Depp.

 

What is Defamation?

To establish defamation, a plaintiff must show the defendant published a false assertion of fact that damages the plaintiff’s reputation. Hyperbolic language or other indications that a statement was not meant to be taken seriously are not actionable. Today’s understanding that everything on the Internet is susceptible to manipulation destroys defamation.

Because the factuality of a statement is a question of law, a plaintiff must first convince a judge that the offending statement is fact and not opinion. Courts often find that Internet and social media statements are hyperbole or opinion. If a plaintiff succeeds in persuading the judge, the issue of whether the statement defamed the plaintiff heads to the jury. A jury faced with a defamation claim must determine whether the statement of fact harmed the plaintiff's reputation or livelihood to the extent that it caused the plaintiff to incur damages. The prevalence of Fake News creates another layer of difficulty for the Internet plaintiff, who must convince the jury that third parties believed the statement to be true.

Defamation’s Slow and Steady Erosion

Since the 1960s, the judiciary has limited plaintiffs' ability to succeed in defamation claims. The decisions in New York Times v. Sullivan and Gertz increased the difficulty for public figures, and those with limited public figure status, to succeed by requiring them to prove actual malice by the defendant, a standard higher than the mere negligence required of plaintiffs who are not of community interest.

The rise of Internet use, mainly social media, presents plaintiffs with yet another hurdle. Plaintiffs can only succeed if the challenged statement is fact, not opinion; however, judges often find that statements made on the Internet are opinions, not facts. The combined effect of Supreme Court limitations on proof and the increased belief that social media posts are mostly opinion has limited plaintiffs' ability to succeed in defamation claims.

Destroying Defamation

If the Supreme Court and social media have eroded defamation, Fake News has destroyed it. Today, convincing a jury that a false statement purporting to be fact has defamed a plaintiff is difficult, given the dual problems of society's mistrust of the media and the understanding that information on the Internet is generally opinion, not fact. Fake News sows confusion and makes it almost impossible for jurors to believe any statement has the credibility necessary to cause harm.

To be clear, in some instances Fake News is so intolerable that a jury will find for the plaintiffs. A Connecticut jury found conspiracy theorist Alex Jones liable for defamation based on his assertion that the government had faked the Sandy Hook shootings.

But often, plaintiffs are unsuccessful where the challenged language is conflated with untruths. Fox News successfully defended itself against a lawsuit claiming that it had aired false and deceptive content about the coronavirus, even though its reporting was, in fact, untrue.

Similarly, a federal judge dismissed a defamation case against Fox News over Tucker Carlson's report that the plaintiff had extorted then-President Donald Trump. In reaching its conclusion, the judge observed that Carlson's comments were rhetorical hyperbole and that the reasonable viewer "arrive[s] with the appropriate amount of skepticism." Reports of media success in defending against defamation claims further fuel media mistrust.

The current polarization caused by identity politics is furthering the tendency for Americans to mistrust the media. Sarah Palin announced that the goal of her recent defamation case against The New York Times was to reveal that the “lamestream media” publishes “fake news.”

If jurors believe that no reasonable person could credit a challenged statement as accurate, they cannot find that the allegedly defamatory statement caused harm, and an essential element of defamation is that the defendant's remarks damaged the plaintiff's reputation. The large number of people who believe the news is fake, the media's rush to publish, and external attacks on credible journalism have made truth itself problematic in the eyes of society. The potential for defamatory harm is minimal when every news story is questionable. Ultimately, the presence of Fake News is a blight on the tort of defamation and, like the credibility of present-day news organizations, will erode it to the point of irrelevance.

Is there any hope for a world without Fake News?

 

Is it HIGH TIME we allow Cannabis Content on Social Media?

 


The Cannabis Industry is Growing like a Weed

Social media provides a relationship between consumers and their favorite brands. Just about every company has a social media presence to advertise its products and grow its brand. Large companies command the advertising market, but smaller companies and one-person startups have their place too. The opportunity to expand a brand using social media is limitless for just about everyone, except the cannabis industry. With the developing struggle between social media companies and the politics of cannabis comes an onslaught of problems facing the modern cannabis market. With recreational marijuana use legal in 21 states and Washington, D.C., and medical marijuana legal in 38 states, it may be time for this community to join the social media metaverse.

We know now that algorithms determine how many followers on a platform see a business's content, whether or not the content is permitted, and whether the post or the user should be deleted. The legal cannabis industry has found itself, like legislators, in a struggle with social media giants (like Facebook, Twitter, and Instagram) for increased transparency about their internal processes for filtering information, banning users, and moderating their platforms. Mainstream cannabis businesses have long been prevented from making their presence known on social media; legitimate businesses are placed in a box with illicit drug users and prevented from advertising on public social media sites. The legal cannabis industry is expected to be worth over $60 billion by 2024, and support for federal legalization is at an all-time high (68%). Now more than ever, brands are fighting for higher visibility amongst cannabis consumers.

Recent Legislation Could Open the Door for Cannabis

The question remains whether legal cannabis businesses have a place in the ever-changing landscape of the social media metaverse. Marijuana is currently a Schedule I substance under the Controlled Substances Act (1970), a categorization meaning that it has no currently accepted medical use and a high potential for abuse. While that definition may have seemed acceptable when cannabis was placed on the DEA's list back in 1971, evidence has since been presented in opposition to that decision. Historians note that overt racism, combined with New Deal reforms and bureaucratic self-interest, is often blamed for the first round of federal cannabis prohibition under the Marihuana Tax Act of 1937, which restricted possession to those who paid a steep tax for a limited set of medical and industrial applications. The legitimacy of cannabis businesses over the past few decades, built on individual state legalization (both medical and recreational), is at the center of the debate over the opportunity to market as any other business can. Legislation like the MORE Act (Marijuana Opportunity Reinvestment and Expungement Act), which was passed by the House of Representatives, gives companies some hope that they may one day be seen as legitimate businesses. If passed into law, marijuana would be moved down or removed from the schedule entirely, which would blow the hinges off the cannabis industry; legitimate businesses in states that have legalized its use are patiently waiting in the wings for that moment.

States like New York have made great strides in passing legislation to legalize marijuana the "right" way and legitimize business, while simultaneously separating themselves from the illegal and dangerous drug trade that has parasitically attached itself to this movement. The Marihuana Regulation and Taxation Act (MRTA) establishes a new framework for the production and sale of cannabis, creates a new adult-use cannabis program, and expands the existing medical cannabis and cannabinoid (CBD) hemp programs. The MRTA also established the Office of Cannabis Management (OCM), the governing body for cannabis reform and regulation, particularly for emerging businesses that wish to establish a presence in New York. The OCM oversees the licensure, cultivation, production, distribution, sale, and taxation of medical, adult-use, and cannabinoid hemp products within New York State. This sort of regulatory body and structure is becoming commonplace in a space once deemed a regulatory "wild west."

 

But, What of the Children?

In light of all the regulation slowly surrounding cannabis businesses, will the rapidly growing social media landscape have to concede to the demands of the industry and recognize its presence? Even with regulations, cannabis exposure remains a concern for many regarding the more impressionable members of the user pool. Children and young adults are spending more time than ever online and on social media. On average, daily screen use went up among tweens (ages 8 to 12) to five hours and 33 minutes from four hours and 44 minutes, and among teens (ages 13 to 18) to eight hours and 39 minutes from seven hours and 22 minutes. This group of social media consumers is of particular concern to both legislators and the social media companies themselves.

The MRTA offers protection against companies advertising with the intent of looking like common brands marketed to children. Companies are restricted to using their name and their logo, with explicit language that the item inside the wrapper contains cannabis or Tetrahydrocannabinol (THC). Between MRTA restrictions, strict community guidelines from several social media platforms, and government regulations around the promotion of marijuana products, many brands are having a hard time building their communities' presence on social media.

Cannabis companies have resorted to creating their own platforms to promote the content they are prevented from blasting on other sites. Big-name rapper and cannabis enthusiast Berner, who created the popular edible brand "Cookies," has been approached to partner with the creators to bolster their brands and raise awareness. Unfortunately, the sites became what mainstream social media sites feared in creating their guidelines: an unsavory haven for illicit drug use and other illegal behavior. One of the pioneering apps in this field, Social Club, was removed from the app store after multiple reports of illegal behavior. The apps have since been more internally regulated but have not taken off as their creators intended, and legitimate cannabis businesses are still blocked from advertising on mainstream apps.

These Companies Won’t go Down Without a Fight

While cannabis companies generally are not allowed on social media sites, special rules apply if a legal cannabis business does maintain a presence on one. Social media is the fastest and most efficient way to advertise to a desired audience, and with appropriate regulatory oversight and within the confines of the changing law, social media sites may start to feel pressure to allow more advertising from cannabis brands.

A petition has been started to bring META, the company that owns Facebook and Instagram among other sites, to the table to discuss the growing frustrations with the strict restrictions on its social media platforms. The petition on Change.org has managed to amass 13,000 signatures. Arden Richard, the founder of WeedTube, has been outspoken about the issues, saying, "This systematic change won't come without a fight. Instagram has already begun deleting posts and accounts just for sharing the petition." He also stated, "The cannabis industry and community need to come together now for these changes and solutions to happen." If not, he fears, "we will be delivering this industry into the hands of mainstream corporations when federal legalization happens."

Social media companies recognize the magnitude of the legal cannabis community, as they have been banning its content nonstop since its inception. However, the changing landscape of the cannabis industry has made the decision to ban that content more difficult. Until federal regulation changes, businesses operating in states that have legalized cannabis will continue to be banned by the largest advertising platforms in the world.

 

Memes, Tweets, and Stocks . . . Oh, My!

 

Pop-Culture’s Got A Chokehold on Your Stocks

In just three short weeks early in January 2021, Reddit meme-stock traders bought up enough of GameStop's stock to increase its value from a mere $17.25 per share to $325 a pop, an increase of almost 1,800%. In light of this, hedge funds like New York's Melvin Capital Management were left devastated; some smaller hedge funds even went out of business.
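For readers who want to check the math, the figure follows directly from the two share prices quoted above:

\[
\frac{\$325 - \$17.25}{\$17.25} \times 100\% \approx 1{,}784\%
\]

which rounds to the "almost 1,800%" increase described here.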

Because Melvin was holding its GameStop stock in a short position (a trading technique in which the trader sells a borrowed security with the plan to buy it back later, at a lower cost, in an anticipated short-term drop), the fund lost over 50% of its value, which translated to nearly $7 billion, in just under a month.
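To see why a short position bleeds money in a squeeze, consider a deliberately simplified hypothetical (the share count is invented for illustration and is not Melvin's actual position): a trader shorts 1,000 GameStop shares at $17.25, expecting the price to fall, and is instead forced to buy them back at $325:

\[
\text{loss} = (\$325 - \$17.25) \times 1{,}000 = \$307{,}750
\]

That is a loss of roughly eighteen times the $17,250 the borrowed shares originally brought in; because there is no ceiling on how high a price can climb, a short seller's potential loss is unbounded.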

Around 2015, a new and free online trading platform geared toward a younger generation emerged in Robinhood. Its mission was simple: "democratize" finance. By putting the capacity to understand and participate in trading into people's own hands, without the need for an expensive broker, Robinhood made investing accessible to the masses. However, the very essence of Robinhood putting power back in the hands of the people was also what caused a halt in GameStop's takeover rise. After three weeks, Robinhood had to cease all buying and selling of GameStop shares and options because the sheer volume of trading had exceeded its cash-on-hand capacity, the collateral that regulators require it to hold to function as a legal trade exchange.

But what exactly is a meme-stock? For starters, a meme is an idea or element of pop-culture that spreads and intensifies across people's minds. As social media has increased in popularity, viral pop-culture references and trends have as well. Memes allow people to instantaneously spread videos, tweets, pictures, or posts that are humorous, interesting, or sarcastic, which in turn go viral. Meme-stocks therefore originate on the internet, usually in sub-Reddit threads, where users work together to identify a target stock and then promote it. Promoting a meme stock is largely about driving the price up, squeezing short sellers like those described above, with promoters buying, holding, selling, and rebuying as prices fluctuate to turn a profit.

GameStop is not the first, and certainly not the last, stock to be traded in this fashion. But it represents an important shift in the power of social media and its ability to affect the stock market. Another example of the power meme-culture can have on real-world finances and the economy is Dogecoin.

Dogecoin was created as a satirical new currency, in a way mocking the hype around existing cryptocurrencies. But its positive reception and bolstered interest on social media turned the joke crypto into a practical reality. This "fun" version of Bitcoin was celebrated, listed on the crypto exchange Binance, and even cryptically endorsed by Elon Musk. More recently, in 2021, the cinema chain AMC announced it would accept Dogecoin in exchange for digital gift card purchases, further bolstering the credibility of this meme-originated cryptocurrency.

Tricks of the Trade, Play at Your Own Risk

Stock trading is governed by the Securities Act of 1933, which boils down to two basic objectives: (1) to require that investors receive financial and other material information concerning securities being offered for public sale; and (2) to prohibit deceit, misrepresentations, and other fraud in the sale of securities. Before most securities can be bought, sold, or traded, they must first be registered with the SEC; the primary goal of registration is to facilitate information disclosures, so investors are informed before engaging. Additionally, the Securities Exchange Act of 1934 provides the SEC with broad authority over the securities industry, empowering it to regulate, register, and oversee brokerage firms, agents, and self-regulatory organizations (SROs). Other regulations at play include the Investment Company Act of 1940 and the Investment Advisers Act of 1940, which regulate investment companies and investment advisers, respectively. These Acts require that firms and agents who receive compensation for their advising practices be registered with the SEC and adhere to certain qualifications and strict guidelines designed to promote fair, informed investment decisions.

Cryptocurrency has over the years grown from a speculative investment into a new class of assets, and regulation is imminent. The Biden Administration recently added some clarification on crypto use and its regulation through a new directive assigning authority to the SEC and the Commodity Futures Trading Commission (CFTC), which were already the prominent market regulators. And in the recent Ripple Labs lawsuit, the SEC began making strides toward regulating cryptocurrency by working to classify it as a security, which would bring crypto into its domain of regulation.

Consequently, the SEC's Office of Investor Education and Advocacy has adapted with the times and now cautions against making any investment decisions based solely on information seen on social media platforms. Because social media has become integral to our daily lives, investors increasingly rely on and turn to it for information when deciding when, where, and in what to invest. This has increased the likelihood of scams, fraud, and other misinformation-driven harms, which can arise through fraudsters disseminating false information anonymously or impersonating someone else.

 

However, there is also an increasing concern with celebrity endorsements and testimonials regarding investment advice. The most common types of social media online scams are impersonation schemes and fake crypto investment advertisements.

 

With this rise in social media use, the laws governing investment advertisements and information are continuously developing. Regulation FD (Fair Disclosure) governs the selective disclosure of information by publicly traded companies. Reg. FD prescribes that when an issuer discloses any material, nonpublic information to certain individuals or entities, it must also make a public disclosure of that information. In 2008, the SEC issued guidance allowing information to be distributed on websites so long as shareholders, investors, and the market in general were aware that the site was the company's "recognized channel of distribution." In 2013, this was amended to allow publishing earnings and other material information on social media, provided that investors knew to expect it there.

This clarification came in light of the controversial boast by Netflix co-founder and CEO Reed Hastings on Facebook that Netflix viewers had consumed one billion hours of watch time in a single month. Hastings’s Facebook page had never previously disclosed performance stats, so investors were not on notice that this type of potentially material information, relevant to their investment decisions, would be located there. Hastings also failed to immediately remedy the situation with a public disclosure of the same information via a press release or Form 8-K filing.

In the same vein, a company’s employees may also face consequences if they like or share a post, publish a third-party link, or friend certain people without permission, if any of those actions could be viewed as an official endorsement or means of information dissemination.

The SEC requires that certain company information be accompanied by a disclosure or cautionary disclaimer statement. Section 17(b) of the 1933 Act, more commonly known as the Anti-Touting provision, requires that any securities endorsement be accompanied by a disclosure of the “nature, source, and amount of any compensation paid, directly or indirectly, by the company in exchange for such endorsement.”

To Trade, or Not to Trade? Let Your Social Media Feed Decide

With the emergence of non-professional trading platforms like Robinhood, low-cost financial technology has put investing in the hands of younger users. Likewise, the rise of Bitcoin and blockchain technologies in the early-to-mid 2010s has changed the way financial firms must think about and approach new investors. The discussion of investments and information sharing that happens on these online forums creates an environment ripe for breeding misinformation. Social media sites are vulnerable to information problems for several reasons. For starters, which posts gain attention cannot always be predicted in advance—if the wrong post goes viral, hundreds, thousands, or even millions of users may read improper recommendations. Algorithmic rabbit holes also risk steering users toward extremist views, with strategically placed ads pushing them further down this spiral.

Additionally, the presence of fake or spam-based accounts and internet trolls poses an ever more difficult problem to contain. Lastly, influencers can sway large groups of followers by mindlessly promoting or interacting with bad information, or by failing to properly disclose required information. There are many other risks at play, but “herding” remains one of the largest. Jeff Kreisler, Head of Behavioral Science at J.P. Morgan Chase, explains that:

“Herding has been a common investment trap forever. Social media just makes it worse because it provides an even more distorted perception of reality. We only see what our limited network is talking about or promoting, or what news is ‘trending’ – a status that has nothing to do with value and everything to do with hype, publicity, coolness, selective presentation and other things that should have nothing to do with our investment decisions.”

This shift to a digital lifestyle and reliance on social media for information has played a key role in how information reaches investors making decisions. Nearly 80% of institutional investors now use social media as part of their daily workflow. Of those, about 30% admit that information gathered on social media has in some way influenced an investment recommendation or decision, and another third maintain that they made at least one change to their investments as a direct result of announcements they saw on social media. In 2013, the SEC began to allow publicly traded companies to report news and earnings via their social media platforms, which has increased the flow of information to investors on these platforms. Social media also now plays a large role in financial literacy for younger generations.

The Tweet Heard Around the Market

A notable and recent example of how powerful social media warriors and internet trolls can be in relation to a company’s stock came just days after Elon Musk’s acquisition of Twitter, and only hours after the launch of his pay-for-verification Twitter Blue debacle. Insulin manufacturer Eli Lilly saw a stark drop in its stock value after a fake parody account was created under the guise of its name and tweeted out that “insulin is now free.”

The account, operating under the Twitter handle @EliLillyandCo, bought a blue check mark and appended the same logo as the real company to its profile, making it almost indistinguishable from the real thing. Consequently, the actual Eli Lilly corporate account had to tweet out an apology “to those who have been served a misleading message from a fake Lilly account,” clarifying that “Our official Twitter account is @Lillypad.”

This is a perfect example, for Elon Musk and other major companies and CEOs alike, of just how powerful pop culture, meme culture, and internet trolls can be: armed with $8 and a single tweet, a parody account casually dropped the stock of a multi-billion-dollar pharmaceutical company almost 5% in a matter of hours.

So, what does all this mean for the future of digital finance? It’s difficult to say exactly where we might be headed, but social media’s growing tether on all facets of our lives leaves much open for new regulation. Consumers should be cautious when scrolling through investment-related material, and providers should be transparent about their relationships and goals in promoting any such materials. Social media is here to stay, but its regulation and use are still up for grabs.

The Rise of E-personation

Social media allows millions of users to communicate with one another on a daily basis, but do you really know who is behind the computer screen?

As social media continues to expand into the enormous entity we know today, users become ever more susceptible to abuse online. Impersonation through electronic means, often referred to as e-personation, is a rapidly growing trend on social media. E-personation is extremely troublesome because it requires far less information than other typical forms of identity theft. To create a fake social media page, all an e-personator needs is the victim’s name and maybe a profile picture. While creating a fake account is relatively easy for the e-personator, the impact on the victim’s life can be detrimental.

E-personation Under State Law

It wasn’t until 2008 that New York became the first state to recognize e-personation as a criminally punishable form of identity theft. Under New York law, “a person is guilty of criminal impersonation in the second degree when he … impersonates another by communication by internet website or electronic means with intent to obtain a benefit or injure or defraud another, or by such communication pretends to be a public servant in order to induce another to submit to such authority or act in reliance on such pretense.”

Since 2008, other states, such as California, New Jersey, and Texas, have also amended their identity theft statutes to include online impersonation as a criminal offense. New Jersey amended its impersonation and identity theft statute in 2014, after an e-personation case revealed that the then-current statute lacked any mention of “electronic communication” as a means of unlawful impersonation. In 2011, New Jersey Superior Court Judge David Ironson in Morris County declined to dismiss an indictment for identity theft against Dana Thornton. Ms. Thornton allegedly created a fictitious Facebook page that portrayed her ex-boyfriend, a narcotics detective, unfavorably. On the Facebook page, Thornton, pretending to be her ex, posted admissions to hiring prostitutes, using drugs, and even contracting a sexually transmitted disease. Thornton’s defense counsel argued that New Jersey’s impersonation statute was not applicable because online impersonation was not explicitly mentioned in the statute, and therefore Thornton’s actions did not fall within the scope of activity the statute proscribes. Judge Ironson disagreed, noting the New Jersey statute is “clear and unambiguous” in forbidding impersonation activities that cause injury and does not need to specify the means by which the injury occurs.

Currently under New Jersey law, a person is guilty of impersonation or theft of identity if … “the person engages in one or more of the following actions by any means, but not limited to, the use of electronic communications or an internet website:”

    1. Impersonates another or assumes a false identity … for the purpose of obtaining a benefit for himself or another or to injure or defraud another;
    2. Pretends to be a representative of some person or organization … for the purpose of obtaining a benefit for himself or another or to injure or defraud another;
    3. Impersonates another, assumes a false identity or makes a false or misleading statement regarding the identity of any person, in an oral or written application for services, for the purpose of obtaining services;
    4. Obtains any personal identifying information pertaining to another person and uses that information, or assists another person in using the information … without that person’s authorization and with the purpose to fraudulently obtain or attempt to obtain a benefit or services, or avoid the payment of debt … or avoid prosecution for a crime by using the name of the other person; or
    5. Impersonates another, assumes a false identity or makes a false or misleading statement, in the course of making an oral or written application for services, with the purpose of avoiding payment for prior services.

As social media continues to grow, it is likely that more state legislatures will amend their impersonation and identity theft statutes to incorporate e-personation.

E-personators’ Twitter Takeover

Over the last week, e-personation has erupted into chaos on Twitter. Elon Musk bought Twitter on October 27, 2022, for $44 billion. He immediately began firing top Twitter executives, including the chief executive and chief financial officer. With the company on the verge of bankruptcy, Elon needed a plan to generate more subscription revenue. Thus, the problematic Twitter Blue subscription was created. Under the Twitter Blue policy, users could purchase a subscription for $8 a month and receive the blue verification check mark next to their Twitter handle.

The unregulated distribution of the blue verification check mark has led to chaos on Twitter by allowing e-personators to run amok. Traditionally, the blue check mark has been a symbol of authentication for celebrities, politicians, news outlets, and other companies; it was created to protect those most susceptible to e-personation. When the rollout of Twitter Blue began on November 9, 2022, the policy did not specify any requirements for verifying a user’s authenticity beyond payment of the monthly fee.

Shortly after the rollout, e-personators began to take advantage of their newly purchased verification subscriptions by impersonating celebrities, pharmaceutical companies, politicians, and even the new CEO of Twitter, Elon Musk. For example, comedian Kathy Griffin was one of the first users suspended after Twitter Blue’s launch, for changing her Twitter name and profile photo to Elon Musk’s and impersonating the new CEO. Kathy was not the only Twitter user to impersonate Elon, and in response Elon tweeted, “Going forward, any Twitter handles engaging in impersonation without clearly specifying ‘parody’ will be permanently suspended.”

Elon’s threats of permanent suspension did not stop e-personators from trolling on Twitter. One e-personator used their blue check verification to masquerade as Eli Lilly and Company, the American pharmaceutical company. The fake Eli Lilly account tweeted that the company would be providing free insulin to its customers; the real Eli Lilly account tweeted an apology shortly thereafter. Another e-personator used their verification to impersonate former United States President George W. Bush. The fake Bush account tweeted “I miss killing Iraqis” along with a sad face emoji. The e-personators did not stop there; many more professional athletes, politicians, and companies were impersonated under the new Twitter Blue subscription policy. An internal Twitter log seen by the New York Times indicated that 140,000 accounts had signed up for the new Twitter Blue subscription. It is unlikely that Elon will be able to discover every e-personator account and remedy this spread of misinformation.

Twitter’s Terms and Conditions

Before the rollout of Twitter Blue, Twitter’s guidelines included a policy for misleading and deceptive identities. Under Twitter’s policy, “you may not impersonate individuals, groups, or organizations to mislead, confuse, or deceive others, nor use a fake identity in a manner that disrupts the experience of others on Twitter.” The guidelines further explain that impersonation is prohibited, specifically: “you can’t pose as an existing person, group, or organization in a confusing or deceptive manner.” Based on the terms of Twitter’s guidelines, the recent e-personators are in direct violation of Twitter’s policy, but are these users also criminally liable?

Careful, You Could Get a Criminal Record

Social media networks, such as Facebook, Instagram, and Twitter, have little incentive to protect the interests of individual users because they cannot be held liable for anything their users post. Under Section 230 of the Communications Decency Act, “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Because of the lack of responsibility placed on social media platforms, victims of e-personation often have a hard time getting the fake online presence removed. Ironically, in order to gain control of an e-personator’s fake account, the victim must provide the social media platform with confidential identifying information, while the e-personator effectively remains anonymous.

By now you’re probably asking yourself: what about the e-personators’ criminal liability? Under some state statutes, like those mentioned above, e-personators can be found criminally liable. However, several barriers limit the effectiveness of these prosecutions. For example, e-personators maintain great anonymity, so finding the actual person behind the fake account can be difficult. Furthermore, many of the state statutes that criminalize e-personation require proving the perpetrator’s intent, which may also pose a challenge for prosecutors. Lastly, social media is a global phenomenon, which means jurisdictional issues will arise when bringing these cases to court. Unfortunately, only a minority of states have amended their impersonation statutes to include e-personation. Hopefully, as social media continues to grow, more states will follow suit and e-personation will be prosecuted more efficiently and effectively. Remember, not everyone on social media is who they claim to be, so be cautious.

I Knew I Smelled a Rat! How Derivative Works on Social Media Can “Cook Up” Infringement Lawsuits

If you have spent more than 60 seconds scrolling on social media, you have undoubtedly been exposed to short clips or “reels” that often reference pop culture elements that may be protected intellectual property. While seemingly harmless, it is possible that the clips you see on various platforms are infringing on another’s copyrighted work. Oh, rats!

What Does Copyright Law Tell Us?

Copyright protection, codified in 17 U.S.C. §102, extends to “original works of authorship fixed in any tangible medium of expression.” It refers to your right, as the original creator, to make copies of, control, and reproduce your own original content, and it applies to any created work that is reduced to a tangible medium. Some examples of copyrightable material include, but are not limited to, literary works, musical works, dramatic works, motion pictures, and sound recordings.

Additionally, among the rights a copyright holder enjoys is the right to make derivative works from the original. Codified in 17 U.S.C. §101, a derivative work is “a work based upon one or more preexisting works, such as a translation, musical arrangement, dramatization, fictionalization, motion picture version, sound recording, art reproduction, abridgment, condensation, or any other form in which a work may be recast, transformed, or adapted. A work consisting of editorial revisions, annotations, elaborations, or other modifications which, as a whole, represent an original work of authorship, is a ‘derivative work’.” This means the copyright owner of the original work also reserves the right to make derivative works, and may therefore bring a lawsuit against someone who creates a derivative work without permission.

Derivative Works: A Recipe for Disaster!

The issue of regulating derivative works has only intensified with the growth of cyberspace and “fandoms.” A fandom is a community or subculture of fans built up around one specific piece of pop culture, whose members share a mutual bond over their enthusiasm for the source material. Fandoms can also be composed of fans who actively participate in and engage with the source material through creative works, which is made easier by social media. Historically, fan works have been deemed legal under the fair use doctrine, which provides that some copyrighted material can be used without legal permission for purposes such as scholarship, education, parody, or news reporting, so long as the copyrighted work is only used to the extent necessary. Fair use can also apply to a derivative work that significantly transforms the original copyrighted work, adding a new expression, meaning, or message. So, that means that “anyone can cook,” right? …Well, not exactly! The new, derivative work cannot have an economic impact on the original copyright holder. That is, profits cannot be “diverted to the person making the derivative work” when the revenue could or should have gone to the original copyright holder.

With the increased use of “sharing” platforms such as TikTok, Instagram, and YouTube, it has become increasingly easy to share or distribute intellectual property via monetized accounts. Specifically, due to the large amount of content consumed daily on TikTok, its users are incentivized with the ability to go “viral” instantly, if not overnight, as well as the ability to earn money through the platform’s “Creator Fund.” The Creator Fund is paid for by the TikTok ads program, and it allows creators to get paid based on the number of views they receive. This creates a problem: now that users are getting paid for their posts, the line between what is fair use and what is a violation of copyright law is blurred. The Copyright Act fails to address the monetization of social media accounts and how it fits into a fair use analysis.

Ratatouille the Musical: Anyone Can Cook?

Back in 2020, TikTok users Blake Rouse and Emily Jacobson were the first of many to release songs based on Disney-Pixar’s 2007 film, Ratatouille. What started out as a fun trend for users to participate in turned into a full-fledged viral project and, eventually, a tangible creation. Big-name Broadway stars including André De Shields, Wayne Brady, Adam Lambert, Mary Testa, Kevin Chamberlin, Priscilla Lopez, and Tituss Burgess all participated in the trend, and on December 9, 2020, it was announced that Ratatouille was coming to Broadway via a virtual benefit concert.

Premiering as a one-night livestream event on January 1, 2021, the concert donated all profits to the Entertainment Community Fund (formerly the Actors Fund), a non-profit organization that supports performers and workers in the arts and entertainment industry. It initially streamed in over 138 countries and raised over $1.5 million for the charity. Due to its success, an encore production was streamed on TikTok ten days later, raising an additional $500,000 for the fund (totaling $2 million). While this is unarguably a derivative work, the question of fair use was never tested, because Disney’s lawyers were smart enough not to sue. In fact, Disney embraced the Ratatouille musical, releasing a statement to The Verge:

Although we do not have development plans for the title, we love when our fans engage with Disney stories. We applaud and thank all of the online theatre makers for helping to benefit The Actors Fund in this unprecedented time of need.

Normally, Disney is EXTREMELY strict and protective of its intellectual property. However, this small change of heart has now opened a door for other TikTok creators and fandom members to create unauthorized derivative works based on others’ copyrighted material.

Too Many Cooks in the Kitchen!

Take the “Unofficial Bridgerton Musical,” for example. In July of 2022, Netflix sued content creators Abigail Barlow and Emily Bear for their unauthorized use of Netflix’s original series Bridgerton, itself based on the Bridgerton book series by Julia Quinn. Back in 2020, Barlow and Bear began writing and uploading songs based on the Bridgerton series to TikTok for fun. Needless to say, the videos went viral, prompting Barlow and Bear to release an entire musical soundtrack based on Bridgerton. They even went on to win the 2022 Grammy Award for Best Musical Theater Album.

On July 26, Barlow and Bear staged a sold-out performance at the Kennedy Center in Washington, D.C., with tickets ranging from $29 to $149, and also sold merchandise that included the “Bridgerton” trademark. Netflix then sued, demanding an end to these for-profit performances. Interestingly enough, Netflix was allegedly on board with Barlow and Bear’s project initially. Although Barlow and Bear’s conduct began on social media, the complaint alleges they “stretched fanfiction way past its breaking point.” According to the complaint, Netflix “offered Barlow & Bear a license that would allow them to proceed with their scheduled live performances at the Kennedy Center and Royal Albert Hall, continue distributing their album, and perform their Bridgerton-inspired songs live as part of larger programs going forward,” which Barlow and Bear refused. Netflix also alleged that the musical interfered with its own derivative work, the “Bridgerton Experience,” an in-person pop-up event that has been offered in several cities.

Unlike Ratatouille: The Musical, which was created to raise money for a non-profit organization benefiting actors during the COVID-19 pandemic, the Unofficial Bridgerton Musical lined the pockets of its creators, Barlow and Bear, in an effort to build an international brand for themselves. Netflix ended up privately settling the lawsuit in September of 2022.

Has the Aftermath Left a Bad Taste in IP Holders’ Mouths?

The stage has been set, and courts have yet to determine exactly how fan-made derivative works play out in a fair use analysis. New technologies only exacerbate this issue with the monetization of social media accounts and “viral” trends. At a certain point, no matter how much you want to root for the “little guy,” you have to admit when they’ve gone too far. Average “fan art” does not derive significant profits off the original work, and it is very rare that a large company will take legal action against a small content creator unless the infringement is so blatant and explicit that there is no other choice. IP law exists to protect and enforce the rights of the creators and owners who have worked hard to secure those rights. Allowing content creators to infringe in the name of “fair use” poses a dangerous threat to intellectual property law and those it serves to protect.
