Privacy Please: Privacy Law, Social Media Regulation and the Evolving Privacy Landscape in the US

Social media regulation is a touchy subject in the United States. Congress and the White House have proposed, advocated for, and voted on various bills aimed at protecting people from data misuse and misappropriation, misinformation, harms suffered by children, and the broader implications of vast data collection. Some of the most potent concerns about social media stem from the platforms' use and misuse of information: the method of collection, the notice given of that collection, and the use of the collected information. Efforts to pass a bill regulating social media have been frustrated, primarily by the First Amendment right to free speech, and Congress has thus far failed to enact meaningful regulation of social media platforms.

The way forward may well be through privacy law. Privacy laws give people some right to control their own personhood, including their data, their right to be left alone, and how and when others see and view them. Privacy law originated in its current form in the late 1800s, with the impetus being freedom from constant surveillance by paparazzi and reporters and the right to control one's own personal information. As technology evolved, our understanding of privacy rights grew to encompass rights in our likeness, our reputation, and our data. Current US privacy laws do not directly address social media, and a struggle is currently playing out among the vast data collection practices of the platforms, the immunity platforms enjoy under Section 230, and users' private rights of privacy.

There is very little federal privacy law, and that which does exist is narrowly tailored to specific purposes and circumstances in the form of specific bills. Some states have enacted their own privacy law schemes, with California at the forefront and Virginia, Colorado, Connecticut, and Utah following in its footsteps. In the absence of a comprehensive federal scheme, privacy law is often judge-made, and it offers several private rights of action for a person whose right to be left alone has been invaded in some way. These are tort actions available for one person to bring against another for a violation of their right to privacy.

Privacy Law Introduction

Privacy law policy in the United States is premised on three fundamental personal rights to privacy:

  1. Physical privacy – the right to control your own information.
  2. Privacy of decisions – such as decisions about sexuality, health, and child-rearing. These are the constitutional rights to privacy; they typically concern not information but an act that flows from the decision.
  3. Proprietary privacy – the ability to protect your information from being misused by others in a proprietary sense.

Privacy Torts

Privacy law, as it concerns the individual, gives rise to four separate tort causes of action for invasion of privacy:

  1. Intrusion upon Seclusion – One intentionally intrudes, physically or otherwise, upon the reasonable expectation of seclusion of another, and the intrusion is objectively highly offensive.
  2. Publication of Private Facts – One gives publicity to a matter concerning the private life of another that is not of legitimate concern to the public, and the matter publicized would be objectively highly offensive. The First Amendment provides a strong defense for publication of truthful matters when they are considered newsworthy.
  3. False Light – One gives publicity to a matter concerning another that places the other before the public in a false light, when the false light would be objectively highly offensive and the actor had knowledge of, or acted in reckless disregard as to, the falsity of the publicized matter and the false light in which the other would be placed.
  4. Appropriation of Name or Likeness – One appropriates another's name or likeness to the defendant's own use or benefit. The appropriation is usually commercial in nature but need not be; it can extend to one's "identity," including the reputation, prestige, social or commercial standing, public interest, or other value attached to the plaintiff's likeness. There is no appropriation when a person's picture is used to illustrate a non-commercial, newsworthy article.

These private rights of action are currently unavailable against social media platforms because of Section 230 of the Communications Decency Act, which provides broad immunity to online providers for posts on their platforms. Section 230 prevents any of the privacy torts from being raised against social media platforms.

The Federal Trade Commission (FTC) and Social Media

Privacy law can implicate social media platforms when their practices become unfair or deceptive to the public, through investigation by the Federal Trade Commission (FTC). The FTC is the only federal agency with both consumer protection and competition jurisdiction over broad sectors of the economy, and it investigates business practices that are unfair or deceptive. The FTC Act, 15 U.S.C. § 45, prohibits "unfair or deceptive acts or practices in or affecting commerce" and grants the FTC broad jurisdiction over the privacy practices of businesses. A trade practice is unfair if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers themselves and is not outweighed by countervailing benefits to consumers or competition. A deceptive act or practice is a material representation, omission, or practice that is likely to mislead a consumer acting reasonably in the circumstances, to the consumer's detriment.

Critically, there is no private right of action under the FTC Act. The FTC cannot impose fines for initial Section 5 violations but can obtain injunctive relief. By design, the FTC has very limited rulemaking authority, and it looks to consent decrees and procedural, long-lasting relief as the ideal remedy. The FTC pursues several types of misleading or deceptive policies and practices that implicate social media platforms: notice and choice paradigms, broken promises, retroactive policy changes, inadequate notice, and inadequate security measures. Its primary objective is to negotiate a settlement in which the company submits to certain measures of control and oversight by the FTC for a set period of time. Violations of these agreements can yield additional consequences, including steep fines and vulnerability to class action lawsuits.

As to social media platforms, the FTC has investigated misleading terms and conditions and violations of platforms' own policies. In In re Snapchat, the platform claimed that users' posted information disappeared completely after a certain period of time; in fact, through third-party apps and manipulation of users' posts off the platform, posts could be retained. The FTC and Snapchat settled through a consent decree subjecting Snapchat to FTC oversight for 20 years.

The FTC has also investigated Facebook for violating its own privacy policy. To settle FTC charges that it violated a 2012 agreement with the agency, Facebook was ordered to pay a $5 billion penalty and to submit to new restrictions and a modified corporate structure that holds the company accountable for the decisions it makes about its users' privacy.

Unfortunately, none of these measures directly give individuals more power over their own privacy. Nor do these policies and processes give individuals any right to hold platforms responsible for being misled by algorithms using their data, or for intrusion into their privacy by collecting data without allowing an opt-out.

Some of the most harmful social media practices today relate to personal privacy. Some examples include the collection of personal data, the selling and dissemination of data through the use of algorithms designed to subtly manipulate our pocketbooks and tastes, collection and use of data belonging to children, and the design of social media sites to be more addictive- all in service of the goal of commercialization of data.

No comprehensive federal privacy scheme exists. Previous privacy bills have been few and narrowly tailored to relatively specific circumstances and topics: healthcare and medical data protection under HIPAA, protection of data surrounding video rentals under the Video Privacy Protection Act, and narrow protection for children's data under the Children's Online Privacy Protection Act. All of these schemes are outdated and fall short of meeting the immediate need for broad protection of the data social media platforms widely collect and broadly utilize.

Current Bills on Privacy

Upon requests from some of the biggest platforms, outcry from the public, and the White House's call for federal privacy regulation, Congress appears poised to act. The 118th Congress has made privacy law a priority this term by introducing several bills related to social media privacy. At least ten bills are currently pending between the House and the Senate, addressing issues ranging from children's data privacy to a minimum age for use and the designation of a new agency to monitor aspects of privacy.

S. 744 – The Data Care Act of 2023 aims to protect social media users' data privacy by imposing fiduciary duties on the platforms. The original iteration of the bill was introduced in 2021 and failed to receive a vote; it was re-introduced in March of 2023 and is currently pending. Under the act, social media platforms would have duties to reasonably secure users' data from access, to refrain from using the data in a way that could foreseeably "benefit the online service provider to the detriment of the end user," and to prevent disclosure of users' data unless the receiving party is also bound by these duties. The bill authorizes the FTC and certain state officials to take enforcement actions upon breach of those duties: states would be permitted to take their own legal action against companies for privacy violations, and the FTC could intervene in enforcement efforts by imposing fines for violations.

H.R. 2701 – Perhaps the most comprehensive piece of legislation on the House floor is the Online Privacy Act. The bill was reintroduced in 2023 by Democrat Anna Eshoo after an earlier version failed to receive a vote and died in Congress. The Online Privacy Act aims to protect users by providing individuals rights relating to the privacy of their personal information, and it would impose privacy and security requirements on the treatment of personal information. To accomplish this, the bill would establish a new agency, the Digital Privacy Agency, responsible for enforcing those rights and requirements. The new individual privacy rights are broad and include rights of access, correction, and deletion, human review of automated decisions, individual autonomy, the right to be informed, and the right to impermanence, among others. This would be the most comprehensive plan to date, and the establishment of an agency tasked specifically with administering and enforcing privacy law would be incredibly powerful. The creation of such an agency would be valuable irrespective of whether this bill is passed.

H.R. 821 – The Social Media Child Protection Act is a sister bill to one of a similar name that originated in the Senate. This bill aims to protect children from the harms of social media by limiting children's access to it. Under the bill, social media platforms would be required to verify the age of every user before that user accesses the platform, either through submission of a valid identity document or another reasonable verification method. A social media platform would be prohibited from allowing users under the age of 16 to access the platform. The bill also requires platforms to establish and maintain reasonable procedures to protect personal data collected from users, and it provides for a private right of action as well as state and FTC enforcement.

S. 1291 – The Protecting Kids on Social Media Act is similar to its counterpart in the House, though slightly less stringent. It likewise aims to protect children from social media's harms. Under the bill, platforms must verify their users' ages, may not allow a user onto the service until their age has been verified, and must limit access to the platform for children under 12. The bill also prohibits retention and use of information collected during the age verification process. Platforms must take reasonable steps to obtain affirmative consent from the parent or guardian of a minor who is at least 13 years old before a minor account is created, and must reasonably allow the parent to later revoke that consent. The bill also prohibits the use of data collected from minors for algorithmic recommendations, and it would require the Department of Commerce to establish a voluntary program for secure digital age verification for social media platforms. Enforcement would be through the FTC or state action.

S. 1409 – The Kids Online Safety Act, proposed by Senator Blumenthal of Connecticut, also aims to protect minors from online harms. Like the Online Safety Bill, it establishes fiduciary duties for social media platforms regarding children using their sites. The bill requires that platforms act in the best interest of minors using their services, including by mitigating harms that may arise from use, sweeping in online bullying and sexual exploitation. Social media sites would be required to establish and provide access to safeguards, such as settings that restrict access to minors' personal data, and to grant parents tools to supervise and monitor minors' use of the platforms. Critically, the bill establishes a duty for social media platforms to create and maintain research portals for non-commercial purposes, to study the effects that corporations like the platforms have on society.

Overall, these bills reflect Congress's creative thinking and commitment to broad privacy protection for users against social media harms. I believe the establishment of a separate governing body, other than the FTC, which lacks the powers needed to compel compliance, is a necessary step. Recourse for violations on par with the EU's new regulatory scheme, chiefly fines in the billions, could also help.

Many of the bills, toward myriad aims, establish new fiduciary duties for the platforms in preventing unauthorized use of data and harms to children. There is real promise in this scheme: establishing duties of loyalty, diligence, and care owed by one party to another has a sound basis in many areas of law and would be readily understood in implementation.

The notion that platforms would need to be vigilant in knowing their content, studying its effects, and reporting those effects may do the most to create a stable future for social media.

Legal responsibility for platforms to police and enforce their own policies and terms and conditions is another opportunity to further incentivize platforms. The FTC currently investigates policies that are misleading or unfair, sweeping in the social media sites, but there is an opportunity to make the platforms legally responsible for enforcing their own policies regarding, for example, age, hate speech, and inappropriate content.

What would you like to see considered in Privacy law innovation for social media regulation?

From Hashtags to Hazards: Dangerous Diets and Digital Doses

Dieting, weight loss, and the pressure to be skinny have been prevalent in society since as early as the 19th century. People will find and try anything these days, healthy or not, to lose weight fast: diet pills, eating plans, radiofrequency lasering, you name it. People will go to great lengths to lose weight the wrong way: not exercising, not eating right, and not getting enough sleep. The emergence of social media has only compounded these issues, creating pathways to social comparison, thin/fit ideal internalization, and self-objectification.

Type 2 diabetes is often associated with obesity and occurs when the body does not produce enough insulin, or does not react to insulin, and therefore cannot function properly. The disease is usually diagnosed in people ages 45-64 who are physically inactive and not leading a healthy lifestyle. In the early 2000s, pharmaceutical companies went looking for an easy way to lower blood sugar and manage this disease. Enter: Ozempic.

Drugmaker Novo Nordisk introduced Ozempic in 2017, when the Food and Drug Administration authorized its use for adults with type 2 diabetes. It started as a relatively mundane drug with a straightforward goal: to help individuals manage their blood sugar levels and lead healthier lives. The weekly injection was designed to stimulate insulin production and suppress glucagon release, ultimately raising levels of hormones that signal to the brain that the stomach is full. It also increases the time it takes for ingested food to leave the body, slowing digestion. Originally, the marketing for Ozempic targeted only adults with type 2 diabetes, and the drug was to be used with diet and exercise as a healthy way to lower blood sugar.

Turning an Unintended Outcome into a Marketing Advantage

Soon after Ozempic hit the market, surveys and studies showed that those who used the drug also lost weight: people who took it lost an average of 14.9% of their body weight over six months of use. Ordinarily, unintended weight loss would have been listed as a side effect of the medication. Instead, with weight loss framed as an additional benefit, ads for Ozempic touted it alongside the diabetes indication. Marketers knew their audience, and the new campaign attracted a large group of people who wanted to lose weight. They tapped into this market to increase sales and revenue for the drug, which continues to be very successful.

In recent years, the pharmaceutical industry has witnessed a dramatic shift in how drugs are marketed, perceived, and consumed, largely due to the power of social media platforms and their influence on users. The allure of social media's vast audience, the power of user-generated content, and the platforms' complex algorithms turned Ozempic into a trending topic. Within the last year, social media made it widely known that the drug could double as a potential solution for weight loss. The drug went viral as hashtags and posts cast Ozempic as a shortcut to losing weight, and losing it fast, with no diet or exercise needed. Individuals, not just those diagnosed with diabetes, were captivated by this prospect and sought out Ozempic.

The new social media sensation garnered attention on platforms like TikTok, Instagram, and YouTube, with users, influencers, and celebrities sharing their experiences, before-and-after photos, and purported success stories. The influx of advertisements and user mentions increased the drug's sales by 111% over the prior year. Elon Musk credited fasting, avoiding tasty food, and Ozempic/Wegovy (a drug very similar to Ozempic) for shedding almost 30 pounds. Other celebrities who have taken the drug, and have been vocal about it, include Amy Schumer, Chelsea Handler, Charles Barkley, Sharon Osbourne, Tracy Morgan, and many more who are known not to have type 2 diabetes.

Rewards Turn to Consequences

With different vendors now marketing Ozempic almost strictly as a weight loss drug, the viral run on the medication has led to worldwide shortages, over-prescribing by doctors, and many legal issues. The online blowup of Ozempic was at least in part fueled by people who wanted to lose weight but had no medical reason to take it. The scarcity of Ozempic, coupled with high demand, threatens the health of individuals with type 2 diabetes who depend on the medication. In response, Novo Nordisk paused advertisements for Ozempic in May of 2023. However, most of the ads on social media were not coming from the drugmaker; they were coming from online pharmacies and smaller marketers, who target vulnerable users seeking a quick fix for weight loss. And while pharmaceutical companies can be held liable if their advertisements are proven false or misleading, the social media platforms are not liable under Section 230.

Users were not walking; they were running to doctors begging for Ozempic, including users who were not overweight, let alone diabetic. It is very easy to get a prescription for Ozempic, since only an online telehealth appointment is needed. Medicines and drugs approved for specific uses in the United States can be prescribed off-label for any use; off-label use is when doctors prescribe medications for purposes not approved by the Food and Drug Administration. Doctors were prescribing Ozempic for patients who did not have type 2 diabetes and did not need it. To date, the FDA has not approved Ozempic for the sole purpose of weight loss, and doctors have gotten around this by prescribing other weight loss drugs such as Wegovy. Even though off-label use is not illegal, it still raises a slew of legal issues.

Off-Label Dangers and Legal Showdowns

To this day, there have been no adequate studies of how Ozempic works in people without diabetes, and there may not be enough evidence to support using the drug for people who are not diabetic. Off-label use of Ozempic can lead to serious side effects. In August of 2023, after being prescribed Ozempic for weight management, a Louisiana resident claimed to have developed gastroparesis, a condition that impairs the normal movement of the muscles in the stomach, and argued that Novo Nordisk failed in its duty to adequately warn about potential adverse side effects associated with the drug. Less than a month after the suit was filed, the FDA and Novo Nordisk added a warning that Ozempic could cause intestinal blockage. The case is still in its early stages, but more and more people are coming forward and hiring attorneys over this condition in relation to taking Ozempic, and a class action or multi-district litigation is predicted to follow.

Another potential legal implication of the viral off-label use of Ozempic is medical malpractice, with the potential for mass claims against doctors and manufacturers for prescribing the weight loss drug without proper medical justification. Social media users who see advertisements on the platforms and want to lose weight are not asking doctors to prescribe Ozempic; they are begging. Meanwhile, the drug manufacturers are not providing comprehensive information to patients about potential adverse reactions, and they are actively promoting the use of these drugs among individuals who may receive only minimal or no long-term benefit from them.

Predicting the Future of Ozempic

To better understand the Ozempic situation, it is valuable to draw parallels with the OxyContin opioid epidemic. OxyContin, first introduced in 1996, is a powerful narcotic designed for the management of severe pain. As a result of over-promotion and improper sales tactics, however, it was overprescribed and led to widespread abuse, addiction, overdose, and death. The similarities between the issues surrounding the two drugs include:

  • Over-prescription – in both cases, doctors and manufacturers have played a pivotal role in over-prescription. OxyContin was prescribed for chronic pain, a use that went beyond its intended purpose, while Ozempic has been prescribed off-label for weight loss.
  • Patient demand – in both cases, patient demand and pressure have played a significant role in prescription practices. Patients seeking quick and easy solutions are more likely to want, and to receive, medications that may not be appropriate for their condition and health.
  • Pharmaceutical company responsibility – Purdue Pharma, maker of OxyContin, faced, and continues to face, lawsuits for aggressively marketing the drug. Although no marketing lawsuits have yet been filed over Ozempic, the responsibility of pharmaceutical companies for promoting medications beyond their FDA-approved uses is a common thread between the two drugs.

The one key difference between the OxyContin epidemic and the issues with Ozempic today is that in the early 2000s, social media sites were not as prevalent. The advent of social media amplifies the speed and scale at which information, whether accurate or not, spreads. The contagious nature of user-generated content, testimonials, and before-and-after narratives on the platforms has the potential to magnify the off-label promotion of, and demand for, Ozempic as a weight loss solution. This can fuel an unwarranted surge in prescriptions without proper medical assessment, potentially leading to increased risks, adverse effects, and challenges in regulating the medication's use. The ease with which information circulates on social media may intensify the scope and speed of the "Ozempic epidemic," raising concerns about patient safety and regulatory control.

Where Does the Liability Land?

The story of Ozempic's transformation from a diabetes medication into a social-media-driven weight loss sensation is a compelling example of how the digital age can shape public perception and generate a vast number of legal issues. If Section 230 is amended to set forth parameters under which social media sites can be liable, could platforms be held accountable for the shortage of the drug, given social media's contribution to Ozempic's popularity? Could the platforms be responsible for a possible increase in body image issues and eating disorders associated with the trend to be skinny?

THE SCHEME BEHIND AN ILLEGAL STREAM

FOLLOW THE STREAM TOWARDS A FELONY

The Protecting Lawful Streaming Act makes it a felony to engage in large-scale streaming of copyrighted material. The law was introduced on December 10, 2020, in response to increased concern surrounding live audio and video streaming in recent years. Streaming has transformed society and become one of the most influential ways people choose to enjoy various forms of content, yet the growth of legitimate streaming services has continuously been accompanied and disturbed by unlawful streaming of copyrighted materials. Until the Protecting Lawful Streaming Act was enacted, the illegal streaming of copyrighted material was only a misdemeanor.

Under the Protecting Lawful Streaming Act, a person commits a felony when they:

  1. Act willfully;
  2. Act for purposes of commercial advantage or private financial gain; and
  3. Offer or provide to the public a digital transmission service.

ALL FOR ONE, ONE FOR ALL

The law's enactment subjects those who host illegal streams to severe criminal penalties. Accordingly, anyone who hosts an illegal stream that infringes copyrighted material and obtains an economic benefit from it now faces felony charges. Many fail to recognize that while the individual responsible for hosting the illegal stream faces criminal charges, an individual who merely views the infringing stream does not technically violate any criminal law. Illegal streams hosting hundreds or even thousands of viewers therefore allow no criminal action to be taken, or even threatened, against those spectators; the focus is entirely on the host of the illegal stream.

PLATFORMS ENGINEERING IS PERFECTLY IMPERFECT

The question then becomes: what does social media have to do with illegal streaming? For starters, social media platforms serve as one of, if not the, most influential ways illegal streams reach society. Social media platforms are designed to spread information; they take information and make it available worldwide within seconds. As such, the platforms' engineering does precisely what illegal streaming hosts want: it exposes these streams to millions of individuals who may indulge in and use copyrighted material for their own benefit. Hashtags, likes, shares, and other methods of expansion allow hosts to capitalize on the platforms' designs for their own personal and financial gain.

NOT MY MESS, NOT MY PROBLEM

Social media platforms are not liable for the exposure of copyrighted material on their platforms. Under the Digital Millennium Copyright Act, the only requirement is that platforms take prompt action when contacted by rights holders. The statistics thus far show, however, that social media platforms fail to take initiative and are generally unwilling to address this ongoing concern. The argument on behalf of the platforms is that the duty rests not with them but with the rights holders to report an infringement. Even so, social media platforms could take a more significant initiative against illegal streaming. While the platforms have at least some measures in place to help prevent infringement of owners' work, the system is flawed, with many unresolved areas of concern. The current measures by themselves fail to provide reassurance that they can protect owners' content from being exploited for the financial benefit of illegal streaming hosts around the world.

MORE MONEY, MORE PROBLEMS

The question then becomes: how do illegal streaming services impact people? Major entertainment networks such as the NFL, NBA, and UFC are just a few examples of businesses whose most critical revenue stream, television viewership, is threatened by illegal streaming. Movie and non-sport television programs, too, are reported to have lost billions of dollars to illegal streaming. By enacting the Protecting Lawful Streaming Act, the goal is to deter harmful criminal activity while simultaneously protecting the rights of creators and copyright owners.

Furthermore, the individuals people would least expect to be harmed by illegal streaming are also in jeopardy: the viewers themselves. Illegal streams carry various risks of malicious software that can infect one's device, putting personal information at risk and exposing viewers to identity fraud, financial loss, and permanent damage to the devices used to watch these streams.

WHAT’S MINE IS YOURS

Society must also recognize and address the ways individuals can undercut content owners legally yet unfairly. Consider an individual who legally purchases a pay-per-view event and then live streams it on social media for others to watch. Because the stream was lawfully purchased, the buyer is arguably not hosting an illegal stream, yet the same issue arises: the owners of the content are left with no recourse and lose out on potential revenue. Rather than each individual purchasing the content, one buyer serves as the sacrifice while the others reap the same benefit without spending a dime. The same scenario arises when individuals gather in one home to watch a pay-per-view event or a movie on demand. This conduct is not illegal, but it negates potential revenue these industries might otherwise obtain, and it was, is, and will consistently be recognized as legal activity.

AN ISSUE, BUT NOT AN ISSUE WORTH SOLVING

Even streaming platforms like Netflix have failed to take meaningful measures against a related problem: not illegal streaming of their content per se, but the sharing of passwords for a single account. Although such conduct can give rise to civil liability for breach of contractual terms, or even criminal liability if fraud is found, the platforms have declined to act; moving against this behavior would be too costly and could result in losing viewers.

Through these findings, it is clear that illegal streaming has taken, and continues to take, advantage of the actual copyright owners of this material. The Protecting Lawful Streaming Act was society’s most recent attempt to minimize this ongoing issue by increasing the criminal penalty to deter such conduct. Yet, given the inability to identify and take down these illegal streams on social media, many continue to get away with this behavior daily. The legal loopholes discussed above suggest that entertainment industries may never see the revenue stream they anticipate. Only time will tell how society responds to this predicament and whether some law will address it in the foreseeable future. If the law held social media platforms to higher standards of accountability for this conduct, would it make a difference? Even so, would minimizing social media’s influence on the spread of illegal streams have a lasting impact?

Social Media Has Gone Wild

Increasing technological advances and consumer demands have taken shopping to a new level. You can now buy clothes, food, and household items from the comfort of your couch in a few clicks: add to cart, pay, ship, and confirm. No longer are you limited to products sold in nearby stores; shipping makes it possible to obtain items internationally. Even social media platforms have shopping features for users, such as Instagram Shopping, Facebook Marketplace, and WhatsApp. Despite its convenience, online shopping has also created an illegal marketplace for wildlife species and products.

The Most Trafficked Animal: the Pangolin

Wildlife trafficking is the illegal trading or sale of wildlife species and their products. Elephant ivory, rhinoceros horns, turtle shells, pangolin scales, tiger furs, and shark fins are a few examples of highly sought-after wildlife products. As social media platforms expand, so does wildlife trafficking.

Wildlife Trafficking Exists on Social Media?

Social media platforms make it easier for people to connect with others internationally. These platforms are great for staying in contact with distant aunts and uncles, but they also create another avenue for criminals and traffickers to communicate. They let users remain anonymous without ever meeting in person, which makes it harder for law enforcement to uncover a user’s true identity. Even so, can social media platforms be held responsible for making it easier for criminals to commit wildlife trafficking crimes?

Thanks to Section 230 of the Communications Decency Act, the answer is most likely: no.

Section 230 provides broad immunity to websites for content that third-party users post. Even when a user posts illegal content, the website generally cannot be held liable for it. However, there are certain exceptions where websites have no immunity, including for human trafficking and sex trafficking. Although these carve-outs are fairly new, they make clear that there is an interest in protecting people vulnerable to abuse.

So why don’t we apply the same logic to animals? Animals are also a vulnerable population. Many species are no match for guns, weapons, traps, and human encroachment on their natural habitats. Like children, animals may not have the ability to understand what trafficking is, or the physical strength to fight back. Social media platforms like Facebook attempt to combat the online wildlife trade, but their efforts continue to fall short.

How is Social Media Fighting Back?

 

In 2018, the World Wildlife Fund and 21 tech companies created the Coalition to End Wildlife Trafficking Online. The goal was to reduce illegal trade by 80% by 2020. While it is difficult to measure whether this goal is achievable, some social media platforms have created new policies to help meet this goal.

“We’re delighted to join the coalition to end wildlife trafficking online today. TikTok is a space for creative expression and content promoting wildlife trafficking is strictly prohibited. We look forward to partnering with the coalition and its members as we work together to share intelligence and best-practices to help protect endangered species.”

Luc Adenot, Global Policy Lead, Illegal Activities & Regulated Goods, TikTok

In 2019, Facebook banned the sale of animals altogether on its platform. But this did not stop users. A 2020 report found a variety of illegal wildlife for sale on Facebook, showing that the new policies were ineffective. Furthermore, the report stated:

“29% of pages containing illegal wildlife for sale were found through the ‘Related Pages’ feature.”

This suggests that Facebook’s algorithm connects users to pages and similar content based on a user’s interests. Such algorithms give traffickers every reason to keep using social media platforms, because the platform does half of the work for them:

      • Facilitating communication
      • Connecting users to potential buyers
      • Connecting users to other sellers
      • Discovering online chat groups
      • Discovering online community pages

Rather than reducing the reach of wildlife trafficking, this accelerates the visibility of such content to other users. Do Facebook’s algorithms go beyond Section 230 immunity?

Under these circumstances, Facebook maintains immunity. In Gonzalez v. Google LLC, the court explained that websites are not liable for user content when they employ content-neutral algorithms. In other words, the website did nothing more than program an algorithm to present content similar to a user’s interests; it did not directly encourage the publication of illegal content, nor did it treat that content differently from other user content.

What about when a website profits from illegal posts? Facebook receives a 5% selling fee for each shipment sold by a user. Since illegal wildlife products are rare, these transactions are highly profitable. A pound of ivory can be worth up to $3,300. If a user sells five pounds of ivory from endangered elephants on Facebook, the platform would collect $825 from that one transaction. The Facebook Marketplace algorithm, like the news feed algorithm, is driven by user interest and engagement, so it can push illegal wildlife products to a user who has searched for similar products. If illegal products are constantly pushed and successful sales are made, Facebook benefits and profits from these transactions. Does this mean that Section 230 will continue to protect Facebook when it profits from illegal activity?
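To make the fee arithmetic concrete, here is a minimal, purely illustrative sketch. The 5% Marketplace selling fee and the $3,300-per-pound ivory price come from the figures above; the function itself is hypothetical and does not reflect any real platform system.

```python
# Illustrative sketch only: the 5% fee and $3,300/lb figure come from the
# text above; the function name and signature are hypothetical.

def platform_fee(price_per_pound: float, pounds: float, fee_rate: float = 0.05) -> float:
    """Return the platform's cut of one sale, rounded to the cent."""
    return round(price_per_pound * pounds * fee_rate, 2)

# Five pounds of ivory at $3,300/lb with a 5% fee:
print(platform_fee(3300, 5))  # prints 825.0
```

A single five-pound sale nets the platform $825, which is why the question of profiting from illegal listings matters.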

Evading Detection

Even with Facebook’s prohibited-sales policy, users get creative to avoid detection. A simple search of “animals for sale” led me to a public Facebook group. Within 30 seconds of scrolling, I found one user selling live coral and another selling an aquarium system with live coral and live fish. The former listing reads: “Leather $50.” The picture, however, shows live coral in a fish tank; “leather” identifies the type of coral without saying it is coral. Even if this were fake coral, a simple Google search shows a piece of fake coral is worth less than $50. If Facebook is failing to prevent users from selling live coral and fish, it is most likely failing to prevent online wildlife trafficking more broadly on its platform.

Another common method of evading detection is posting a vague description or a photo of an item along with the words “pm me” or “dm me,” abbreviations for “private message me” or “direct message me.” This quickly directs interested users to reach out to the individual and discuss details in a private chat, outside of the prying public eye. Sometimes a user will offer alternative contact methods, such as a personal phone number or email address, moving the interaction off the platform entirely or onto a different one.

High profitability and online anonymity lower the stakes for these transactions. Social media platforms make it easy to conceal a user’s identity: users can hide behind fake names, there are no real consequences for doing so, and no identity verification exists to discover who a user truly is. Even if a user is banned, the person can create a new account under a different alias. Some users are criminals tied to organized crime syndicates or terrorist groups, and many operate overseas, outside of the United States, which makes them difficult to locate. Thus, social media platforms give criminals every incentive to hide behind various aliases with little to lose.

Why Are Wildlife Products Popular?

Wildlife products are in high demand for human benefit and use.

Do We Go After the Traffickers or the Social Media Platform?

Taking down every single wildlife trafficker and every user who facilitates these transactions would be the perfect solution to end wildlife trafficking. Realistically, it is too difficult to identify these users due to online anonymity and geographic limitations. Social media platforms, on the other hand, continue to tolerate these illegal activities.

Here, it is clear that Facebook is not doing enough to stop wildlife trafficking. With each sale made on Facebook, the platform receives a percentage. Section 230 should not protect Facebook when it reaps the benefits of illegal transactions. That goes a step too far and should expose Facebook to Section 230 liability.

Should Facebook maintain Section 230 immunity when it receives proceeds from illegal wildlife trafficking transactions? Where do we draw the line?

Mental Health Advertisements on #TikTok

The stigma surrounding mental illness has persisted since the mid-twentieth century. This stigma is one of the many reasons why 60% of adults with a mental illness go untreated. The huge treatment disparity demonstrates a significant need to spread awareness and make treatment more readily available. Ironically, social media, which has been ridiculed for its negative impact on the mental health of its users, has become an important tool for spreading awareness about, and de-stigmatizing, mental health treatment.

The content shared on social media is a combination of users sharing their experiences with a mental health condition and companies that treat mental health using advertisements to attract potential patients. At first glance, this appears to be a powerful way to use social media to bridge treatment gaps. However, it raises concerns about vulnerable people seeing this content, self-diagnosing with a condition they might not have, and undergoing unnecessary, and potentially dangerous, treatment. They might also fail to undergo needed treatment because the misinformation they were exposed to leads them to overlook the true cause of their symptoms.

Attention Deficit Hyperactivity Disorder (“ADHD”) is an example of a condition that social media has seized on. #ADHD has 14.5 billion views on TikTok and 3 million posts on Instagram. Between 2007 and 2016, diagnoses of ADHD increased by 123%. Further, prescriptions for stimulants, which treat ADHD, have increased 16% since the pandemic. Many experts attribute this, in large part, to the use of social media in spreading awareness about ADHD and to the rise of telehealth companies that emerged to treat ADHD during the pandemic. These companies have jumped on viral trends with targeted advertisements that oversimplify what ADHD actually looks like and then offer treatment to those who click on the advertisement.

The availability of and reliance on telemedicine grew rapidly during the COVID-19 pandemic, and many restrictions on telehealth were suspended. This created an opening in the healthcare industry for new companies. ‘Done’ and ‘Cerebral’ are two examples of companies that emerged during the pandemic to treat ADHD. These companies attract, accept, and treat patients through a very simple procedure: (1) social media advertisement, (2) short online questionnaire, (3) virtual visit, and (4) prescription.

Both Done and Cerebral have used social media platforms like Instagram and TikTok to lure potential patients to their services. The advertisements vary, but they all highlight how easy and affordable treatment is, emphasizing convenience, accessibility, and low cost. Accessing the care offered is as simple as swiping up on an advertisement that appears as users scroll through the platform. These targeted ads depict people seeking treatment, taking medication, and having their symptoms disappear. Further, these companies use viral trends and memes to increase the effectiveness of the advertisements, which typically oversimplify complex ADHD symptoms and mislead consumers.


While these companies increase healthcare access for many patients through their low cost and virtual platform, this speedy version of healthcare blurs the line between offering treatment to patients and selling prescriptions to customers through social media. Further, medical professionals are concerned about how these companies market addictive stimulants to young users, yet remain largely unregulated due to outdated guidelines on advertisements for medical services.

The advertising model these telemedicine companies use underscores the need to modify existing laws so that these advertisements are subject to the FDA’s unique oversight to protect consumers. These companies target young consumers and other vulnerable people, encouraging them to self-diagnose based on misleading information about the criteria for a diagnosis. There are eighteen symptoms of ADHD, and the average person meets at least one or two of the criteria, which is exactly what these ads emphasize.

Advertisements in the medical sphere are regulated by either the FDA or the FTC. The FDA has unique oversight to regulate the marketing of prescription drugs by manufacturers and drug distributors in what is known as direct-to-consumer (“DTC”) drug advertising. Critics of prescription drug advertisements highlight the negative impact that DTC advertising has on the patient-provider relationship, because patients go to providers expecting or requesting a particular prescription treatment. To minimize these risks, the FDA requires that a prescription drug advertisement be truthful, present a fair balance of the risks and benefits associated with the medication, and state an approved use of the medication. However, if the advertisement does not mention a particular drug or treatment, it eludes the FDA’s oversight.

Thus, the marketing of medical services that does not promote prescription drugs is regulated only by the Federal Trade Commission (“FTC”), in the same manner as any other consumer good, meaning only that the advertisement must not be false or misleading.

The advertisements these telehealth companies put forward demonstrate that it is time for the FDA to step in, because the companies are combining medical services with prescription drug treatment. They use predatory tactics to lure consumers into believing they have ADHD and then provide direct treatment on a monthly subscription basis.

The potential for consumer harm is clear, and many experts point to similarities between the opioid epidemic and stimulant drugs. However, the FDA has not yet made any changes to how it regulates advertising in light of social media. The laws on DTC drug advertising were prompted in part by consumers’ practice of self-diagnosis and self-medication and by the false therapeutic claims made by manufacturers. The telemedicine model these companies use raises these exact concerns: targeting consumers, convincing them they have a specific condition, and then offering the medication to treat it after a quick virtual visit. Instead of patients going to their doctors to request a specific prescription that may be inappropriate for their medical needs, patients now go to telehealth providers that only prescribe a particular medication that may be just as inappropriate.

Through social media, diagnosis and treatment with addictive prescription drugs can be initiated by an interactive advertisement, in a manner that was not possible when the FDA decided these types of advertisements would not be subject to its oversight. Thus, to protect consumers, it is vital that telemedicine advertisements be subject to more intrusive monitoring than ordinary consumer goods. This would require the companies making these advertisements to properly address the complex symptoms associated with conditions like ADHD and to give fair balance to the harms of treatment.

According to the Pew Research Center, 69% of adults and 81% of teens in the United States use social media, and about 48% of Americans regularly get their information from social media. We often talk about misinformation in politics and news stories, but it is permeating every corner of the internet. As these numbers continue to grow, it is crucial to develop new methods to protect consumers, and regulating these advertisements is only the first step.

Is it HIGH TIME we allow Cannabis Content on Social Media?

 


The Cannabis Industry is Growing like a Weed

Social media creates a relationship between consumers and their favorite brands. Just about every company has a social media presence to advertise its products and grow its brand. Large companies command the advertising market, but smaller companies and one-person startups have their place too. The opportunity to expand a brand using social media is open to just about everyone. Except the cannabis industry. With the developing struggle between social media companies and the politics of cannabis comes an onslaught of problems facing the modern cannabis market. With recreational marijuana use legal in 21 states and Washington, D.C., and medical marijuana legal in 38 states, it may be time for this community to join the social media metaverse.

We now know that algorithms determine how many followers see a business’s content, whether the content is permitted, and whether a post or user should be deleted. The legal cannabis industry has found itself in a struggle similar to legislators’ with social media giants (like Facebook, Twitter, and Instagram) for increased transparency about their internal processes for filtering information, banning users, and moderating their platforms. Mainstream cannabis businesses have long been prevented from making their presence known on social media; legitimate businesses are lumped in with illicit drug users and barred from advertising on public social media sites. The legal cannabis industry is expected to be worth over $60 billion by 2024, and support for federal legalization is at an all-time high (68%). Now more than ever, brands are fighting for higher visibility among cannabis consumers.

Recent Legislation Could Open the Door for Cannabis

The question remains whether legal cannabis businesses have a place in the ever-changing landscape of the social media metaverse. Marijuana is currently a Schedule I substance under the Controlled Substances Act (1970), a categorization meaning that it has no currently accepted medical use and a high potential for abuse. While that definition may have been acceptable when cannabis was placed on the DEA’s list back in 1971, evidence has since been presented in opposition to that decision. Historians note that overt racism, combined with New Deal reforms and bureaucratic self-interest, is often blamed for the first round of federal cannabis prohibition under the Marihuana Tax Act of 1937, which restricted possession to those who paid a steep tax for a limited set of medical and industrial applications. The legitimacy that cannabis businesses have gained over the past few decades through individual state legalization (both medical and recreational) is at the center of the debate over whether they should be able to market themselves as any other business can. Legislation like the MORE Act (Marijuana Opportunity Reinvestment and Expungement), which was passed by the House of Representatives, gives companies some hope that they may one day be seen as legitimate businesses. If passed into law, marijuana would be lowered on or removed from the schedule list, which would blow the hinges off the cannabis industry; legitimate businesses in states that have legalized its use are patiently waiting in the wings for this moment.

States like New York have made great strides in passing legislation to legalize marijuana the “right” way and legitimize business, while simultaneously separating themselves from the illegal and dangerous drug trade that has parasitically attached itself to this movement. The Marijuana Regulation and Tax Act (MRTA) establishes a new framework for the production and sale of cannabis, creates a new adult-use cannabis program, and expands the existing medical cannabis and cannabinoid (CBD) hemp programs. The MRTA also established the Office of Cannabis Management (OCM), the governing body for cannabis reform and regulation, particularly for emerging businesses that wish to establish a presence in New York. The OCM oversees the licensure, cultivation, production, distribution, sale, and taxation of medical, adult-use, and cannabinoid hemp within New York State. This sort of regulatory body and structure is becoming commonplace in a world once deemed a “wild west” of regulatory abandonment and lawlessness.

 

But, What of the Children?

In light of all the regulation slowly surrounding cannabis businesses, will the rapidly growing social media landscape have to concede to the industry’s demands and recognize its presence? Even with regulations, cannabis exposure remains a concern to many with respect to the more impressionable members of the user pool. Children and young adults are spending more time than ever online and on social media. On average, daily screen use went up among tweens (ages 8 to 12) to five hours and 33 minutes from four hours and 44 minutes, and among teens (ages 13 to 18) to eight hours and 39 minutes from seven hours and 22 minutes. This group of social media consumers is of particular concern to both legislators and the social media companies themselves. The MRTA offers protection against companies advertising with the intent of resembling common brands marketed to children: companies are restricted to using their name and logo, with explicit language that the item inside the wrapper contains cannabis or tetrahydrocannabinol (THC). Between MRTA restrictions, strict community guidelines from several social media platforms, and government regulations around the promotion of marijuana products, many brands are having a hard time building their communities’ presence on social media. Cannabis companies have resorted to creating their own platforms that promote the content they are prevented from blasting on other sites. Big-name rapper and cannabis enthusiast Berner, who created the popular edible brand “Cookies,” has been approached to partner with the creators of these platforms to bolster their brands and raise awareness. Unfortunately, the sites became exactly what mainstream social media platforms feared when creating their guidelines: an unsavory haven for illicit drug use and other illegal behavior. One of the pioneer apps in this field, Social Club, was removed from the app store after multiple reports of illegal behavior. The apps have since been more tightly regulated internally but have not taken off as their creators intended, and legitimate cannabis businesses are still blocked from advertising on mainstream apps.

These Companies Won’t go Down Without a Fight

While cannabis companies are generally not allowed on social media sites, special rules apply when a legal cannabis business does maintain a presence there. Social media is the fastest and most efficient way to advertise to a desired audience. With appropriate regulatory oversight, and within the confines of the changing law, social media sites may start to feel pressure to allow more advertising from cannabis brands.

A petition has been created to bring META, the company that owns Facebook and Instagram among other sites, to the table to discuss the growing frustrations with the strict restrictions on its social media platforms. The petition on Change.org has amassed 13,000 signatures. Arden Richard, the founder of WeedTube, has been outspoken about the issues, saying, “This systematic change won’t come without a fight. Instagram has already begun deleting posts and accounts just for sharing the petition.” He also stated, “The cannabis industry and community need to come together now for these changes and solutions to happen.” If not, he fears, “we will be delivering this industry into the hands of mainstream corporations when federal legalization happens.”

Social media companies recognize the magnitude of the legal cannabis community; they have been banning its content nonstop since its inception. However, the changing landscape of the cannabis industry has made the decision to ban that content more difficult. Until federal regulation changes, businesses operating in states that have legalized cannabis will continue to be banned by the largest advertising platforms in the world.

 

I Knew I Smelled a Rat! How Derivative Works on Social Media can “Cook Up” Infringement Lawsuits

 

If you have spent more than 60 seconds scrolling on social media, you have undoubtedly been exposed to short clips or “reels” that often reference pop culture elements that may be protected intellectual property. While seemingly harmless, it is possible that the clips you see on various platforms are infringing on another’s copyrighted work. Oh, rats!

What Does Copyright Law Tell Us?

Copyright protection, which is codified in 17 U.S.C. §102, extends to “original works of authorship fixed in any tangible medium of expression”. It refers to your right, as the original creator, to make copies of, control, and reproduce your own original content. This applies to any created work that is reduced to a tangible medium. Some examples of copyrightable material include, but are not limited to, literary works, musical works, dramatic works, motion pictures, and sound recordings.

Additionally, one of the rights associated with a copyright holder is the right to make derivative works from your original work. Codified in 17 U.S.C. §101, a derivative work is “a work based upon one or more preexisting works, such as a translation, musical arrangement, dramatization, fictionalization, motion picture version, sound recording, art reproduction, abridgment, condensation, or any other form in which a work may be recast, transformed, or adapted. A work consisting of editorial revisions, annotations, elaborations, or other modifications which, as a whole, represent an original work of authorship, is a ‘derivative work’.” This means that the copyright owner of the original work also reserves the right to make derivative works. Therefore, the owner of the copyright to the original work may bring a lawsuit against someone who creates a derivative work without permission.

Derivative Works: A Recipe for Disaster!

The issue of regulating derivative works has only intensified with the growth of cyberspace and “fandoms.” A fandom is a community or subculture of fans built up around one specific piece of pop culture, whose members share a mutual bond over their enthusiasm for the source material. Fandoms can also be composed of fans who actively participate in and engage with the source material through creative works, which social media makes easier. Historically, fan works have been deemed legal under the fair use doctrine, which allows some copyrighted material to be used without permission for purposes such as scholarship, education, parody, or news reporting, so long as the copyrighted work is used only to the extent necessary. Fair use can also apply to a derivative work that significantly transforms the original copyrighted work, adding a new expression, meaning, or message. So, that means “anyone can cook,” right? …Well, not exactly! The new derivative work cannot have an economic impact on the original copyright holder; that is, profits cannot be “diverted to the person making the derivative work” when the revenue could or should have gone to the original copyright holder.

With the increased use of “sharing” platforms such as TikTok, Instagram, and YouTube, it has become increasingly easy to share or distribute intellectual property via monetized accounts. Specifically, due to the large amount of content consumed daily on TikTok, its users are incentivized by the ability to go “viral” instantly, if not overnight, as well as the ability to earn money through the platform’s “Creator Fund.” The Creator Fund is paid for by the TikTok ads program, and it allows creators to get paid based on the number of views they receive. This creates a problem: now that users are being paid for their posts, the line between fair use and copyright violation is blurred. The Copyright Act fails to address the monetization of social media accounts and how it fits into a fair use analysis.

Ratatouille the Musical: Anyone Can Cook?

Back in 2020, TikTok users Blake Rouse and Emily Jacobson were the first of many to release songs based on Disney-Pixar’s 2007 film Ratatouille. What started as a fun trend for users to participate in turned into a full-fledged viral project and, eventually, a tangible creation. Big-name Broadway stars including André De Shields, Wayne Brady, Adam Lambert, Mary Testa, Kevin Chamberlin, Priscilla Lopez, and Tituss Burgess all participated in the trend, and on December 9, 2020, it was announced that Ratatouille was coming to Broadway via a virtual benefit concert.

Premiering as a one-night livestream event on January 1, 2021, the concert donated all profits to the Entertainment Community Fund (formerly the Actors Fund), a non-profit organization that supports performers and workers in the arts and entertainment industry. It initially streamed in over 138 countries and raised over $1.5 million for the charity. Due to its success, an encore production was streamed on TikTok ten days later, raising an additional $500,000 (for a total of $2 million). While this is unarguably a derivative work, the question of fair use was never addressed, because Disney’s lawyers were smart enough not to sue. In fact, Disney embraced the Ratatouille musical, releasing a statement to The Verge:

Although we do not have development plans for the title, we love when our fans engage with Disney stories. We applaud and thank all of the online theatre makers for helping to benefit The Actors Fund in this unprecedented time of need.

Normally, Disney is EXTREMELY strict and protective of its intellectual property. This small change of heart, however, has opened a door for other TikTok creators and fandom members to create unauthorized derivative works based on others’ copyrighted material.

Too Many Cooks in the Kitchen!

Take the “Unofficial Bridgerton Musical,” for example. In July 2022, Netflix sued content creators Abigail Barlow and Emily Bear for their unauthorized use of Netflix’s original series Bridgerton, which is based on the Bridgerton book series by Julia Quinn. Back in 2020, Barlow and Bear began writing and uploading songs based on the series to TikTok for fun. Needless to say, the videos went viral, prompting Barlow and Bear to release an entire musical soundtrack based on Bridgerton. They even went on to win the 2022 Grammy Award for Best Musical Theater Album.

On July 26, Barlow and Bear staged a sold-out performance at the Kennedy Center in Washington, D.C., with tickets ranging from $29 to $149, and also sold merchandise bearing the “Bridgerton” trademark. Netflix then sued, demanding an end to these for-profit performances. Interestingly enough, Netflix was allegedly on board with Barlow and Bear’s project at first. However, although Barlow and Bear’s conduct began on social media, the complaint alleges they “stretched fanfiction way past its breaking point”. According to the complaint, Netflix “offered Barlow & Bear a license that would allow them to proceed with their scheduled live performances at the Kennedy Center and Royal Albert Hall, continue distributing their album, and perform their Bridgerton-inspired songs live as part of larger programs going forward,” which Barlow and Bear refused. Netflix also alleged that the musical interfered with its own derivative work, the “Bridgerton Experience,” an in-person pop-up event that has been offered in several cities.

Unlike Ratatouille: The Musical, which was created to raise money for a non-profit organization benefiting actors during the COVID-19 pandemic, the Unofficial Bridgerton Musical lined the pockets of its creators, Barlow and Bear, in an effort to build an international brand for themselves. Netflix ended up privately settling the lawsuit in September of 2022.

Has the Aftermath Left a Bad Taste in IP Holders’ Mouths?

The stage has been set, and courts have yet to determine exactly how fan-made derivative works play out in a fair use analysis. New technologies only exacerbate this issue through the monetization of social media accounts and “viral” trends. At a certain point, no matter how much you want to root for the “little guy,” you have to admit when they’ve gone too far. Average “fan art” does not derive significant profits from the original work, and it is very rare that a large company will take legal action against a small content creator unless the infringement is so blatant and explicit that there is no other choice. IP law exists to protect and enforce the rights of the creators and owners who have worked hard to secure them. Allowing content creators to infringe in the name of “fair use” poses a dangerous threat to intellectual property law and those it serves to protect.

 

Update Required: An Analysis of the Conflict Between Copyright Holders and Social Media Users

Opening

For anyone as chronically online as yours truly, we have all, in one way or another, seen our favorite social media influencers, artists, commentators, and content creators complain about their problems with the current US intellectual property (IP) system. Be it that their posts are deleted without explanation or portions of their video files are muted, the combinations of factors leading to copyright issues on social media are endless. This, in turn, has a markedly negative impact on free and fair expression on the internet, especially within the context of our contemporary online culture. For better or worse, interaction in society today is intertwined with the services of social media sites. Conflict arises when the interests of copyright holders clash with this reality. Those holders are empowered by byzantine and unrealistic laws that hamper our ability to exist online as freely as we do in real life. While they do have legitimate and fundamental rights that need to be protected, those rights must be balanced against desperately needed reform. People’s interaction with society and culture must not be hampered, for that is one of the many foundations of a healthy and thriving society. With that in mind, I venture to analyze the current legal infrastructure we find ourselves in.

Current Relevant Law

The current controlling laws for copyright issues on social media are the Copyright Act of 1976 and the Digital Millennium Copyright Act (DMCA). The DMCA is the most relevant to our analysis; it gives copyright holders relatively unrestrained power to demand removal of their property from the internet and to punish those using illegal methods to get ahold of it. This broad law, of course, impacted social media sites. Title II of the law added 17 U.S. Code § 512 to the Copyright Act of 1976, creating several safe harbor provisions for online service providers (OSPs), such as social media sites, when hosting content posted by third parties. The most relevant of these safe harbors is 17 U.S. Code § 512(c), which provides that an OSP cannot be held liable for monetary damages if it meets several requirements and gives copyright holders a quick and easy way to claim their property. The mechanism, known as a “notice and takedown” procedure, varies by social media service and is outlined in each service’s terms and conditions (YouTube, Twitter, Instagram, TikTok, Facebook/Meta). Regardless, they all offer a complaint form or application that follows the rules of the DMCA and will usually strike objectionable social media posts rapidly. 17 U.S. Code § 512(g) does give users some leeway through an appeal process, and § 512(f) imposes liability on those who send unjustified takedown notices. Nevertheless, a perfect balance of rights is not achieved.

The doctrine of fair use, codified as 17 U.S. Code § 107 via the Copyright Act of 1976, also plays a massive role here. It established a legal pathway for the use of copyrighted material for “purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research” without having to acquire rights to the IP from its owner. This legal safety valve has been a blessing for social media users, especially with recent victories like Hosseinzadeh v. Klein, which protected reaction content from DMCA takedowns. Cases like Lenz v. Universal Music Corp. further established that copyright holders must consider fair use before sending takedowns. Nevertheless, genuine copyright holders still often fail to consider those rights, and sites are quick to react to DMCA complaints. Furthermore, the flawed reporting systems of social media sites invite abuse by unscrupulous actors falsely claiming ownership. On top of that, such legal actions can be psychologically and financially intimidating, especially when facing off against a major IP holder, adding to the unbalanced power dynamic between the holder and the poster.

The Telecommunications Act of 1996, which focuses primarily on cellular and landline carriers, is also particularly relevant to social media companies in this conflict. At the time of its passing, the internet was still in its infancy, so the Act does not reflect the cultural paradigm we now find ourselves in. Specifically, the contentious Section 230 of the Communications Decency Act (Title V of the 1996 Act) works against social media companies in this instance by carving intellectual property out of its otherwise broad protections. 47 U.S. Code § 230(e)(2) states in no uncertain terms that “nothing in this section shall be construed to limit or expand any law pertaining to intellectual property.” Courts, as in Perfect 10, Inc. v. CCBill LLC, have read this to mean that Section 230 does not shield such companies from federal intellectual property claims, leaving them exposed to liability for user copyright infringement. This gap in the protective armor of Section 230 is of great concern to such companies, which is why they react so strongly to such issues.

What is To Be Done?

Arguably, fixing the issues around copyright on social media is far beyond the capacity of current legal mechanisms. With billions of posts each day across various sites, policing by copyright holders and sites alike is far beyond reason. It will take serious reform in the socio-cultural, technological, and legal arenas before a true balance of liberty and justice can be established. Perhaps we can start with an understanding by copyright holders not to overreact when their property is posted online. Popularity is key to success in business, so shouldn’t you value the free marketing that comes with your copyrighted property being shared honestly within the cultural sphere of social media? Social media sites can also expand their DMCA case management teams, or create tools that let users, particularly influencers and content creators, credit and even share revenue with the copyright holder. Finally, congressional action is desperately needed, as we have entered a new era that requires new laws. That being said, achieving a balance between the free exchange of ideas and creations and the rights of copyright holders must be the cornerstone of the government’s approach to socio-cultural expression on social media. That is the only way we can progress as an ever more online society.

 

Image: Freepik.com

Image by pikisuperstar: https://www.freepik.com/free-vector/flat-design-intellectual-property-concept-with-woman-laptop_10491685.htm

Shadow Banning Does(n’t) Exist


#mushroom

Recent posts from #mushroom are currently hidden because the community has reported some content that may not meet Instagram’s community guidelines.

 

Dear Instagram, get your mind outta the gutter! Mushrooms are probably one of the most searched hashtags in my Instagram history. It all started when I found my first batch of wild chicken-of-the-woods mushrooms. I wanted to learn more about mushroom foraging, so I consulted Instagram. I knew there were tons of foragers sharing photos, videos, and tips about finding different species. But imagine not being able to find content related to your own hobby.

What if you loved eggplant varieties? But nothing came up in the search bar? Perhaps you’re an heirloom eggplant farmer trying to sell your product on social media? Yet you’ve only gotten two likes—even though you added #eggplantman to your post. Shadow banned? I think yes.

The deep void of shadow banning is a social media user’s worst nightmare, especially for influencers whose careers depend on engagement. Shadow banning comes with many uncertainties, but there are a few factors many users agree on:

      1. Certain posts and videos remain hidden from other users
      2. It hurts user engagement
      3. It DOES exist

#Shadowbanning

Shadow banning is the act of restricting or censoring a user’s content on social media without notifying the user. It usually occurs when a user posts content deemed inappropriate or in violation of the platform’s guidelines. If a user is shadow banned, the user’s content is visible only to the user and their followers.

Influencers, artists, creators, and business owners are the most vulnerable victims of the shadow banning void, because they depend the most on user engagement, growth, and reaching new audiences. As much as it hurts them, it also hurts other users searching for that specific content. There’s no clear way of telling whether you’ve been shadow banned: you don’t get a notice, and you can’t file an appeal to fix your lack of engagement. You will, however, see a decline in engagement, because no one can see your content in their feeds.

According to the head of Instagram, Adam Mosseri, “shadow banning is not a thing.” In an interview with the Meta CEO, Mark Zuckerberg, he stated Facebook has “no policy that is shadow banning.” Even a Twitter blog stated, “People are asking us if we shadow ban. We do not.” There is no official way of knowing if it exists, but there is evidence it does take place on various social media platforms.

#Shadowbanningisacoverup?

Pole dancing on social media probably would have been deemed inappropriate 20 years ago. But this isn’t the case today. Pole dancing is a growing sport industry, and the stigma associating strippers with pole dancing is shifting with its increasing popularity and trendy nature. Social media standards, however, may still be stuck in the early 2000s.

In 2019, user posts with hashtags including #poledancing, #polesportorg, and #poledancenation were hidden from Instagram’s Explore page. This affected many users who connect and share new pole dancing techniques with each other. It also had a huge impact on businesses who rely on the pole community to promote their products and services: pole equipment, pole clothing, pole studios, pole sports competitions, pole photographers, and more.

Due to a drastic decrease in user engagement, a petition directing Instagram to stop pole dancing censorship was circulated worldwide. Is pole dancing so controversial it can’t be shared on social media? I think not. There is so much to learn from sharing information virtually, and Section 230 of the Communications Decency Act supports this.

Section 230 was passed in 1996, and it provides limited federal immunity to websites from lawsuits if a user posts something illegal. This means that if User X decides to post illegal content on Twitter, the Twitter platform could not be sued because of User X’s post. Section 230 does not stop the user who posted such content from being sued, so User X can still be held accountable.

It is clear that Section 230 embraces the importance of sharing knowledge. Section 230(a)(1) tells us this. So why would Instagram want to shadow ban pole dancers who are simply sharing new tricks and techniques?

The short answer is: It’s inappropriate.

But users want to know: what makes it inappropriate?

Is it the pole? A metal pole itself does not seem so.

Is it the person on the pole? Would visibility change depending on gender?

Is it the tight clothing? Well, I don’t see how it is any different from my 17 bikini photos on my personal profile.

Section 230 also contains a carve-out for sex-related crimes such as sex trafficking. And this is where the line is drawn between appropriate and inappropriate content: sex trafficking is illegal, but pole dancing is not. Instagram’s community guidelines support this distinction; sharing pole dancing content does not violate them. Shadow banning clearly seeks to suppress certain content, and in this case, the pole dancing community was a target.

Cultural expression also battles with shadow banning. In 2020, Instagram shadow banned Caribbean Carnival content. The Caribbean Carnival is an elaborate celebration to commemorate slavery abolition in the West Indies and showcases ensembles representing different cultures and countries.

User posts with hashtags including #stluciacarnival, #fuzionmas, and #trinidadcarnival2020 could not be found or viewed by other users. Some people saw this as suppressing culture and harming tourism. Additionally, Facebook and Instagram shadow banned #sikh for almost three months. After extensive user feedback, the hashtag was restored, but Instagram never explained how or why it had been blocked.

In March 2020, The Intercept obtained internal TikTok documents alluding to shadow banning methods. Documents revealed moderators were to suppress content depicting users with “‘abnormal body shape,’ ‘ugly facial looks,’ dwarfism, and ‘obvious beer belly,’ ‘too many wrinkles,’ ‘eye disorders[.]'” While this is a short excerpt of the longer list, this shows how shadow banning may not be a coincidence at all.

Does shadow banning exist? What are the pros and cons of shadow banning?

#ad : The Rise of Social Media Influencer Marketing

When was the last time you bought something because of a billboard or a newspaper ad? Probably not recently. Instead, advertisers are now spending their money on digital marketing platforms, and at the pinnacle of those platforms are influencers. Because millennial (Generation Y) and Generation Z consumers spend so much time consuming user-generated content, the creator begins to feel like an acquaintance, and could even be categorized as a friend. Once that happens, the influencer has more power to do what their name suggests: influence the user to purchase. This is where our current e-commerce market is headed.

Imagine this:

If a person you know and trust suggests you try a brand new product, you would probably try it. Now, if that same person were to divulge to you that they were paid to tell you all about how wonderful this product is, you would probably have some questions about the reality of their love for this product, right?

Lucky for us consumers, the Federal Trade Commission (FTC) has established its Endorsement Guides so we can all have that information when we are being advertised to by our favorite social media influencers.

 

The times have changed, quickly.

Over the past eight years, there has been a resounding shift in the way companies market their products, to the younger generation specifically. Unprecedented changes throughout the physical and digital marketplace have forced brands to rethink their strategies for reaching the desired consumer. Businesses now rely on digital and social media marketing more than they ever have before.

With the rise of social media and apps like Vine and TikTok came a new metaverse with almost untapped marketing potential. This was how companies could reach the younger generation of consumers, you know, the ones with their heads craned over a phone and their thumbs constantly scrolling. These were the people advertisers had trouble reaching, until now.

 

What the heck is an “Influencer”?

The question “What is an influencer?” has become standard in conversations among social media users. We know who they are, but the term is loosely defined. Rachel David, a popular YouTube personality, defined it with the least ambiguity: “Someone like you and me, except they chose to consistently post stuff online.” This definition seems harmless enough until you understand that the reality is far more nuanced: these individuals are being paid huge sums of money to push products they most likely don’t use themselves, despite what their posts may say. The reign of celebrity-endorsed marketing is shifting to a new form of celebrity, the “influencer.” High-profile celebrities were too far removed from the average consumer. A new category emerged with the rise of social media use, and the only difference between a celebrity and a famous influencer is…relatability. Consumers could now see themselves in the influencer and would default to trusting them and their opinion.

One of the first places we saw influencers flexing their advertising muscle was the popular app Vine. Vine was a revolutionary app that frankly existed before its time. It introduced users to a virtual experience that matched their dwindling attention spans: clips were no more than six seconds long and would repeat indefinitely until the user swiped to the next one. These short clips captured the user’s attention and provided that much-needed dopamine hit. The platform rose in popularity, rivaling apps like the powerhouse of user engagement, YouTube. Unlike YouTube, however, Vine’s shorter videos required less work, so creators produced far more of them. And since the videos were so short, consumers wanted more and more content, which opened the door for other users to blast out their own, creating an explosion of “Vine Famous” creators. Casual creators were, almost overnight, amassing millions of followers, followers they could now influence. Vine failed to capitalize on its users or monetize its success, and it ultimately went under in 2016. But what happened to all of those influencers? They made their way to alternate platforms like YouTube, Instagram, and Facebook, taking with them their followers and, with them, their influencer status. These popular influencers went from complete strangers to people users inherently trusted because of the perceived transparency into their daily lives.

 

Here come the #ads.

Digital marketing was not introduced by Vine, but putting a friendly influencer face behind the product has some of its genesis there. Consumerism changed as social media traffic increased. E-commerce rose categorically once products were right in front of the consumer’s face, even embedded into the content they were viewing. Users were watching advertisements and didn’t even care. YouTube channels dedicated solely to reviewing and rating products became an incredibly popular genre of video. Advertisers saw content itself becoming promotion for a product, and the shift away from traditional marketing strategies took off. Digital, in-content advertising was the new way to reach this generation.

Now that influencer marketing is a mainstream form of marketing, the FTC Endorsement Guides have taken on amplified importance. Creators are required to be transparent about their intentions in marketing a product. The Guides suggest ways influencers can effectively market the products they endorse while remaining transparent about their motivations, and they provide examples of how and when to disclose that a creator is sponsoring or endorsing a particular product, rules that must be followed to avoid costly penalties. Most creators prefer to keep their content as “on brand” as possible, so many resort to the most surreptitious option and disguise the “#ad” within a litany of other relevant hashtags.

The age of advertising has certainly changed right in front of our eyes, literally. As long as influencers remain transparent about their involvement with the products they show in their content, consumers will inherently trust them and their opinion on the product. So sit back, relax, and enjoy your scrolling. But, always be cognizant that your friendly neighborhood influencer may have monetary motivation behind their most recent post.
