Privacy Please: Privacy Law, Social Media Regulation and the Evolving Privacy Landscape in the US

Social media regulation is a touchy subject in the United States. Congress and the White House have proposed, advocated, and voted on various bills aimed at protecting people from data misuse and misappropriation, misinformation, harms suffered by children, and the implications of vast data collection. Some of the most potent concerns about social media stem from the platforms' use and misuse of information: from the method of collection, to notice of collection, to the use of collected information. Efforts to pass a bill regulating social media have been frustrated, primarily by the First Amendment right to free speech. Congress has thus far failed to enact meaningful regulation on social media platforms.

The way forward may well be through privacy law. Privacy laws give people some right to control their own personhood, including their data, their right to be left alone, and how and when others see and view them. Privacy law in its current form originated in the late 1800s, with the impetus being freedom from constant surveillance by paparazzi and reporters and the right to control one's own personal information. As technology evolved, our understanding of privacy rights grew to encompass rights in our likeness, our reputation, and our data. Current US privacy laws do not directly address social media, and a struggle is currently playing out between the vast data collection practices of the platforms, immunity for platforms under Section 230, and private rights of privacy for users.

There is very little federal privacy law, and that which does exist is narrowly tailored to specific purposes and circumstances in the form of specific bills. Some states have enacted their own privacy law schemes, with California at the forefront and Virginia, Colorado, Connecticut, and Utah following in its footsteps. In the absence of a comprehensive federal scheme, privacy law is often judge-made, and it offers several private rights of action for a person whose right to be left alone has been invaded in some way. These are tort actions available for one person to bring against another for a violation of their right to privacy.

Privacy Law Introduction

Privacy law policy in the United States is premised on three fundamental personal rights to privacy:

  1. Physical right to privacy – the right to control your own information.
  2. Privacy of decisions – such as decisions about sexuality, health, and child-rearing. These are the constitutional rights to privacy; they are typically not about information but about an act that flows from the decision.
  3. Proprietary privacy – the ability to protect your information from being misused by others in a proprietary sense.

Privacy Torts

Privacy law, as it concerns the individual, gives rise to four separate tort causes of action for invasion of privacy:

  1. Intrusion upon Seclusion – Privacy law provides a tort cause of action for intrusion upon seclusion when someone intentionally intrudes upon the reasonable expectation of seclusion of another, physically or otherwise, and the intrusion is objectively highly offensive.
  2. Publication of Private Facts – One gives publicity to a matter concerning the private life of another that is not of legitimate concern to the public, and the matter publicized would be objectively highly offensive. The First Amendment provides a strong defense for publication of truthful matters when they are considered newsworthy.
  3. False Light – One who gives publicity to a matter concerning another places that other before the public in a false light when the false light in which the other was placed would be objectively highly offensive and the actor had knowledge of, or acted in reckless disregard as to, the falsity of the publicized matter and the false light in which the other would be placed.
  4. Appropriation of Name and Likeness – Appropriation of one's name or likeness for the defendant's own use or benefit. There is no appropriation when a person's picture is used to illustrate a non-commercial, newsworthy article. The appropriation is usually commercial in nature but need not be; it can be of "identity" more broadly. It need not be misappropriation of a name; it could be of the reputation, prestige, social or commercial standing, public interest, or other value in the plaintiff's likeness.

These private rights of action are currently unavailable for use against social media platforms because of Section 230 of the Communications Decency Act, which provides broad immunity to online providers for posts made on their platforms by others. Section 230 effectively prevents any of the privacy torts from being raised against social media platforms for user-generated content.

The Federal Trade Commission (FTC) and Social Media

Privacy law can implicate social media platforms when their practices become unfair or deceptive to the public, through investigation by the Federal Trade Commission (FTC). The FTC is the only federal agency with both consumer protection and competition jurisdiction across broad sectors of the economy. The FTC investigates business practices that are unfair or deceptive. Section 5 of the FTC Act, 15 U.S.C. § 45, prohibits "unfair or deceptive acts or practices in or affecting commerce" and grants the FTC broad jurisdiction over the privacy practices of businesses. A trade practice is unfair if it causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and is not outweighed by countervailing benefits to consumers or competition. A deceptive act or practice is a material representation, omission, or practice that is likely to mislead a consumer acting reasonably in the circumstances, to the consumer's detriment.

Critically, there is no private right of action under the FTC Act. The FTC cannot impose fines for Section 5 violations themselves but can obtain injunctive relief. By design, the FTC has very limited rulemaking authority and looks to consent decrees and procedural, long-lasting relief as an ideal remedy. The FTC pursues several types of misleading or deceptive policies and practices that implicate social media platforms: notice and choice paradigms, broken promises, retroactive policy changes, inadequate notice, and inadequate security measures. Its primary objective is to negotiate a settlement in which the company submits to certain measures of control and oversight by the FTC for a certain period of time. Violations of these agreements can yield additional consequences, including steep fines and vulnerability to class action lawsuits.

Relating to social media platforms, the FTC has investigated misleading terms and conditions and violations of platforms' own policies. In In re Snapchat, the platform claimed that users' posted information disappeared completely after a certain period of time; however, through third-party apps and manipulation of users' posts off the platform, posts could be retained. The FTC and Snapchat settled through a consent decree subjecting Snapchat to FTC oversight for 20 years.

The FTC has also investigated Facebook for violating its own privacy policy. Facebook was ordered to pay a $5 billion penalty and to submit to new restrictions and a modified corporate structure that will hold the company accountable for the decisions it makes about its users' privacy, to settle FTC charges that it violated a 2012 agreement with the agency.

Unfortunately, none of these measures directly gives individuals more power over their own privacy. Nor do these policies and processes give individuals any right to hold platforms responsible for misleading them through algorithms that use their data, or for intruding on their privacy by collecting data without offering an opt-out.

Some of the most harmful social media practices today relate to personal privacy. Some examples include the collection of personal data, the selling and dissemination of data through the use of algorithms designed to subtly manipulate our pocketbooks and tastes, collection and use of data belonging to children, and the design of social media sites to be more addictive- all in service of the goal of commercialization of data.

No comprehensive federal privacy scheme currently exists. Previous privacy bills have been few and narrowly tailored to relatively specific circumstances and topics, such as healthcare and medical data protection under HIPAA, protection of data surrounding video rentals under the Video Privacy Protection Act, and narrow protection for children's data under the Children's Online Privacy Protection Act. All of these schemes are outdated and fall short of meeting the immediate need for broad protection of the widely collected and broadly utilized data flowing from social media.

Current Bills on Privacy

Prompted by requests from some of the biggest platforms, outcry from the public, and the White House's call for federal privacy regulation, Congress appears poised to act. The 118th Congress has made privacy law a priority this term by introducing several bills related to social media privacy. There are at least ten bills currently pending between the House and the Senate addressing a variety of issues and concerns, from children's data privacy to the minimum age for use and the designation of a new agency to monitor some aspects of privacy.

S. 744 – The Data Care Act of 2023 aims to protect social media users' data privacy by imposing fiduciary duties on the platforms. The original iteration of the bill was introduced in 2021 and failed to receive a vote. It was reintroduced in March of 2023 and is currently pending. Under the act, social media platforms would have duties to reasonably secure users' data from unauthorized access, to refrain from using the data in a way that could foreseeably "benefit the online service provider to the detriment of the end user," and to prevent disclosure of users' data unless the receiving party is also bound by these duties. The bill authorizes the FTC and certain state officials to take enforcement actions upon breach of those duties. The states would be permitted to take their own legal action against companies for privacy violations, and the bill would also allow the FTC to intervene in enforcement efforts by imposing fines for violations.

H.R. 2701 – Perhaps the most comprehensive piece of legislation on the House floor is the Online Privacy Act. In 2023, the bill was reintroduced by Democratic Representative Anna Eshoo after an earlier version of the bill failed to receive a vote and died in Congress. The Online Privacy Act aims to protect users by providing individuals rights relating to the privacy of their personal information. The bill would also provide privacy and security requirements for the treatment of personal information. To accomplish this, the bill would establish a new agency – the Digital Privacy Agency – which would be responsible for enforcement of these rights and requirements. The new individual rights in privacy are broad and include the rights of access, correction, deletion, human review of automated decisions, individual autonomy, the right to be informed, and the right to impermanence, among others. This would be the most comprehensive plan to date. The establishment of a new agency tasked specifically with the administration and enforcement of privacy laws would be incredibly powerful, and the creation of such an agency would be valuable irrespective of whether this bill is passed.

H.R. 821 – The Social Media Child Protection Act is a sister bill to one by a similar name that originated in the Senate. This bill aims to protect children from the harms of social media by limiting children's access to it. Under the bill, social media platforms would be required to verify the age of every user before that user accesses the platform, either through submission of a valid identity document or through another reasonable verification method. A social media platform would be prohibited from allowing users under the age of 16 to access the platform. The bill also requires platforms to establish and maintain reasonable procedures to protect personal data collected from users. The bill provides for a private right of action as well as state and FTC enforcement.
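For the technically minded, here is a minimal sketch in Python of the kind of access-gating logic the bill, as summarized above, would seem to require: verified age, and no access under 16. The function name, the data shape, and the idea of representing an unverified user as `None` are my own illustrative assumptions, not anything drawn from the bill's text.

```python
# Hypothetical sketch of an age gate consistent with the bill as summarized above.
from typing import Optional

MINIMUM_AGE = 16  # per the bill as described above


def may_access_platform(verified_age: Optional[int]) -> bool:
    """Return True only if the user's age has been verified and meets the minimum."""
    if verified_age is None:        # age never verified via ID or another reasonable method
        return False
    return verified_age >= MINIMUM_AGE


print(may_access_platform(None))   # False: unverified users are blocked
print(may_access_platform(15))     # False: under the minimum age
print(may_access_platform(17))     # True
```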

S. 1291 – The Protecting Kids on Social Media Act is similar to its counterpart in the House, with slightly less tenacity. It similarly aims to protect children from social media's harms. Under the bill, platforms must verify their users' ages, must not allow a user to use the service unless their age has been verified, and must limit access to the platform for children under 12. The bill also prohibits retention and use of information collected during the age verification process. Platforms must take reasonable steps to require affirmative consent from the parent or guardian of a minor who is at least 13 years old for the creation of a minor's account, and must reasonably allow the parent to later revoke that consent. The bill also prohibits use of data collected from minors for algorithmic recommendations. The bill would require the Department of Commerce to establish a voluntary program for secure digital age verification for social media platforms. Enforcement would be through the FTC or state action.

S. 1409 – The Kids Online Safety Act, proposed by Senator Blumenthal of Connecticut, also aims to protect minors from online harms. This bill, like the Online Safety Bill, establishes fiduciary duties for social media platforms regarding children using their sites. The bill requires that platforms act in the best interest of minors using their services, including by mitigating harms that may arise from use, sweeping in online bullying and sexual exploitation. Social media sites would be required to establish and provide access to safeguards, such as settings that restrict access to minors' personal data, and to grant parents tools to supervise and monitor minors' use of the platforms. Critically, the bill establishes a duty for social media platforms to create and maintain research portals for non-commercial purposes, to study the effect that corporations like the platforms have on society.

Overall, these bills indicate Congress's creative thinking and commitment to broad privacy protection for users against social media harms. I believe the establishment of a separate governing body, apart from the FTC, which lacks the powers needed to compel compliance, is a necessary step. Recourse for violations on par with the EU's new regulatory scheme, namely fines in the billions, could also help.

Many of the bills, toward myriad aims, establish new fiduciary duties for the platforms in preventing unauthorized use of data and harms to children. There is real promise in this scheme: establishing duties of loyalty, diligence, and care owed by one party to another has a sound basis in many areas of law and would be more easily understood in implementation.

The notion that platforms would need to be vigilant in knowing their content, studying its effects, and reporting those effects may do the most to create a stable future for social media.

The legal responsibility of platforms to police and enforce their own policies and terms and conditions is another opportunity to further incentivize platforms. The FTC currently investigates policies that are misleading or unfair, sweeping in the social media sites, but there is an opportunity to make the platforms legally responsible for enforcing their own policies regarding, for example, age, hate speech, and inappropriate content.

What would you like to see considered in Privacy law innovation for social media regulation?

Social Media, Minors, and Algorithms, Oh My!

What is an algorithm and why does it matter?

Social media algorithms are intricately designed data organization systems aimed at maximizing user engagement by sorting and delivering content tailored to individual preferences. At their core, social media algorithms collect and subsequently use extensive user data, employing machine learning techniques to better understand and predict user behavior. Social media algorithms note and analyze hundreds of thousands of data points, including past interactions, likes, shares, content preferences, time spent viewing content, and social connections to curate a personalized feed for each user. Social media algorithms are designed this way to keep users on the site, thus giving the site more time to put advertisements on the user’s feed and drive more profits for the social media site in question. The fundamental objective of an algorithm is to capture and maintain user attention, expose the user to an optimal amount of advertisements, and use data from users to curate their feed to keep them engaged for longer.
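For readers who want a more concrete picture of what "sorting and delivering content tailored to individual preferences" can mean in practice, here is a minimal, hypothetical sketch in Python of an engagement-driven ranking step. The post fields, affinity scores, and scoring formula are all invented for illustration; real platform ranking systems are proprietary and enormously more complex.

```python
# Toy sketch of engagement-driven feed ranking; not any platform's actual algorithm.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    topic: str
    minutes_old: float


def engagement_score(post: Post, user_topic_affinity: dict[str, float]) -> float:
    """Score a post by how closely it matches the user's inferred interests,
    discounted by age so the feed stays fresh."""
    affinity = user_topic_affinity.get(post.topic, 0.0)   # learned from past likes, watch time, etc.
    recency = 1.0 / (1.0 + post.minutes_old / 60.0)       # newer posts rank higher
    return affinity * recency


def rank_feed(posts: list[Post], user_topic_affinity: dict[str, float]) -> list[Post]:
    return sorted(posts, key=lambda p: engagement_score(p, user_topic_affinity), reverse=True)


# Example: a user whose history signals a strong interest in "cooking"
feed = rank_feed(
    [Post("a", "cooking", 30), Post("b", "news", 5), Post("c", "cooking", 300)],
    {"cooking": 0.9, "news": 0.2},
)
print([p.post_id for p in feed])  # ['a', 'b', 'c']
```

The point is only that the feed order is a function of what the system has inferred about the user, not simply of when things were posted.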

Addiction comes in many forms

One key element contributing to the addictiveness of social media is the concept of variable rewards. Algorithms strategically present a mix of content, varying in type and engagement level, to keep users interested in their feed. This unpredictability taps into the psychological principle of operant conditioning, where intermittent reinforcement, such as receiving likes, comments, or discovering new content, reinforces habitual platform use. Every time a user sees an entertaining post or receives a positive notification, the brain releases dopamine, the main chemical associated with addiction and addictive behaviors. The constant stream of notifications and updates, fueled by algorithmic insights and carefully tailored content suggestions, can create a sense of anticipation in users for their next dopamine fix, which encourages users to frequently update and scan their feeds to receive the next ‘reward’ on their timeline. The algorithmic and numbers-driven emphasis on user engagement metrics, such as the number of likes, comments, and shares on a post, further intensifies the competitive and social nature of social media platforms, promoting frequent use.

Algorithms know you too well

Furthermore, algorithms continuously adapt to user behavior through real-time machine learning. As users engage with content, algorithms will analyze and refine their predictions, ensuring that the content remains compelling and relevant to the user over time. This iterative feedback loop further deepens the platform’s understanding of individual users, creating a specially curated and highly addictive feed that the user can always turn to for a boost of dopamine. This heightened social aspect, coupled with the algorithms’ ability to surface content that resonates deeply with the user, enhances the emotional connection users feel to the platform and their specific feed, which keeps users coming back time after time. Whether it be from seeing a new, dopamine-producing post, or posting a status that receives many likes and shares, every time one opens a social media app or website, it can produce seemingly endless new content, further reinforcing regular, and often unhealthy use.
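Continuing the toy example above, the "iterative feedback loop" can be pictured as a simple online update: each interaction nudges the system's estimate of the user's interests, which in turn changes what gets ranked next time. The learning rate and data structure here are again illustrative assumptions, not any platform's actual model.

```python
# Hypothetical sketch of the feedback loop described above: each interaction
# nudges the inferred topic affinities that the ranking step then uses.
def update_affinity(affinity: dict[str, float], topic: str,
                    engaged: bool, learning_rate: float = 0.1) -> None:
    """Move the affinity for `topic` toward 1.0 on engagement, toward 0.0 otherwise."""
    current = affinity.get(topic, 0.5)          # start from a neutral prior
    target = 1.0 if engaged else 0.0
    affinity[topic] = current + learning_rate * (target - current)


affinity = {"cooking": 0.5}
for topic, engaged in [("cooking", True), ("cooking", True), ("news", False)]:
    update_affinity(affinity, topic, engaged)

print(affinity)   # "cooking" drifts upward; "news" drifts downward
```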

A fine line to tread

As explained above, social media algorithms are key to user engagement. They are able to provide seemingly endless bouts of personalized content and maintain users’ undivided attention through their ability to understand the user and the user’s preferences in content. This pervasive influence extends to children, who are increasingly immersed in digital environments from an early age. Social media algorithms can offer constructive experiences for children by promoting educational content discovery, creativity, and social connectivity that would otherwise be impossible without a social media platform. Some platforms, like YouTube Kids, leverage algorithms to recommend age-appropriate content tailored to a child’s developmental stage. This personalized curation of interest-based content can enhance learning outcomes and produce a beneficial online experience for children. However, while being exposed to age-appropriate content may not harm the child viewers, it can still cause problems related to content addiction.

‘Protected Development’

Children are generally known to be naïve and impressionable, meaning full access to the internet can be harmful for their development, as they may take anything they see at face value. The American Psychological Association has said that, “[d]uring adolescent development, brain regions associated with the desire for attention, feedback, and reinforcement from peers become more sensitive. Meanwhile, the brain regions involved in self-control have not fully matured.” Social media algorithms play a pivotal role in shaping the content children can encounter by prioritizing engagement metrics such as likes, comments, and shares. In doing this, social media sites create an almost gamified experience that encourages frequent and prolonged use amongst children. Children also have a tendency to intensely fixate on certain activities, interests, or characters during their early development, further increasing the chances of being addicted to their feed.

Additionally, the addictive nature of social media algorithms poses significant risks to children’s physical and mental well-being. The constant stream of personalized content, notifications, and variable rewards can contribute to excessive screen time, impacting sleep patterns and physical health. Likewise, the competitive nature of engagement metrics may result in a sense of inadequacy or social pressure among young users, leading to issues such as cyberbullying, depression, low self-esteem, and anxiety.

Stop Addictive Feeds Exploitation (SAFE) for Kids

The New York legislature has spotted the anemic state of internet protection for children, identified the rising mental health issues relating to social media among youth, and announced its intention to pass laws to better protect kids online. The Stop Addictive Feeds Exploitation (SAFE) for Kids Act is aimed explicitly at social media companies and their feed-bolstering algorithms. The SAFE for Kids Act is intended to "protect the mental health of children from addictive feeds used by social media platforms, and from disrupted sleep due to night-time use of social media."

Section 1501 of the Act would essentially prohibit operators of social media sites from providing addictive, algorithm-based feeds to minors without first obtaining parental permission. Instead, the default feed would be a chronologically sorted main timeline, a format more common in the infancy of social media sites. Section 1502 of the Act would also require social media platforms to obtain parental consent before sending notifications between the hours of 12:00 AM and 6:00 AM, and it creates an avenue for opting out of access to the platform during those same hours. The Act would also provide a limit on the overall number of hours a minor can spend on a social media platform. Additionally, the Act would authorize the Office of the Attorney General to bring a legal action to enjoin violations or to seek damages or civil penalties of up to $5,000 per violation, and it would allow any parent or guardian of a covered minor to sue for damages of up to $5,000 per user per incident, or actual damages, whichever is greater.
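As a rough illustration of how a platform might operationalize the two core requirements described above (a chronological default feed for covered minors absent parental consent, and no overnight notifications between 12:00 AM and 6:00 AM absent consent), here is a hypothetical sketch in Python. The function and field names are mine, not language from the Act.

```python
# Minimal sketch of the SAFE for Kids Act requirements as summarized above.
from datetime import datetime


def feed_mode(is_minor: bool, parental_consent_for_algorithmic_feed: bool) -> str:
    """Default covered minors to a chronological feed unless a parent consents."""
    if is_minor and not parental_consent_for_algorithmic_feed:
        return "chronological"
    return "algorithmic"


def may_send_notification(is_minor: bool, parental_consent_for_overnight: bool,
                          now: datetime) -> bool:
    """Block overnight notifications (12:00 AM through 5:59 AM) to minors absent consent."""
    overnight = 0 <= now.hour < 6
    if is_minor and overnight and not parental_consent_for_overnight:
        return False
    return True


print(feed_mode(is_minor=True, parental_consent_for_algorithmic_feed=False))   # chronological
print(may_send_notification(True, False, datetime(2024, 1, 1, 2, 30)))          # False
```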

A sign of the times

The Act accurately represents the growing concerns of the public in its justification section, where it details many of the above-referenced problems with social media algorithms and the State's role in curtailing the well-known negative effects they can have on a protected class. The New York legislature has identified the problems that social media addiction can present and has taken necessary steps in an attempt to curtail them.

Social media algorithms will always play an integral role in shaping user experiences. However, their addictive nature should rightfully subject them to scrutiny, especially given their effects on children. While social media algorithms offer personalized content and can produce constructive experiences, their addictive nature poses significant risks, prompting legislative responses like the Stop Addictive Feeds Exploitation (SAFE) for Kids Act. Considering the profound impact of these algorithms on young users' physical and mental well-being, a critical question arises: how can we effectively balance the benefits of algorithm-driven engagement with the importance of protecting children from potential harm in the ever-evolving digital landscape? The SAFE for Kids Act is a step in the right direction, inspiring critical reflection on the broader responsibility of parents and regulatory bodies to cultivate a digital environment that nurtures healthy online experiences for the next generation.


Short & Not So Sweet

LIGHTS ON BUT NOBODY’S HOME

Short-form social media clips have worried and continue to worry parents worldwide because of the impact they may have on their children's brains, specifically on children's attention spans. TikTok, the prime example of short-form video content, had nearly 1.5 billion monthly active users in the third quarter of 2022. The reason behind this trend? Users on the platform desire a short and sweet approach to entertainment. Studies support the finding that TikTok has taken, and continues to take, over consumers' time through its short-clip format.

Research indicates that the average user in the United States spends roughly 45.8 minutes per day on TikTok, beating other social media video platforms such as Instagram, Facebook, and Twitter. Globally, viewing time is up to nearly 53 minutes per day. Additionally, expanding beyond just TikTok, these short, vertical videos achieve a much higher watch rate than longer, horizontal videos.

WRAP YOUR BRAIN AROUND THIS

Given that the brain's prefrontal cortex – the part that accounts for decision-making and impulse control – does not fully develop until age 25, children worldwide struggle to regulate their consumption of these shorter videos. The issue isn't present only in the United States; studies in countries like China reach similar findings. Researchers at China's Zhejiang University, studying Douyin, China's version of TikTok, found that the parts of the brain responsible for addiction were activated in student users, and that many had difficulty stopping watching these clips.

TikTok and other short-form videos on social media appeal directly to this generation's desire to avoid sustained attention. In turn, such an influence on children can impact their ability to function in the real world. Dr. Michael Manos of Cleveland Clinic Children's Center for Attention states, "If kids' brains become accustomed to constant changes, the brain finds it difficult to adapt to a non-digital activity where things don't move quite as fast." Although this is a relatively new area of scientific study, it has long been understood that the use of social media negatively impacts academic performance. The impact of social media has created what society recognizes as an attention deficit. The trend of short-form videos on social media and entertainment devices worldwide has impaired, and continues to impair, society's cognitive functions.

QUALITY OVER QUANTITY

Although many factors help explain why society has progressed to this current state, one that takes the sole blame away from the consumer is that consumer standards are high. Specifically, with the rise of these various new forms of social media, society has prioritized offering content that can appeal to different consumers and preferences. Consumers today have many more options in what they choose to watch.

Accordingly, consumers will not spend their attention on poor entertainment. Instead, viewers hold the leverage in a competitive market that pressures creators to develop ways to draw attention and obtain viewership through this source of entertainment. Researchers from the Technical University of Denmark found a substantial decrease in attention span due to the "increasing production and consumption of content."

TOO LONG IS WRONG

Why society has become fascinated with this new form of entertainment is simple: short films provide the same level of emotion within a much shorter period. The same sort of behavior is seen even outside of social media. Students who watch recorded lectures tend to adjust the playback speed to get through the material faster. Movies engineered for viewing in two hours stray from older films and directors who told their stories across five-hour runtimes. Even certain songs have seen a rapid drop in the number of lyrics to ensure that the content adjusts to shorter attention spans.

What is the reason behind this new obsession among the youth? Satisfaction. Studies show that almost 50% of users surveyed by TikTok said that videos lasting longer than a minute became "stressful." Such studies point to the painful truth that individuals' attention spans are minimal compared to life before social media and short media clips. Thus, creators of entertainment accommodate this ongoing concern not by attempting to provide a remedy but by adjusting to this current desire for short-form viewership.

Through the appeal to this newly recognized satisfaction, many creators of entertainment have further fueled this addiction rather than creating content to distract from the lure of short-form videos. Is this a wise business decision? Absolutely. The market of addicted children craves this type of entertainment. However, with any addiction, consequences don't linger too far behind.

ADDICTION TURNS INTO BRAINWASH

The question remains: what does this new obsession have to do with the law? It's not illegal to provide children with entertainment, however harmful the effect may be on generations of children. The problem is the allegation that apps such as TikTok make no effort to ensure the platform is safe for children and teens. Between the inability to monitor the content and the addictive nature discussed above, the outcome has proved catastrophic for youth across the globe. Not only is the mental capacity of children in jeopardy, but their physical well-being also suffers the consequences of the addictive nature of short-form videos.

The Social Media Victims Law Group has filed a lawsuit, Case Number 22STCV21355, Smith et al. v. TikTok Inc., on behalf of the parents of two young girls who died after attempting a challenge trending at the time in short clips on the platform. Specifically, the dangerous sensation is known as the "blackout challenge."

The blackout challenge involves using objects to strangle oneself to the point of losing consciousness. One of the victims, Lalani Erika Renee Walton, started watching TikTok at age 8. As mentioned above, the addictive nature of these short-clipped videos took control. Not long thereafter, Lalani became addicted to watching the videos and attempting to duplicate them. On July 15, 2021, after attempting to reproduce this trending challenge, Lalani died hanging from her bed with a rope around her neck.

Similarly, Arriani Jaileen Arroyo, a seven-year-old girl, downloaded TikTok not long after receiving her first phone. Within two years, she, too, became addicted to the frenzy of short-clip social media such as TikTok. Eventually, the trending short-clip video was this very same blackout challenge. On February 26, 2021, Arriani died hanging from her family's dog leash fastened to her bedroom door, all because of the opportunity to replicate videos that jeopardize society's mental and physical health.

WHY IS THIS TIKTOK’S PROBLEM?

The question then becomes: why is this horrific trend TikTok's problem? Section 230 of the Communications Decency Act dictates that responsibility does not fall on social media platforms for the content others have posted; instead, platforms are to moderate as they deem necessary and appropriate. Section 230 was intended to promote diversity of content and opportunities for cultural and intellectual development.

As casualties increase, many argue that dangerous challenges associated with and spread through the platform extend beyond the confines of Section 230's protections. The "blackout challenge" is only one of many examples of harmful content that has been spread and mimicked by others. Others include the Benadryl challenge (hallucinogenic effects) and the salt and ice challenge (chemical burns on one's skin).

Although Section 230 has protected and continues to protect individuals and platforms from suffering the consequences of others' conduct, its protection is not absolute. The Act's protections do not extend to companies that create illegal or harmful content. And although TikTok may not have made the content, this does not end the discussion of its exposure to liability despite Section 230.

In these ongoing lawsuits, it is not the users' actions that are in question, but TikTok's own. In addition to offering no protection for creating harmful content, Section 230 is of no avail as a defense to a failure to warn users. The lawsuit emphasizes TikTok's failure to warn parents and users of foreseeable risks in connection with the product. Specifically, no ordinary and reasonable individual would presume that this type of entertainment device, which directs itself to teens and young children, poses these dangers, including the effectiveness of its addictive qualities and its ability to lead to a surplus of screen time.

There are also arguments that the design of TikTok's platform itself is flawed. The design defects alleged include the creation of an addictive product and the failure to verify the ages and identities of minor users. Under the Children's Online Privacy Protection Act, collecting personal data from children under age 13 without parental consent violates the statute.

To circumvent the legal repercussions of its users' actions, TikTok has attempted to improve its safety and warning features to provide a greater understanding of the content shared on its platform. Specifically, TikTok has altered safety features and even offered ways for parents to monitor their children's use. TikTok's Family Pairing feature links a parent's account with their child's and allows the parent to dictate how much time the child can spend on the application. Furthermore, TikTok provides a specific area for children 13 and under, which primarily shows child-safe content.

A VIDEO A DAY KEEPS THE HARM IN PLAY: 

Everything ties back to the appeal that led to TikTok's addictive nature in the first place: the appeal of its short-clipped entertainment. Such addiction has impacted children's cognitive ability and even caused the loss of lives in the process. The extent of this concern stretches far beyond the impact on children's brain capacity; this trend has led children to take this addictive behavior to a whole new extreme through mimicry.

Such conduct cannot continue progressing in the direction it has thus far. The modifications to the platform, although an attempt at betterment, have failed to suffice and prevent this irreparable damage. This area of concern needs to be addressed before society loses not only its ability to think but its ability to act as well. How many lives must be lost before TikTok takes affirmative action?

New York is Protecting Your Privacy

TAKING A STANCE ON DATA PRIVACY LAW

The digital age has brought with it unprecedented complexity surrounding personal data and the need for comprehensive data legislation. Recognizing this gap in legislative protection, New York has introduced the New York Privacy Act (NYPA), and the Stop Addictive Feeds Exploitation (SAFE) For Kids Act, two comprehensive initiatives designed to better document and safeguard personal data from the consumer side of data collection transactions. New York is taking a stand to protect consumers and children from the harms of data harvesting.

Currently under consideration in the Standing Committee on Consumer Affairs and Protection, chaired by Assemblywoman Nily Rozic, the New York Privacy Act was introduced as "An Act to amend the general business law, in relation to the management and oversight of personal data." The NYPA was sponsored by State Senator Kevin Thomas and closely resembles the California Consumer Privacy Act (CCPA), which was finalized in 2019. In passing the NYPA, New York would become just the 12th state to adopt a comprehensive data privacy law protecting state residents.

DOING IT FOR THE DOLLAR

Companies buy and sell the sensitive personal data of millions of users in pursuit of boosting profits. By purchasing personal user data from social media sites, web browsers, and other applications, advertising companies can predict and drive trends that will increase product sales among different target groups.

Social media companies are notorious for selling user data to data collection companies: data such as your name, phone number, payment information, email address, stored videos and photos, photo and file metadata, IP address, networks and connections, messages, videos watched, advertisement interactions, and sensor data, as well as the time, frequency, and duration of your activity on the site. The NYPA targets businesses like these by regulating legal persons that conduct business in the state of New York or that produce products and services aimed at residents of New York. The entity that stands to be regulated must meet at least one of the following criteria (a rough eligibility check is sketched in code after the list):

  • (a) have annual gross revenue of twenty-five million dollars or more;
  • (b) control or process personal data of fifty thousand consumers or more;
  • or (c) derive over fifty percent of gross revenue from the sale of personal data.
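A business wondering whether it falls within that scope is essentially running a three-pronged disjunctive test. Here is a minimal sketch, assuming the thresholds exactly as listed above; the variable names and structure are illustrative, not statutory language.

```python
# Hypothetical applicability check based on the three NYPA thresholds listed above.
def nypa_applies(annual_gross_revenue: float,
                 consumers_whose_data_is_processed: int,
                 revenue_share_from_data_sales: float) -> bool:
    return (
        annual_gross_revenue >= 25_000_000            # prong (a)
        or consumers_whose_data_is_processed >= 50_000  # prong (b)
        or revenue_share_from_data_sales > 0.50         # prong (c)
    )


# Example: a small ad-tech firm earning 80% of its revenue from data sales
print(nypa_applies(3_000_000, 10_000, 0.80))   # True, via the revenue-share prong
```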

The NYPA does more for residents of New York because it places the consumer first: the Act is not restricted to regulating businesses operating within New York but reaches conduct affecting every resident of New York State who may be subject to targeted data collection, an immense step forward in giving consumers control over their digital footprint.

MORE RIGHTS, LESS FRIGHT

The NYPA works by granting all New Yorkers additional rights regarding how their data is maintained by controllers to which the Act applies. The comprehensive rights granted to New York consumers include the rights to notice, opt out, consent, portability, correction, and deletion of personal information. The right to notice requires that each controller provide a conspicuous and readily available notice statement describing the consumer's rights, indicating the categories of personal data the controller will be collecting, where it is collected from, and what it may be used for. The right to opt out allows consumers to opt out of the processing of their personal data for the purposes of targeted advertising, the sale of their personal data, and profiling. This gives the consumer an advantage when browsing sites and using apps, as they will be duly informed of exactly what information they are giving up when online.

While all the rights included in the NYPA are groundbreaking for the New York consumer, the right to consent to sensitive data collection and the right to delete data cannot be understated. The right to consent requires controllers to conspicuously ask for express consent to collect sensitive personal data. It also contains a zero-discrimination clause to protect consumers who do not give controllers express consent to use their personal data. The right to delete requires controllers to delete any or all of a consumer's personal data upon request, demanding that controllers delete said data within 45 days of receiving the request. These two clauses alone can do more for New Yorkers' digital privacy rights than ever before, allowing complete control over who may access and keep sensitive personal data.
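For illustration only, the 45-day deletion window described above amounts to simple deadline arithmetic on the controller's side. The sketch below assumes a hypothetical request-tracking setup and is not drawn from the bill's text.

```python
# Toy sketch of tracking the 45-day deletion deadline described above.
from datetime import date, timedelta

DELETION_WINDOW = timedelta(days=45)


def deletion_deadline(request_received: date) -> date:
    return request_received + DELETION_WINDOW


def is_overdue(request_received: date, today: date) -> bool:
    return today > deletion_deadline(request_received)


received = date(2024, 3, 1)
print(deletion_deadline(received))             # 2024-04-15
print(is_overdue(received, date(2024, 5, 1)))  # True: past the 45-day window
```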

BUILDING A SAFER FUTURE

Following the early momentum of the NYPA, New York announced its comprehensive plan to better protect children from the harms of social media algorithms, which are some of the main drivers of personal data collection. Governor Kathy Hochul, State Senator Andrew Gounardes, and Assemblywoman Nily Rozic recently proposed the Stop Addictive Feeds Exploitation (SAFE) For Kids Act, directly targeting social media sites and their algorithms. It has long been suspected that social media usage contributes to worsening mental health conditions in the United States, especially among youths. The SAFE For Kids Act seeks to require parental consent before children can access social media feeds that use algorithms to boost usage.

On top of selling user data, social media sites like Facebook, YouTube, and X/Twitter also use carefully constructed algorithms to push content that the user has expressed interest in, usually based on the profiles they click on or the posts they ‘like’. Social media sites feed user data to algorithms they’ve designed to promote content that will keep the user engaged for longer, which exposes the user to more advertisements and produces more revenue.

Children, however, are particularly susceptible to these algorithms, and depending on the posts they view, they can be exposed to harmful images or content with serious consequences for their mental health. Social media algorithms can show children things they are not meant to see, and children's naiveté and blind trust are traits not exactly compatible with safe internet use. Distressing posts or controversial images could be plastered across children's feeds if the algorithm determines it would drive better engagement by putting them there. Under the SAFE For Kids Act, without parental consent, children on social media sites would see their feed in chronological order and would only see posts from users they 'follow' on the platform. This change would completely alter the way platforms treat accounts associated with children, ensuring they are not exposed to content they don't seek out themselves. This legislation would build upon the foundations established by the NYPA, opening the door to further regulations that could increase protections for the average consumer and, more importantly, for the average child online.

New Yorkers: If you have ever spent time on the internet, your personal data is out there, but now you have the power to protect it.

When in Doubt, DISCLOSE it Out!

The sweeping transformation of social media platforms over the past several years has given rise to convenient and cost-effective advertising. Advertisers are now able to market their products or services to consumers (i.e., users) at low cost, right at their fingertips…literally! But convenience comes with a few simple and easy rules. Influencers, such as athletes, celebrities, and other high-profile individuals, are trusted by their followers to remain transparent. Doing so does not require anything difficult; in fact, including "Ad" or "#Ad" at the beginning of a post is satisfactory. The question then becomes: who's making these rules?

The Federal Trade Commission (FTC) works to stop deceptive or misleading advertising and provides guidance on how to avoid it. Under FTC guidance, individuals have a legal obligation to clearly and conspicuously disclose their material connection to the products, services, brands, and/or companies they promote on their feeds. The FTC highlights one objective component to help users identify an endorsement: a statement made by a speaker whose relationship with the advertiser is such that the statement can be understood to be sponsored by the advertiser. In other words, if the speaker is acting on behalf of the advertiser, then that statement will be treated as an endorsement and subject to the guidelines. Several factors determine this, such as compensation, free products, and the terms of any agreement. Two basic principles of advertising law apply to all types of advertising in any medium: 1) a reasonable basis to substantiate claims and 2) clear and conspicuous disclosure. Overall, the FTC works to ensure transparent sponsorship in an effort to maintain consumer trust.
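As a toy illustration of what "clear and conspicuous" can look like in practice, the sketch below checks whether a disclosure marker such as "#Ad" appears in the portion of a caption a viewer sees without expanding the post. The marker list, the visible-character cutoff, and the naive substring check are all my own assumptions, not the FTC's test.

```python
# Toy disclosure heuristic; an illustration, not the FTC's legal standard.
CLEAR_MARKERS = ("ad", "#ad", "sponsored", "paid partnership")


def has_conspicuous_disclosure(post_text: str, visible_chars: int = 125) -> bool:
    """Check whether a disclosure marker appears in the part of the caption a
    viewer sees without tapping "show more" (the cutoff is an assumption).
    The substring check is deliberately naive for brevity."""
    visible = post_text[:visible_chars].lower()
    return any(marker in visible for marker in CLEAR_MARKERS)


print(has_conspicuous_disclosure("#Ad Loving my new blender from BrandX!"))      # True
print(has_conspicuous_disclosure("Loving my new blender!" + " " * 200 + "#ad"))  # False: buried too deep
```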

The Breakdown—When, How, & What Else

Influencers should disclose when they have a financial, employment, personal, or family relationship with a brand. Financial relationships are not limited to money. If, for example, a brand gives you a free product, disclosure is required even if you were not asked to mention the product in a post. Similarly, if a user posts from abroad, U.S. law still applies if it is reasonably foreseeable that U.S. consumers will be affected.

When disclosing your material connection to the brand, make sure that disclosure is easy to see and understand. The FTC has previously disapproved of disclosure in places that are remote from the post itself. For instance, users should not have to press “show more” in the comments section to see that the post is actually an endorsement.

Another important consideration for advertisers and endorsers is making sure not to talk about items they have not yet tried. They should also avoid saying that a product was great when they in fact thought it was not. In addition, individuals should not convey information or make claims that are unsupported by actual evidence.

However, not everyone who posts about a brand needs to disclose. If you want to post a Sephora haul or a Crumbl Cookie review, that is okay! As long as a company is not giving you products for free or paying you to sponsor them, individuals are free to post at their leisure, without disclosing.

Now that you realize how seamless disclosure is, it may be surprising that people still fail to do so.

Rule Breakers

In spring 2020 we saw an uptick in social media posts, as most people abided by stay-at-home orders and turned to social media for entertainment. TikTok is deemed particularly addictive, with users spending substantially more time on it than on other apps, such as Instagram and Twitter.

TikTok star Charli D'Amelio spoke positively about the enhancement drink Muse in a Q&A post. She never acknowledged that the brand was paying her to sponsor the product, and she failed to use the platform's content-enabling tool, which makes it even easier for users to disclose. D'Amelio is the second most-followed account on the platform.

The Teami brand found itself in a similar position when stars like Cardi B and Brittany Renner made unfounded claims that the wellness company's products produced unrealistic health benefits. The FTC instituted a complaint alleging that the company misled consumers into thinking that its 30-day detox pack would ensure weight loss. A subsequent court order prohibited the company from making such unsubstantiated claims.

Still, these influencers were hardly punished; they received a mere 'slap on the wrist' for making inadequate disclosures. They were ultimately sent warning letters and received some bad press.

Challenges in Regulation & Recourse

Section 5(a) of the FTC Act is the statute that allows the agency to investigate and prevent unfair methods of competition and unfair or deceptive acts or practices. It is what gives the agency the authority to seek relief for consumers, including injunctions, restitution, and, in some cases, civil penalties. However, regulation is challenging because noncompliance is so easy. While endorsers have the ultimate responsibility to disclose their sponsored content, advertising companies are urged to implement procedures that make disclosure more likely. There is a never-ending amount of content on social media to regulate, making it difficult for entities like the FTC to know when the rules are actually being broken.

Users can report undisclosed posts directly through their social media accounts, to their state attorney general's office, or to the FTC. Private parties can also bring suit. In 2022, a travel agency group sued a travel influencer for deceptive advertising. The influencer made false claims, such as being the first woman to travel to every country, and failed to disclose paid promotions on her Instagram and TikTok accounts. The group seeks to enjoin the influencer from advertising without disclosing and to require corrective measures on her remaining posts that violate the FTC's rules. Social media users are better able to weigh the value of endorsements when they can see the truth behind such posts.

In a world filled with filters, when it comes to advertisements on social media, let’s just keep it real.

Destroying Defamation

The explosion of Fake News spread across social media sites is destroying a plaintiff's ability to succeed in a defamation action. The recent proliferation of rushed journalism, online conspiracy theories, and the belief that most stories are, in fact, "Fake News" has created a desert of veracity. Widespread public skepticism about even the most mainstream social media reporting means plaintiffs struggle to convince jurors that third parties believed any reported statement to be true. Such proof is necessary for a plaintiff to prove the elements of defamation.

Fake News Today

Fake News is any journalistic story that knowingly and intentionally includes untrue factual statements. Today, many speak of Fake News as a noun. There is no shortage of examples of Fake News and its impact.

      • Pizzagate: During the 2016 Presidential Election, Edgar Madison Welch, 28, read a story on (then) Facebook that Hillary Clinton was running a child trafficking ring out of the basement of a pizzeria. Welch, a self-described vigilante, shot open a locked door of the pizzeria with his AR-15.
      • A study by three MIT scholars found that false news stories spread faster on Twitter than true stories, with the former being 70% more likely to be retweeted than the latter.
      • During the defamation trial of Amber Heard and Johnny Depp, a considerable number of "Fake News" reports circulated across social media platforms, particularly TikTok, Twitter, and YouTube, attacking Ms. Heard at a disproportionately higher rate than Mr. Depp.


What is Defamation?

To establish defamation, a plaintiff must show the defendant published a false assertion of fact that damages the plaintiff’s reputation. Hyperbolic language or other indications that a statement was not meant to be taken seriously are not actionable. Today’s understanding that everything on the Internet is susceptible to manipulation destroys defamation.

Because the factuality of a statement is a question of law, a plaintiff must first convince a judge that the offending statement is fact and not opinion. Courts often find that Internet and social media statements are hyperbole or opinion. If a plaintiff succeeds in persuading the judge, then the issue of whether the statement defamed the plaintiff heads to the jury. A jury faced with a defamation claim must determine whether the statement of fact harmed the plaintiff's reputation or livelihood to the extent that it caused the plaintiff to incur damages. The prevalence of Fake News creates another layer of difficulty for the Internet plaintiff, who must convince the jury that third parties took the statement to be true.

Defamation’s Slow and Steady Erosion

Since the 1960s, the judiciary has limited plaintiffs' ability to succeed in defamation claims. The decisions in New York Times v. Sullivan and Gertz increased the difficulty for public figures, and those with limited public figure status, to succeed by requiring them to prove actual malice on the part of a defendant, a standard higher than the mere negligence standard applied to individuals who are not of community interest.

The rise of Internet use, mainly social media, presents plaintiffs with yet another hurdle. Plaintiffs can only succeed if the challenged statement is fact, not opinion. However, judges often find that statements made on the Internet are opinions, not facts. The combined effect of Supreme Court limitations on proof and the increased belief that social media posts are mostly opinions has limited the plaintiff's ability to succeed in a defamation claim.

Destroying Defamation

If the Supreme Court and social media have eroded defamation, Fake News has destroyed it. Today, convincing a jury that a false statement purporting to be fact has defamed a plaintiff is difficult, given the dual problems of society's pervasive mistrust of the media and the understanding that information on the Internet is generally opinion, not fact. Fake News sows confusion and makes it almost impossible for jurors to believe any statement has the credibility necessary to cause harm.

To be clear, in some instances Fake News is so intolerable that a jury will find for the plaintiffs. A Connecticut jury found conspiracy theorist Alex Jones liable for defamation based on his assertion that the government had faked the Sandy Hook shootings.

But often, plaintiffs are unsuccessful where the challenged language is conflated with untruths. Fox News successfully defended itself against a lawsuit claiming that it had aired false and deceptive content about the coronavirus, even though its reporting was, in fact, untrue.

Similarly, a federal judge dismissed a defamation case against Fox News for Tucker Carlson's report that the plaintiff had extorted then-President Donald Trump. In reaching this conclusion, the judge observed that Carlson's comments were rhetorical hyperbole and that the reasonable viewer "arrive[s] with the appropriate amount of skepticism." Reports of media success in defending against defamation claims further fuel media mistrust.

The current polarization caused by identity politics is furthering the tendency for Americans to mistrust the media. Sarah Palin announced that the goal of her recent defamation case against The New York Times was to reveal that the “lamestream media” publishes “fake news.”

If jurors believe that no reasonable person could credit a challenged statement as accurate, they cannot find that the statement the plaintiff asserts is defamatory caused harm. An essential element of defamation is that the defendant's remarks damaged the plaintiff's reputation. The large number of people who believe the news is fake, the media's rush to publish, and external attacks on credible journalism have together problematized truth in society. The potential for defamatory harm is minimal when every news story is questionable. Ultimately, the presence of Fake News is a blight on the tort of defamation and, like the credibility of present-day news organizations, will erode it to the point of irrelevance.

Is there any hope for a world without Fake News?


MODERN WARFARE OF COMMUNICATION

As an individual who has been around, and is currently living with, a call-of-dutier (i.e., one who plays Call of Duty), if I heard @yoUrD0gzM1n3 scream "S*** THAT YOU F**** N****" over the microphone, I wouldn't flinch. The vulgar and violent communication between video game players is not only normalized in today's society but also persistently evades regulation. The video game industry, once thought of as an innocent pastime or hobby, has since developed into a weapon for communication, and from the looks of it, it was my responsibility to hold @yoUrD0gzM1n3 accountable for his discriminatory remarks.

THE VIRTUAL SNIPER

The video game industry began on a somewhat yellow brick road. Beginning in the '70s and well into the '80s, video game launches such as Space Invaders, Pac-Man, Donkey Kong, and Flight marked the start of a new era: the gamer life.

But as video games began to increase in popularity, and new technology began to emerge, the video game industry, too, was greeted with a dark upgrade. The creation, mass production, and incorporation of computers, consoles, and PC monitors into our day-to-day lives created a new opportunity for video game developers. Slowly, the first-person shooter (FPS), a genre of video games played from the point of view of a protagonist carrying a weapon, began to emerge.

Today's FPS games are a general ode to one of the first series that pioneered the gaming industry down this gruesome path: Doom. The Doom franchise, developed by id Software, is a series that includes various FPS video games. The Doom series was among the first to bring "3D graphics, third-dimension spatiality, networked multiplayer gameplay, and support for player-created modifications" to the gaming industry at the time.
Since then, the FPS genre has grown quickly, and more games and players have emerged, creating a gaming culture obsessed with virtually simulated violence. Today, certain games are so intertwined with violence and gore that their reputations have been built entirely upon these society-created gamer pillars. Major gaming franchises such as League of Legends, Call of Duty, Counter-Strike, Dota 2, Overwatch, Ark, and Valorant have caused uproar since their creation due to the gross level of violence and harm that is depicted and encouraged, and due to the emergence of hostile communities that support them, both through action and speech.

BEFORE I SHOOT, LET ME SAY SOMETHING REAL QUICK

The FPS and violent video game genre, however, would not have grown as it did if not for the creation of Voice over Internet Protocol ("VoIP"). VoIP enables "voice chat" and other communicative functions that allow players to interact with one another while playing. The infamous launch of Xbox Live on the original Xbox in 2002 marked a groundbreaking milestone in gaming history, allowing gamers to "chat with both friends and strangers, in and out of games, across multiple games," all from the comfort of their couch.
But with the creation of VoIP and its mass incorporation into consoles and games came certain issues, as with all online communication tools. Communicative tools in the gaming industry allowed users not only to communicate, but also to harass one another. Today, essentially all video games include some sort of communication tool: users can usually chat, talk, or communicate with symbols or gestures with other users while playing. Video games, however, are still socially considered games. They are purchased in the video game aisle of Target or Best Buy, or in a video game store like GameStop.

Yet despite being games, they are considered more akin to Shakespeare's Romeo and Juliet, and are constitutionally protected as such.

@SCOTUS on ‘Live’

The turning point of whether video games deserved constitutional protection ultimately rested on whether video games were to be considered more like mechanical entertainment devices or, rather, mediums of expression.

The Supreme Court ultimately put the nail in the coffin, and went with the latter.

In Brown v. Entertainment Merchants Ass'n (2011), video games first received constitutional protection. In Brown, the Court invalidated a California law that prohibited the sale or rental of violent video games to minors. The Court stated:

Like the protected books, plays, and movies that preceded them, video games communicate ideas — and even social messages — through many familiar literary devices … and through features distinctive to the medium. That suffices to confer First Amendment protection.

But what if the Supreme Court had gone the other way? Had the Court viewed video games more like "pinball machines," today's video game world and culture would be unrecognizable. Before Brown, courts generally viewed video games as lacking the communicative, informative element required for free speech protection to kick in. Video games were seen more as "mechanical entertainment devices" and "recreational pastimes" than as tools to spread knowledge or information.

BLIND PROTECTION

Objectively, video games consist of rules that are essentially the same as those of non-online games and pastimes such as chess, baseball, and poker. The key distinction, however, is the iron-shield First Amendment protection that has permitted, and continues to permit, verbal racial discrimination and harassment to grow.

Although video games as a product are a form of expression, they include certain communicative elements that should not necessarily be protected as such. What about the expression within the game, between players? Should @PIgSl@y3r's chat to @B100DpR1NC3$$ stating "f*** y**, your mom is a b****" be considered 'the spreading of knowledge or information'?

Brown protects video games as a whole, but fails to address or even allude to the harmful effects of toxic communications between players made possible by the communicative tools within video games, particularly violent ones. Whether communications between players are monitored is therefore left entirely to the discretion of game developers and software creators. In practice, the burden of policing violent and toxic communication, and its impact on society, falls on the gamers themselves. It's now up to @B100DpR1NC3$$ to bring justice for @PIgSl@y3r's potty mouth by remembering to submit a complaint after finishing slaying the dragon, never knowing whether the user was ever banned.

JUSTICE IN THE HANDS OF @B100DpR1NC3$$

To combat overall video game toxicity (generally encompassing all in-game and game-related harassment, hate speech, discrimination, bullying, sexualization, incitement of violence, and like conduct), developers have met calls for a solution with mediocre monitoring and reporting systems. Creators across the gaming industry largely rely on in-game player reporting systems and artificial intelligence-backed automated filtering systems to detect abusive players. Community standards and guidelines are posted and updated, gamer-submitted reports are reviewed, and the automated systems continue to filter. Developers have also had to curtail their video games overseas to abide by international censorship guidelines. In 2009, for example, Russia took issue with the terroristic portrayal of Russians in Activision's Call of Duty: Modern Warfare 2, forcing Activision to make edits in certain versions of the game and banning the console version outright.

Censorship policies, however, are ultimately upheld by users and players themselves. To monitor speech, developers have created a variety of reporting mechanisms through which users can report other users for harassment, discrimination, and other forms of harmful speech. Players not only have the responsibility of beating the next level and unlocking the next perk; in order to play the game, Activision says, they must help out, too.

A CLOSER LOOK: ACTIVISION BLIZZARD

Activision, the first third-party game developer (developing software only, not physical consoles), emerged in 1979 and has since become a core presence in the gaming realm; today it operates as Activision Blizzard, Inc. Its world-renowned games include Candy Crush, the Call of Duty series, and World of Warcraft (oh my!). But along with the developer's positive impact on the industry came the bad. The Call of Duty series, the most violent of them all, has been notoriously scrutinized for its incessant depiction of violence and racism, as well as for vulgar, hostile gamer-to-gamer communication. Compare Atari's 1980 Battlezone with Activision's Call of Duty series: the deadliness and gore depicted in-game has greatly expanded, as has the harassment, hate speech, violence, and discrimination both portrayed and encouraged.

Most recently, Activision updated its Code of Conduct for the Call of Duty series, outlining its efforts in "combat[ting] toxic behavior." Before the latest release in the series, Call of Duty: Warzone 2.0, Activision publicly reiterated its commitment to "delivering a positive gameplay experience." The three key elements of the new code are: treat everyone with respect, compete with integrity, and stay vigilant.

The developer introduced "automated filtering systems" that monitor and review both text chat and account names, and announced that, as a result, 500,000 accounts have been banned and 300,000 more have been renamed. The Call of Duty team stated that the filtering systems produced "more than [a] 55% drop in the number of offensive username and clan tags reports from our players, year-over-year, in the month of August alone in Call of Duty: Warzone."
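Activision has not published how its filters actually work. As a rough, hypothetical sketch of the general technique only (a normalization step plus a pattern blocklist, with placeholder patterns and function names invented here rather than anything Activision uses), an automated username and chat filter might look something like this:

```python
import re

# Hypothetical blocklist -- a real system would use a far larger, regularly
# updated list (plus trained classifiers), not two placeholder patterns.
BLOCKED_PATTERNS = [
    re.compile(r"exampleslur"),
    re.compile(r"examplethreat"),
]

# Reverse common character substitutions used to dodge filters (0->o, 1->i, ...).
LEET_MAP = str.maketrans("013$@", "oiesa")

def normalize(text: str) -> str:
    """Lowercase the text and undo simple leetspeak substitutions."""
    return text.lower().translate(LEET_MAP)

def is_offensive(text: str) -> bool:
    """Return True if any blocked pattern appears in the normalized text."""
    cleaned = normalize(text)
    return any(pattern.search(cleaned) for pattern in BLOCKED_PATTERNS)

def screen_username(name: str) -> str:
    """Flag a username for forced renaming if it trips the filter."""
    return "[rename required]" if is_offensive(name) else name

def screen_chat_message(message: str) -> str:
    """Suppress a chat message that trips the filter instead of delivering it."""
    return "[message removed]" if is_offensive(message) else message

# e.g. screen_username("xX_Ex@mpleSlur_Xx") -> "[rename required]"
```

The normalize-then-match step is what lets filters of this kind catch obfuscated names and clan tags at scale; real deployments also have to manage false positives, which is one reason human review of player reports remains part of the pipeline.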

The anti-toxicity upgrade includes new in-game reporting features, including an optional "dialog box" that allows players to provide more detail about the situation, as well as additional tools to help report offensive or inappropriate behavior. Players found by the moderation team to have engaged in offensive voice chat are also muted from all in-game voice chat. Activision explained:

“We know addressing toxicity requires a 24/7 sustained effort. Since our last Call of Duty® community update, our enforcement and anti-toxicity teams have continued to progress, including scrubbing our global player database to remove toxic users.”

@B100DpR1NC3$$ does it all: virtually beheading dragons and monitoring speech.

Although Activision's efforts to reduce overall gamer toxicity have been seemingly successful, the true credit should go to the players who reported misusers. Activision has repeatedly credited its so-called "enforcement and anti-toxicity teams," glossing over the fact that these teams aren't exactly team players. The teams instead rely on the legwork of in-game reports from actively playing gamers, and on artificial intelligence. As it turns out, the anti-toxicity team doesn't even play the game. The team, as Activision acknowledges, merely reviews reports that players have already submitted.

Rather than dropping in on live games to monitor voice and chat functions as they are being used, the developer requires players themselves to do the monitoring for it. Only once players take the time to independently submit a report will the "enforcement and anti-toxicity teams" review it. Effectively, instead of a true monitoring system, inappropriate and non-conforming gamers will only ever be banned if someone else cares enough to report them.

LOSING THE MEANING OF VIDEO 'GAME'

So, back to video games being protected as a medium of expression because they communicate ideas and social messages. How can video games be considered a "medium of expression" that "communicate[s] ideas," yet evade any real monitoring of the expression within them? If video games belong in the same boat as the "books, plays, and movies" deserving of free speech protection, then it's time for developers and software companies to do the legwork.

Communications between video game players are protected by the First Amendment, just like posts by users on Facebook. Yet society does not think of video games as a 'way to communicate with someone,' the way Facebook is, but rather as games played for entertainment. The reality, however, is that video games are no longer merely games. With the rise of technology and the incorporation of communication tools, video games are now a platform for toxic communication. Developers lack pressure or incentive to actually monitor what players say to one another, and they evade further attention by publishing standards and mediocre efforts. Although Activision states that the new system "allows our moderation teams to restrict player features in response to confirmed player reports," it's up to players to start the process by taking the time to report in the first place. Only after a report is confirmed will the anti-toxicity team get on its feet.

I Get High With a Little Help From My (Online) Friends: The Role of Social Media in the Marketing of Illegal and Gray-Market Drugs

Opening

For better or worse, social media has changed our society forever. We all see and experience its impact in our daily lives, no matter the national, cultural, or social context. Nowhere is this more true than in the realm of commerce. Social media has proven to be an incredibly effective tool for the creation and maintenance of business, arguably making massive inroads into the world of marketing and sales. Above all, those with even a drop of entrepreneurial spirit no longer need to rely solely upon external investment and institutional gatekeepers to get their product out to the masses; they only need an internet connection, a device, and a willingness to build a social media presence. However, social media marketing has also enabled the growth of unethical and illegal business, including the world of illicit and gray-market drug sales.

After 40 years of the War on Drugs, many experts, commentators, and members of law enforcement argue that the illegal drug trade is alive and well. We need only to look at the evidence: drug overdoses in the US are rising, organized crime has more power than ever, and transnational shipments are becoming more common. Furthermore, the drugs being traded are becoming stronger and more dangerous. Many countries are therefore forced to search for alternative legal solutions to this crisis. For example, a growing number of jurisdictions, domestically and internationally, are (rightfully) decriminalizing or even legalizing the production and sale of cannabis. Some are pursuing the decriminalization of possession for all drugs in an effort to combat the resulting health, economic, and social equity crises from criminalization policies.

Regardless of what you think of these various policies, we cannot ignore how social media has impacted and accelerated the sale of illegal and gray-market drugs. Therefore, it behooves us to understand how dealers and companies are marketing on social media, what law is relevant in the US, and what social media companies and policy makers are doing to deal with these challenges if we are to even begin to search for solutions for this complex problem.

Examples of Marketing

The most common form of illegal drug marketing on social media relies on the timed "stories" functions of major image-oriented sites (e.g., Snapchat, Instagram) or on quickly posting and manually removing the advertisement (e.g., Facebook, Twitter). Essentially, the dealer posts the advertisement, sometimes showing off the product, and lists other relevant information. Once the time period expires, the post is removed and dealers feel as if they have protected their anonymity. On top of this, dealers may use emojis, other text symbols, or slang as a code to communicate the nature or type of product. These methods are used for all kinds of illegal drugs, from fentanyl to MDMA to cocaine. Customers then usually reach out to the dealer directly. Some use the direct messaging systems of the relevant social media services. Others reach out to the dealer on a wide range of messaging applications, especially those that market privacy and security (WhatsApp, Signal, Telegram, etc.).

For gray-market drug sales, we must turn to the major example of THC isomer products. Δ-9-Tetrahydrocannabinol (THC) is the main psychoactive substance found in cannabis. It is a Schedule I controlled substance under US federal law and is banned in about half of the states. However, creative chemists and growers in cannabis-legal states have engineered a wide range of alternative isomer products, meaning products that are chemically different from the traditional THC understood by the law. While many of these isomers occur naturally, their deliberate concentration creates essentially the same desired effect for users as traditional cannabis. Due to legal confusion and inaction, and alongside real-world advertising and product availability, social media companies have shown that they are quite comfortable running advertisements for such products from formal companies and letting individuals post about them.

The Law

When it comes to illegal drug dealing, the law is, as one would hope, unfavorable toward the social media companies. Most importantly, Section 230 of the Communications Decency Act (Title V of the Telecommunications Act of 1996) does not help them at all. Specifically, 47 USC §§ 230(e)(1) and (3) make clear that federal criminal law, and state laws consistent with Section 230, remain fully enforceable against them. Therefore, the § 230(c) Good Samaritan provisions, which shield social media companies from liability for their users' posts and for good-faith moderation decisions, are not relevant.

The main law of concern for these companies is the Controlled Substances Act of 1970 which, along with various amendments and international treaties, regulates the production and sale of illegal drugs at the federal level. The most relevant part is 21 USC § 843(c), which makes it illegal for anyone to advertise illegal drugs, including on the internet. While the liability balance between the user and the social media service is unclear in both statutory and case law, the lack of Section 230 protection makes these companies uneasy.

For the issue of gray-market THC isomers, the main problem is a loophole in federal law created by the Agriculture Improvement Act of 2018. This omnibus bill, among other things, descheduled low-Δ-9-THC cannabis, also known as hemp, from the Controlled Substances Act. While the goal was to reintroduce hemp into farming as a useful industrial crop, the bill's vagueness and breadth accidentally legalized Δ-8-Tetrahydrocannabinol, another psychoactive THC isomer that can be found in hemp, along with a wide range of other isomers. This fluke has arguably opened the floodgates on these products, in the form of vapes, edibles, tincture drops, and smokeable flower. At the federal level, the DEA has failed to address the issue under the Federal Analogue Act; 21 USC § 802(32) defines what analogues, including isomers, are and how they can be regulated under the Controlled Substances Act. States are trying to keep up with all the isomers but are clearly fighting a losing battle; just go to your local gas station or convenience store and you will find a wide array of these items for sale.

Role of Social Media Companies and Policy Makers

Because the law surrounding illegal drug advertising in the US is underdeveloped and scattered, many social media companies have attempted to moderate such content of their own accord. The major platforms all have policies that ban the sale, display, or solicitation of illegal drugs in one form or another (Facebook/Meta, Instagram, Twitter, TikTok, Snapchat). Nevertheless, this self-regulation has arguably failed.

However, the companies are not the only ones with a share of the blame for this problem; Congress needs to act by passing new statutes that force the companies to regulate and report the marketing of illegal drugs. Surprisingly, a handful of bills have been proposed to alleviate this legal quandary. Senator Marshall's "Cooper Davis Act" (S.4858) aims to amend the Controlled Substances Act by obliging all social media companies to report any attempt to market or sell illegal drugs to the DEA within a certain time frame. This would include all user data, history, and anything else deemed relevant by investigators. Representative Wasserman Schultz is currently drafting the "Let Parents Choose Protection Act" (aka Sammy's Law), which would force social media companies to allow parents to track the social media activity of their kids, including their interaction with drug dealers or posts about illegal drugs. These bills, among many others, raise significant and obvious concerns about privacy and free speech rights, which must be taken seriously and addressed in any such legislation going forward.

On the issue of the gray market for THC isomers, social media companies and Congress must also act. While I am an advocate for the federal legalization of cannabis, allowing an unregulated market to exist is quite reckless. On top of the fact that the effects of the various isomers are not well known and not regulated by the FDA, their advertising, in person and online, as a cure-all snake oil is unethical and unjustifiable.  All of the major social media platforms have advertiser and business policies against unethical practices such as false advertising but fail to use them. Congress, on the other hand, has not introduced any bills in this specific area. Likewise, state lawmakers are not exempt from acting here. They need to pursue policies to regulate this gray-market in their jurisdictions to fill in the shortcomings of Congress, as New York and Kentucky are attempting to do.

Overall, the impact of social media marketing at-large must be taken seriously by the federal and state governments. While it brings about some good in spurring business, the current paradigm enables bad actors to sell seriously dangerous illegal drugs and irresponsible businessmen to push unregulated, untested, and poorly understood gray-market drugs with little to no serious oversight. Can we, as a society, change for the better? Or will we be beholden to an unsustainable status quo of techno-anarchy that will cause unnecessary and preventable harm and suffering? Only time will tell.

Social Media Has Gone Wild

Increasing technological advances and consumer demands have taken shopping to a new level. You can now buy clothes, food, and household items from the comfort of your couch in a few clicks: add to cart, pay, ship, and confirm. You are no longer limited to products sold in nearby stores; shipping makes it possible to obtain items internationally. Even social media platforms have shopping features for users, such as Instagram Shopping, Facebook Marketplace, and WhatsApp. Despite its convenience, online shopping has also created an illegal marketplace for wildlife species and products.

Most Trafficked Animal: The Pangolin

Wildlife trafficking is the illegal trading or sale of wildlife species and their products. Elephant ivory, rhinoceros horns, turtle shells, pangolin scales, tiger furs, and shark fins are a few examples of highly sought after wildlife animal products. As social media platforms expand, so does wildlife trafficking.

Wildlife Trafficking Exists on Social Media?

Social media platforms make it easier for people to connect with others internationally. These platforms are great for staying in contact with distant aunts and uncles, but they also create another method for criminals and traffickers to communicate. They provide a way to remain anonymous without having to meet in person, which makes it harder for law enforcement to identify a user's true identity. Even so, can social media platforms be held responsible for making it easier for criminals to commit wildlife trafficking crimes?

Thanks to Section 230 of the Communications Decency Act, the answer is most likely: no.

Section 230 provides broad immunity to websites for content a third-party user posts on the website. Even when a user posts illegal content, the website cannot be held liable for it. However, there are certain exceptions where websites have no immunity, including human and sex trafficking. Although these carve-outs are fairly new, they make clear that there is an interest in protecting people vulnerable to abuse.

So why don't we apply the same logic to animals? Animals are also a vulnerable population. Many species are no match for guns, weapons, traps, and human encroachment on their natural habitats. Like children, animals may not have the ability to understand what trafficking is or the physical strength to fight back. Social media platforms like Facebook attempt to combat the online wildlife trade, but their efforts continue to fall short.

How is Social Media Fighting Back?

 

In 2018, the World Wildlife Fund and 21 tech companies created the Coalition to End Wildlife Trafficking Online. The goal was to reduce illegal online wildlife trade by 80% by 2020. While it is difficult to measure whether that goal was met, some social media platforms have created new policies to help work toward it.

“We’re delighted to join the coalition to end wildlife trafficking online today. TikTok is a space for creative expression and content promoting wildlife trafficking is strictly prohibited. We look forward to partnering with the coalition and its members as we work together to share intelligence and best-practices to help protect endangered species.”

Luc Adenot, Global Policy Lead, Illegal Activities & Regulated Goods, TikTok

In 2019, Facebook banned the sale of animals on its platform altogether. But this did not stop users. A 2020 report showed that a variety of illegal wildlife was still for sale on Facebook, a clear sign that the new policies were ineffective. Furthermore, the report stated:

“29% of pages containing illegal wildlife for sale were found through the ‘Related Pages’ feature.”

This suggests that Facebook's algorithm purposefully connects users to pages and similar content based on a user's interests. The algorithm incentivizes traffickers to rely and depend on the platform; they will continue to use social media because it does half of the work for them:

      • Facilitating communication
      • Connecting users to potential buyers
      • Connecting users to other sellers
      • Discovering online chat groups
      • Discovering online community pages

This does not reduce the reach of wildlife trafficking. Instead, it accelerates the visibility of this type of content to other users. Do Facebook's algorithms go beyond Section 230 immunity?

Under these circumstances, Facebook maintains immunity. In Gonzalez v. Google LLC, the court explained that websites are not liable for user content when they employ content-neutral algorithms, meaning the website did nothing more than program an algorithm to present content similar to a user's interests. The website did not directly encourage the publishing of illegal content, nor did it treat that content differently from other user content.
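To make "content-neutral" concrete, here is a minimal, hypothetical sketch (not Facebook's actual code; the function, catalog, and tags are invented for illustration) of an interest-based recommender: it scores every listing by tag overlap with what the user has already engaged with and never looks at whether a listing is lawful.

```python
from collections import Counter

def recommend(user_tags: list[str], catalog: dict[str, set[str]], k: int = 3) -> list[str]:
    """Rank catalog items purely by overlap between an item's tags and the
    tags the user has already engaged with. Nothing here inspects legality --
    every listing is scored the same way, which is what "content-neutral"
    means in this context."""
    interest = Counter(user_tags)  # how often the user engaged with each tag
    def score(item: str) -> int:
        return sum(interest[tag] for tag in catalog[item])
    return sorted(catalog, key=score, reverse=True)[:k]

# Hypothetical catalog: the same code surfaces aquarium gear or, if a user's
# history points that way, "Related Pages"-style wildlife listings alike.
catalog = {
    "aquarium pump": {"aquarium", "fish"},
    "leather coral": {"aquarium", "coral", "live"},
    "used couch":    {"furniture"},
}
print(recommend(["aquarium", "coral", "coral"], catalog, k=2))
# -> ['leather coral', 'aquarium pump']
```

The point of the sketch is the court's distinction: the ranking function neither encourages illegal listings nor treats them differently; it simply amplifies whatever matches a user's demonstrated interests.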

What about when a website profits from illegal posts? Facebook collects a 5% selling fee on each shipment sold by a user. Since illegal wildlife products are rare, these transactions are highly profitable. A pound of ivory can be worth up to $3,300. If a user sells five pounds of ivory from endangered elephants on Facebook, the platform would collect $825 from that one transaction. The Facebook Marketplace algorithm works like the interest- and engagement-based algorithm described above: it can push illegal wildlife products to a user who has searched for similar products. If illegal products are constantly pushed and sales are successfully made, Facebook benefits and profits from those transactions. Does this mean that Section 230 will continue to protect Facebook when it profits from illegal activity?
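To make the arithmetic behind that $825 figure explicit, using only the numbers cited above (the five-pound sale is a hypothetical, and the variable names are mine):

```python
FEE_RATE = 0.05          # Facebook Marketplace's 5% selling fee, as cited above
PRICE_PER_POUND = 3_300  # approximate top value of a pound of ivory, as cited above
POUNDS_SOLD = 5          # hypothetical sale used in the example

sale_total = PRICE_PER_POUND * POUNDS_SOLD   # 16,500
platform_fee = sale_total * FEE_RATE         # 825.0
print(f"Sale total: ${sale_total:,}; platform's cut: ${platform_fee:,.2f}")
```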

Evading Detection

Even with Facebook's policy prohibiting these sales, users get creative to avoid detection. A simple search for "animals for sale" led me to a public Facebook group. Within 30 seconds of scrolling, I found one user selling live coral and another selling an aquarium system with live coral and live fish. The former listing reads: "Leather $50." The picture, however, shows live coral in a fish tank; "leather" identifies the type of coral without saying it's coral. Even if this were fake coral, a simple Google search shows that a piece of fake coral is worth less than $50. If Facebook is failing to prevent users from selling live coral and live fish, it is most likely failing to prevent online wildlife trafficking on its platform.

Another common method of evading detection is to post a vague description or a photo of an item along with the words "pm me" or "dm me," abbreviations for "private message me" and "direct message me." It is a quick way to direct interested users to reach out personally and discuss details in a private chat, away from the leering public eye. Sometimes a user will offer alternative contact methods, such as a personal phone number or an email address, moving the interaction off the platform or onto a different one.

Because these transactions are so profitable and are conducted anonymously online, the stakes for sellers are low. Social media platforms are great for concealing a user's identity: users can hide behind fake names on their computer and phone screens, and there are no real consequences for doing so, since no identity verification exists to uncover who a user really is. Even if a user is banned, the person can create a new account under a different alias. Some users are criminals tied to organized crime syndicates or terrorist groups. Many operate overseas, outside of the United States, which makes them difficult to locate. Thus, social media platforms incentivize criminals to hide behind various aliases with little to lose.

Why Are Wildlife Products Popular?

Wildlife products are in high demand for human benefit and use; humans commonly value them as traditional medicine, luxury goods, décor, and status symbols.

Do We Go After the Traffickers or the Social Media Platform?

Taking down every single wildlife trafficker, and every user who facilitates these transactions, would be the perfect solution to end wildlife trafficking. Realistically, though, it is too difficult to identify these users due to online anonymity and geographical limitations. On the other hand, social media platforms continue to tolerate these illegal activities.

Here, it is clear that Facebook is not doing enough to stop wildlife trafficking. With each sale made on Facebook, Facebook receives a percentage. Section 230 should not protect Facebook when it reaps the benefits of illegal transactions. That goes a step too far and should open Facebook up to a marketplace of its own: Section 230 liability.

Should Facebook maintain Section 230 immunity when it receives proceeds from illegal wildlife trafficking transactions? Where do we draw the line?
