When the Internet Moves Faster than the Market: Impacts of Viral Products and Trends in Social Media

Recently, but not shockingly, the internet and its consumers entered a social media-driven craze over a plush monster toy: the Labubu. The cute, fanged creature took over social media platforms like TikTok and Instagram, becoming one of 2025’s biggest fashion trends. Created by artist Kasing Lung in 2015, the Labubu is a fictional character that Lung turned into a line of collectibles by entering a licensing agreement with Pop Mart in 2019.

Blackpink star Lisa and celebrities like Rihanna, Dua Lipa, and Kim Kardashian have all contributed to the fame of this doll, making it one of 2025’s most sought-after trends. Aside from the celebrities and influencers credited with the Labubu’s popularity, the collectibles are also sold in what are referred to as “blind boxes”: the color and type of the Labubu are revealed only after the box is bought and opened, adding to the excitement and anticipation of finding a rare figure. Labubus were seen all over Instagram and TikTok, going “viral” as people created memes of them, posted unboxing videos, and incorporated them into their fashion.

Emily Brough, Pop Mart’s head of licensing in North America, disclosed that these blind boxes generated more than $419 million in revenue in 2024, a 726.6% year-over-year increase. Much of that growth can be attributed to TikTok: as of April 2025, Pop Mart had generated $4.8 million in sales on TikTok Shop, a rise of 89% in a single month. The doll rapidly became ultra-desirable online. Although the collection retails for approximately $27 in the U.S., resellers routinely list figures at several multiples of that price; a rare Chestnut Cocoa Labubu fetched $149 on eBay. Not only do blind boxes feed the fascination with Labubus, but resellers also create a sense of exclusivity that attracts even more consumers. For consumers, Labubus are more than a fashion accessory; owning one signals being up to date with current trends, relating to other consumers and influencers, and possessing something highly sought-after.
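
Those percentages are easy to sanity-check. As a back-of-the-envelope calculation (treating the reported figures as exact), the implied baselines work out as follows:

```python
# Back-of-the-envelope check on the reported growth figures.
# Assumes the percentages are exact; all dollar amounts in millions.

revenue_2024 = 419.0   # reported blind-box revenue, 2024
yoy_growth = 7.266     # 726.6% year-over-year growth

# growth = (new - old) / old  =>  old = new / (1 + growth)
implied_2023 = revenue_2024 / (1 + yoy_growth)
print(f"Implied 2023 revenue: ${implied_2023:.1f}M")   # ~ $50.7M

tiktok_april_2025 = 4.8   # TikTok Shop sales, April 2025
monthly_growth = 0.89     # 89% rise in one month
implied_march = tiktok_april_2025 / (1 + monthly_growth)
print(f"Implied March 2025 TikTok Shop sales: ${implied_march:.2f}M")  # ~ $2.54M
```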

With a product like the Labubu going viral, it is natural to wonder what the standard is for something to be considered viral. Merriam-Webster defines “viral” as something that is “quickly and widely spread or popularized especially by means of social media”. Although the definition is broad, it identifies the two core attributes of anything viral: 1) it spreads quickly, and 2) it is popular. Something can be popular but gain its popularity over an extended period; what distinguishes a viral product from a merely popular one is the rapid pace at which it gains recognition. That can happen over a few months, weeks, days, or even overnight. Essentially, a product or trend becomes viral when its promoter creates highly engaging, shareable content that taps into the audience’s emotions; viewers then tap, click, like, comment, and share. Each interaction pushes the product further across the social media universe, quickly earning it that viral title.
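
To see why pace is the distinguishing feature, consider a toy compounding model (all numbers invented for illustration): if each viewer exposes a post to some number of new viewers per cycle, a modest difference in share rate separates steady popularity from overnight virality.

```python
def reach_after(initial_viewers: int, shares_per_viewer: float, cycles: int) -> int:
    """Toy compounding model: each viewer exposes `shares_per_viewer`
    new viewers per cycle (illustrative numbers only)."""
    viewers = float(initial_viewers)
    for _ in range(cycles):
        viewers += viewers * shares_per_viewer
    return round(viewers)

# A post each viewer shares to 0.5 new people per day vs. 1.5:
print(reach_after(1_000, 0.5, 14))  # ~291,929 viewers after two weeks
print(reach_after(1_000, 1.5, 14))  # ~372.5 million: "viral" pace
```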

Why should the Labubu trend not be considered shocking? Not everyone would expect a plush monster to become ultra-desirable and go viral on the internet, but the common ground behind the majority of viral products is the internet itself. Labubus are only one example of a nearly endless list of trends and viral products that social media has boosted. The same effect social media had on Labubus, it had on Stanley cups in 2023: viral videos of Stanley cups circulated on TikTok, helping drive Stanley’s revenue from approximately $94 million in 2020 to $750 million in 2023. Social media platforms are an underlying basis for viral products because of the easy access to posts and videos they offer, alongside the individualized algorithms, content creators, celebrities, and e-communities that platforms like TikTok and Instagram provide.

It is no secret that celebrities and influencers use platforms like TikTok and Instagram to promote products as part of their brand endorsements. Inevitably, their viewers and followers are influenced, setting a trend’s lifecycle in motion. The trend typically goes viral quickly, with social media algorithms contributing heavily, but the product trend cycle is rarely long-term.

These trends that come and go are often referred to as “micro-trends”: short-lived trends that attract intense attention in a brief window outside the traditional trend cycle, then lose public relevance just as fast as they gained it. Micro-trends are advertised through social media as consumer must-haves, creating a ripple effect in which consumers feel they need to buy, buy, buy. The shortened lifecycle of viral products and micro-trends has produced a long-term buying cycle among consumers: a product goes viral, turns into a micro-trend, and drives overproduction and inevitable overconsumption, creating spikes in demand that destabilize markets.

The problem is that micro-trends are closely tied to overconsumption, pushing companies to produce and release products quickly to keep up. That overconsumption is accompanied by the rapid, disposable use of micro-trend products, adding to the waste problems that already exist in, for example, the fashion industry. Micro-trends also threaten the longevity of businesses: consumers lose interest as soon as a trend peaks, and local businesses cannot keep up with the rapid production that micro-trends demand. Simply put, micro-trends are not sustainable for consumers, businesses, or the environment. Much of the fast-fashion clothing associated with these micro-trends ultimately ends up at the Kantamanto Market in Accra, Ghana, where about 40% of the clothing leaves as waste.

Consumers’ urge to take part in current micro-trends can always be traced back to social media. Moving away from magazines like Vogue or Elle, platforms like TikTok have become the new resource for consumers to find the newest, most popular trend. Social media algorithms create echo chambers of specific trends by identifying when a certain style gains recognition and then feeding it to users with similar tastes; from there, a micro-trend is born. The algorithm identifies trends by recognizing which posts receive the most engagement, that is, which content is viral. The more a post is shared, liked, or commented on, the faster it spreads. These algorithms produce a faster trend turnaround because users get almost instant updates on what is trending, leading to a loop of doomscrolling and impulsive spending. Trends surface in algorithms at a pace, and at a level of demand, that supply chains cannot match. With social media apps and their algorithms, consumers have near-instant access to micro-trends and to buying into them; and near-instant access creates instant gratification.
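
Platforms do not publish their ranking systems, but the mechanism described above, surfacing whatever is gaining engagement fastest, can be caricatured in a few lines of Python. Field names and weights here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    likes: int
    comments: int
    shares: int
    hours_live: float

def engagement_velocity(p: Post) -> float:
    """Hypothetical trend score: weighted engagement per hour.
    Shares weigh most because they directly widen reach."""
    raw = p.likes + 2 * p.comments + 4 * p.shares
    return raw / max(p.hours_live, 1.0)

posts = [
    Post("knit-bag-haul", likes=900, comments=40, shares=25, hours_live=48),
    Post("labubu-unboxing", likes=800, comments=150, shares=300, hours_live=6),
]
# The fast-rising post wins the feed slot even with fewer total likes.
trending = max(posts, key=engagement_velocity)
print(trending.post_id)  # labubu-unboxing
```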

Algorithms are not the only feature of social media that contributes to the viral impact on businesses. Social media platforms have also become the new digital storefront, serving as a place to both browse and buy. It is simple: open the app, scroll, click, and buy. E-commerce features on platforms like Instagram and TikTok offer individualized storefronts that make it easy for users to buy into the latest viral trends, fast. In 2024, for example, TikTok Shop had grown to more than 500,000 U.S.-based sellers within eight months of launching, and had around 15 million sellers worldwide. Storefronts like these can benefit companies that profit from overconsumption, and they can also amplify micro-trends. Algorithms and e-commerce features can meaningfully affect the economy: in 2024, economists at the Federal Reserve found that inflation-adjusted spending on retail goods had increased relative to 2018. Businesses are affected as well, as consumers are drawn away from shopping in person at small, local, and traditional retailers. The overarching economic impact can be summed up simply: viral products and micro-trends generate temporary, short-term sales while creating long-term instability for businesses.

With the rise of e-commerce in social media comes a rise in issues for consumers. Whether in store or online, consumers have the right to safety, to be informed, to choose, to be heard, and to redress. To protect consumers, businesses can provide clear, transparent information about their products; maintain fair transactions; hold themselves accountable for product safety; and protect their consumers’ privacy. Given the volume of transactions taking place on e-commerce sites, it is a challenge to properly regulate and monitor them all. Consumers now worry about where the personal information they share is going, about avoiding cyber fraud and scams, and about receiving low-quality products.

New consumer-protection issues, such as intellectual property concerns, arise as well. With social media’s rapid spread of products and trends, copycat products are becoming increasingly common. A copycat product is one designed, branded, or packaged to closely resemble that of a well-established competitor, created deliberately to trade on the established brand’s identity and reputation. The legal implications include trademark infringement, unfair competition, and consumer fraud liability. Brands reproduce viral creator designs without permission, devaluing creative labor, and viewers are susceptible to believing such copycats are either associated with the original or of similar quality. Influencers should be aware that using social media to promote a product that is a dupe, or a copycat, could constitute a violation of Section 5(a) of the Federal Trade Commission Act.

E-commerce’s contribution to overconsumption can be analyzed through the four-step process behind purchasing a product: awareness, desire, consideration, and purchase. Because products go viral so quickly and micro-trends come and go so fast, this process is compressed for consumers; often they jump straight from awareness to purchase if the price tag is small enough. Either way, desire is created when the algorithm surfaces the product or a content creator references it. Many posts steer consumers directly to e-commerce storefronts, showing how seamlessly social media and shopping have merged. Retailers also exploit cognitive biases to draw consumers in: countdown banners create an urgency bias, convincing consumers they need the product now, while social proof, such as tags highlighting how many people have bought an item or how high its ratings are, can push a hesitating consumer to buy.

Although the convergence of e-commerce and social media brings exciting benefits to consumers, its intricacies should not be overlooked. It is important to recognize when our internet moves faster than our market. Viral products and trends may have short lifecycles, yet their impacts on businesses and consumers can be long-lasting.

Francesca Rocha

November 12, 2025

The New Border: Immigration Law in the Age of Social Media Monitoring

In today’s digital world, where much of public discourse takes place online, the intersection between social media and immigration law has become increasingly critical. From viral debates over “migrant bashing” posts to visa revocations tied to online activism, social media now serves both as a platform for immigrant voices and as a frontier for government surveillance.

Social Media Monitoring & Immigration

Recent policy developments confirm that U.S. immigration authorities are not only observing social media activity but actively using it to inform decisions.

On April 9, 2025, U.S. Citizenship and Immigration Services (USCIS) announced that it will begin considering antisemitic activity on social media platforms when evaluating immigration benefit applications. This policy immediately affected green card applicants, international students, and others seeking immigration benefits. The agency explained:

“USCIS will consider social media content that indicates an alien endorsing, espousing, promoting, or supporting antisemitic terrorism, antisemitic terrorist organizations, or other antisemitic activity as a negative factor in any USCIS discretionary analysis when adjudicating immigration benefit requests.”

This marks a significant shift from traditional factors like criminal history or fraud toward assessing online speech and ideology. It reflects a growing willingness to treat moral or political expression, once considered private and protected, as a legitimate basis for immigration decisions.

These “discretionary analyses” primarily affect benefit applications such as adjustment of status, asylum, and visa renewals where officers have broad authority to evaluate an applicant’s moral character and other subjective factors.

ICE and Algorithmic Surveillance

Meanwhile, U.S. Immigration and Customs Enforcement (ICE) continues to expand its social media surveillance capabilities. ICE contracts with private technology companies to build AI-driven systems that scrape and analyze public posts, images, and online networks across multiple languages. These systems search for “threat indicators” or potential immigration violations, flagging accounts through pattern recognition and linguistic analysis.

ICE’s Open Source Intelligence program relies on vendors such as Palantir and ShadowDragon to automate the collection and analysis of social media data for enforcement leads. Because these algorithms are proprietary and often shielded from public-records laws like the Freedom of Information Act (FOIA), immigrants often have no way to learn what online data was used against them or to challenge mistakes.
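
The vendors’ actual systems are proprietary, so the following is only a hypothetical caricature of keyword-based flagging; even so, it illustrates why unreviewable automation invites exactly the kinds of errors described above:

```python
WATCHLIST = {"terrorism", "extremist"}  # hypothetical term list

def flag_post(text: str) -> bool:
    """Naive pattern matching: flags any post containing a watchlist term,
    with no sense of stance, context, or sarcasm."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & WATCHLIST)

print(flag_post("I unequivocally condemn terrorism in all forms."))  # True
print(flag_post("Join the cause, brothers."))                        # False

# A false positive and a false negative in two lines: the person who
# condemned violence is flagged; the actual recruitment post is missed.
```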

Observers describe this trend as part of a broader “tech-powered enforcement” model, in which digital footprints shape immigration outcomes. In effect, a digital border has emerged: one that exists not at airports or checkpoints but within the virtual spaces people inhabit every day.

Speech and Expanding Risk

The implications are profound. A noncitizen’s tweets, Facebook posts, or even tagged photos can be scrutinized and used as evidence in visa adjudications or deportation proceedings.

This pervasive monitoring encourages self-censorship. Immigrants and lawful permanent residents may delete posts, avoid political discussion, or disengage from online activism out of fear that a misunderstood comment could threaten their status. What once felt like ordinary self-expression now carries real legal risk.

As the Brennan Center for Justice warns, vague or discretionary standards create chilling effects on speech by making it impossible to predict how officials will interpret online expression.

“[T]he April 9 notice is likely to quell speech, discouraging immigrants and non-immigrants who are lawfully seeking a variety of immigration benefits … from taking part in a wide range of constitutionally protected activity for fear of retaliation. And its smorgasbord of vague terms, many with no legally recognized meaning, enables USCIS officers to exercise nearly unchecked discretion in determining when to reject an otherwise unobjectionable application for a benefit.”

The First Amendment and Ideological Vetting

This new surveillance landscape raises pressing First Amendment concerns. Although noncitizens do not enjoy the full range of constitutional protections, courts have long held that the government may not condition immigration benefits on ideological conformity. Social media vetting, however, blurs that line, turning online expression into a proxy for moral or political loyalty tests.

Courts have long struggled to balance the executive’s plenary power over immigration with the First Amendment concerns raised by ideological exclusions. In Kleindienst v. Mandel (1972), the Supreme Court upheld the government’s exclusion of a Belgian Marxist scholar, deferring to the executive’s authority over immigration even though the denial indirectly burdened U.S. citizens’ right to receive information and ideas. Decades later, in American Academy of Religion v. Napolitano (2009), the Second Circuit reaffirmed that while the executive retains broad power, it cannot rely on secret or arbitrary rationales for ideological exclusions. Together, these cases highlight the unresolved tension between immigration control and free speech protections.

Case Study: Mahmoud Khalil

The collision of social media, political activism, and immigration enforcement is sharply illustrated in the case of Mahmoud Khalil.

Mahmoud Khalil, a lawful permanent resident and recent Columbia University graduate, was arrested by ICE in New York in March 2025 after participating in pro-Palestinian demonstrations. He was detained in Louisiana for over three months pending removal proceedings.

The government cited Immigration and Nationality Act (INA) § 237(a)(4)(C)(i), a rarely used provision allowing deportation of a noncitizen whose “presence or activities” are deemed to have “potentially serious adverse foreign policy consequences.” The evidence reportedly consisted of a brief, undated letter referencing Khalil’s activism and supposed foreign policy concerns.

Khalil’s attorneys argued that he was targeted not for any criminal conduct but for his speech, association, and protest activity, both on campus and online, raising serious First Amendment and due process issues.

In May 2025, a federal judge found the statute likely unconstitutional as applied, and Khalil was released after 104 days in detention.

The Future of the Digital Border

As immigration enforcement integrates algorithmic surveillance, the border is no longer confined to geography; it exists everywhere a user logs in. This new reality challenges long-standing principles of due process, privacy, and free expression.

Whether justified under national security, anti-hate policies, or fraud prevention, social media vetting transforms immigration law into a form of ideological policing. The challenge for policymakers is to balance legitimate screening needs with fundamental rights in an age when one tweet can determine a person’s future.

Cases like Mahmoud Khalil’s reveal how online activism can trigger enforcement actions that test the limits of constitutional and civil liberties protections. Legal scholars and advocates have urged Congress and the Department of Homeland Security (DHS) to establish clearer rules ensuring transparency in algorithms, limiting ideology-based denials, and mandating bias audits of surveillance tools.

Future litigation will test how the First Amendment and due process doctrines evolve in an age where immigration enforcement operates through data analytics rather than physical checkpoints.

Ultimately, the key questions we must ask ourselves are:

To what extent can authorities treat social media activism as a legitimate factor in visa or green card adjudications?

Does using immigration law to penalize online speech amount to viewpoint discrimination?

The answers will shape not only the future of immigration law but the very boundaries of free speech in the digital age.

Regulating the Scroll: How Lawmakers Are Redefining Social Media for Minors

In today’s digital world, the question is no longer if minors use social media but how they use it. 

Social media platforms don’t just host young users; they shape their experiences through algorithmic feeds and “addictive” design features that keep kids scrolling long after bedtime. As the mental health toll becomes increasingly clear, lawmakers are stepping in to limit how much control these platforms have over young minds.

What is an “addictive” feed and why target it? 

Algorithms don’t just show content; they promote it. By tracking what users click, watch, or like, these feeds are designed to keep attention flowing. For minors, that means endless scrolling and constant engagement, typically at the expense of sleep, focus, and self-esteem.

Under New York’s Stop Addictive Feeds Exploitation (SAFE) for Kids Act, lawmakers found that:

 “social media companies have created feeds designed to keep minors scrolling for dangerously long periods of time.”

The Act defines an “addictive feed” as one that recommends or prioritizes content based on data linked to the user or their device.

The harms aren’t hypothetical. Studies link heavy social media use among teens with higher rates of depression, anxiety, and sleep disruption. Platforms often push notifications late at night or during school hours, precisely when young users are most vulnerable.

Features like autoplay, “For You” pages, endless “you may also like” suggestions, and quick likes or comments can trap kids in an endless scroll. What begins as fun, harmless entertainment soon becomes a routine they struggle to escape.


Key Developments in Legislation 

It’s no surprise that minors’ exposure to social media algorithms sits at the center of today’s policy debates. Over the past two years, state and federal lawmakers have introduced laws seeking to rein in the “addictive” design features of online platforms. While many of these measures face ongoing rulemaking or constitutional challenges, together they signal a national shift toward stronger regulation of social media’s impact on youth.

Let’s take a closer look at some of the major legal developments shaping this issue.

New York’s SAFE for Kids Act

New York’s Stop Addictive Feeds Exploitation (SAFE) for Kids Act represents one of the nation’s most ambitious efforts to regulate algorithmic feeds. The law prohibits platforms from providing “addictive feeds” to users under 18 unless the platform obtains verifiable parental consent or reasonably determines that the user is not a minor. It also bans push notifications and advertisements tied to those feeds between 12 a.m. and 6 a.m. unless parents explicitly consent. The rulemaking process remains ongoing, and enforcement will likely begin once these standards are finalized.
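
As a rough sketch of the compliance logic the statute contemplates (function and parameter names are assumptions, and the operative rules will turn on the finalized regulations):

```python
from datetime import time

QUIET_START, QUIET_END = time(0, 0), time(6, 0)  # 12 a.m. to 6 a.m.

def may_send_push(user_age: int, parental_consent: bool, now: time) -> bool:
    """Sketch of the SAFE for Kids Act notification rule: no feed-related
    push notifications to minors during quiet hours absent parental consent."""
    if user_age >= 18 or parental_consent:
        return True
    in_quiet_hours = QUIET_START <= now < QUIET_END
    return not in_quiet_hours

print(may_send_push(15, parental_consent=False, now=time(2, 30)))  # False
print(may_send_push(15, parental_consent=False, now=time(14, 0)))  # True
```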

The Kids Off Social Media Act (KOSMA)

At the federal level, the Kids Off Social Media Act (KOSMA) seeks to create national baselines for youth protections online. Reintroduced in Congress, the bill would:

  • Ban social media accounts for children under 13.
  • Prohibit algorithmic recommendation systems for users under 17.
  • Restrict social media access in schools during instructional hours.

Supporters argue the bill is necessary to counteract the addictive nature of social media design. Critics, including digital rights advocates, question whether such sweeping restrictions could survive First Amendment scrutiny or prove enforceable at scale. 

KOSMA remains pending in Congress but continues to shape the national conversation about youth and online safety.

California’s SB 976 

California’s Protecting Our Kids from Social Media Addiction Act (SB 976) reflects a growing trend of regulating design features rather than content. The law requires platforms to:

  • Obtain parental consent before delivering addictive feeds to minors.
  • Mute notifications for minors between midnight and 6 a.m. and during school hours unless parents opt in.

The statute is currently under legal challenge for potential First Amendment violations; however, the Ninth Circuit allowed enforcement of key provisions to proceed, suggesting that narrowly tailored design regulations aimed at protecting minors may survive early constitutional scrutiny.

Other State Efforts

Other states are following suit. According to the National Conference of State Legislatures (NCSL), at least 13 states have passed or proposed laws requiring age verification, parental consent, or restrictions on algorithmic recommendations for minors. Mississippi’s HB 1126, for example, requires both age verification and parental consent, and the U.S. Supreme Court allowed the law to remain in effect while litigation continues. 

Final Thoughts

We are at a pivotal moment. The era when children’s digital consumption went largely unregulated is coming to an end. The question now isn’t whether regulation is on the horizon; it’s how it will take shape, and whether it can strike the right balance between safety, free expression, and innovation.

As lawmakers, parents, and platforms navigate this evolving landscape, one challenge remains constant: ensuring that efforts to protect minors from harmful algorithmic design do not come at the expense of their ability to connect, learn, and express themselves online.

What do you think is the right balance between protecting minors from harmful algorithmic exposure and preserving their access to social media as a space for connection and expression?

Privacy Please: Privacy Law, Social Media Regulation and the Evolving Privacy Landscape in the US

Social media regulation is a touchy subject in the United States. Congress and the White House have proposed, advocated, and voted on various bills aimed at protecting people from data misuse and misappropriation, misinformation, harms suffered by children, and the implications of vast data collection. Some of the most potent concerns about social media stem from the platforms’ use and misuse of information: from the method of collection, to notice of collection, to use of the collected information. Efforts to pass a bill regulating social media have been frustrated, primarily by the First Amendment right to free speech, and Congress has thus far failed to enact meaningful regulation of social media platforms.

The way forward may well be through privacy law. Privacy laws give people some right to control their own personhood, including their data, their right to be left alone, and how and when others see and view them. Privacy law originated in its current form in the late 1800s, with the impetus being freedom from constant surveillance by paparazzi and reporters and the right to control one’s own personal information. As technology evolved, our understanding of privacy rights grew to encompass rights in our likeness, our reputation, and our data. Current U.S. privacy laws do not directly address social media, and a struggle is playing out among the platforms’ vast data collection practices, platform immunity under Section 230, and users’ private rights of privacy.

There is very little federal privacy law, and what exists is narrowly tailored to specific purposes and circumstances. Some states have enacted their own privacy schemes, with California at the forefront and Virginia, Colorado, Connecticut, and Utah following in its footsteps. In the absence of a comprehensive federal scheme, privacy law is often judge-made and offers several private rights of action for a person whose right to be left alone has been invaded in some way. These are tort actions available for one person to bring against another for a violation of their right to privacy.

Privacy Law Introduction

Privacy law policy in the United States is premised on three fundamental personal rights to privacy:

  1. Physical privacy – the right to control your own information.
  2. Decisional privacy – such as decisions about sexuality, health, and child-rearing. These are the constitutional rights to privacy, typically concerning not information but an act that flows from the decision.
  3. Proprietary privacy – the ability to protect your information from being misused by others in a proprietary sense.

Privacy Torts

Privacy law, as it concerns the individual, gives rise to four separate tort causes of action for invasion of privacy:

  1. Intrusion upon seclusion – when someone intentionally intrudes, physically or otherwise, upon another’s reasonable expectation of seclusion, and the intrusion is objectively highly offensive.
  2. Publication of private facts – when one gives publicity to a matter concerning the private life of another that is not of legitimate concern to the public, and the matter publicized would be objectively highly offensive. The First Amendment provides a strong defense for publication of truthful matters that are considered newsworthy.
  3. False light – when one gives publicity to a matter concerning another that places the other before the public in a false light, the false light would be objectively highly offensive, and the actor had knowledge of or acted in reckless disregard as to the falsity of the publicized matter and the false light in which the other would be placed.
  4. Appropriation of name or likeness – appropriating another’s name or likeness for the defendant’s own use or benefit. There is no appropriation when a person’s picture is used to illustrate a non-commercial, newsworthy article. The use is typically commercial in nature but need not be, and the appropriation may be of “identity” more broadly: not just the name, but the reputation, prestige, social or commercial standing, public interest, or other value attached to the plaintiff’s likeness.

These private rights of action are currently unavailable against social media platforms because of Section 230 of the Communications Decency Act, which provides broad immunity to online providers for content posted on their platforms by others. Section 230 prevents any of the privacy torts from being raised against social media platforms.

The Federal Trade Commission (FTC) and Social Media

Privacy law can implicate social media platforms when their practices become unfair or deceptive to the public, through investigation by the Federal Trade Commission (FTC). The FTC is the only federal agency with both consumer protection and competition jurisdiction across broad sectors of the economy, and it investigates business practices that are unfair or deceptive. Section 5 of the FTC Act, 15 U.S.C. § 45, prohibits “unfair or deceptive acts or practices in or affecting commerce” and grants the FTC broad jurisdiction over businesses’ privacy practices. A trade practice is unfair if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers themselves and is not outweighed by countervailing benefits to consumers or competition. A deceptive act or practice is a material representation, omission, or practice that is likely to mislead a consumer acting reasonably in the circumstances, to the consumer’s detriment.

Critically, there is no private right of action under the FTC Act. The FTC cannot impose fines for initial Section 5 violations but can seek injunctive relief. By design, the FTC has very limited rulemaking authority and looks to consent decrees and procedural, long-lasting relief as its ideal remedy. The FTC pursues several types of misleading or deceptive policies and practices that implicate social media platforms: notice and choice paradigms, broken promises, retroactive policy changes, inadequate notice, and inadequate security measures. Its primary objective is to negotiate a settlement in which the company submits to FTC oversight for a period of time. Violations of those agreements can yield additional consequences, including steep fines and vulnerability to class action lawsuits.

Relating to social media platforms, the FTC has investigated misleading terms and conditions and violations of platforms’ own policies. In In re Snapchat, the platform claimed that users’ posted information disappeared completely after a certain period of time; however, through third-party apps and manipulation of users’ posts off the platform, posts could be retained. The FTC and Snapchat settled through a consent decree subjecting Snapchat to FTC oversight for 20 years.

The FTC has also investigated Facebook for violating its own privacy policy. To settle FTC charges that it violated a 2012 agreement with the agency, Facebook was ordered to pay a $5 billion penalty and to submit to new restrictions and a modified corporate structure that holds the company accountable for the decisions it makes about its users’ privacy.

Unfortunately, none of these measures directly gives individuals more power over their own privacy. Nor do they give individuals any right to hold platforms responsible for misleading them through algorithms built on their data, or for intruding on their privacy by collecting data without offering an opt-out.

Some of the most harmful social media practices today relate to personal privacy: the collection of personal data; the selling and dissemination of data through algorithms designed to subtly manipulate our pocketbooks and tastes; the collection and use of children’s data; and the design of social media sites to be ever more addictive, all in service of the commercialization of data.

No comprehensive federal privacy scheme exists. Previous privacy bills have been few and narrowly tailored to specific circumstances and topics: healthcare and medical data protection under HIPAA, protection of video rental data under the Video Privacy Protection Act, and narrow protection for children’s data under the Children’s Online Privacy Protection Act. All of these schemes are outdated and fall short of the immediate need: broad protection for the data social media platforms widely collect and broadly use.

Current Bills on Privacy

Amid requests from some of the biggest platforms, outcry from the public, and the White House’s call for federal privacy regulation, Congress appears poised to act. The 118th Congress has made privacy law a priority this term, introducing several bills related to social media privacy. At least ten bills are currently pending between the House and the Senate, addressing issues ranging from children’s data privacy to minimum ages for use and the designation of a new agency to monitor aspects of privacy.

S. 744 – The Data Care Act of 2023 aims to protect social media users’ data privacy by imposing fiduciary duties on the platforms. The original iteration of the bill was introduced in 2021 and failed to receive a vote; it was reintroduced in March 2023 and is currently pending. Under the act, social media platforms would have duties to reasonably secure users’ data from access, to refrain from using the data in ways that could foreseeably “benefit the online service provider to the detriment of the end user,” and to prevent disclosure of users’ data unless the receiving party is bound by the same duties. The bill authorizes the FTC and certain state officials to take enforcement actions upon breach of those duties: states would be permitted to take their own legal action against companies for privacy violations, and the FTC could intervene in enforcement efforts by imposing fines for violations.

H.R. 2701 – Perhaps the most comprehensive piece of legislation on the House floor is the Online Privacy Act. In 2023, the bill was reintroduced by Representative Anna Eshoo after an earlier version failed to receive a vote and died in Congress. The Online Privacy Act aims to protect users by granting individuals rights in the privacy of their personal information and by imposing privacy and security requirements for the treatment of personal information. To accomplish this, the bill would establish a new agency, the Digital Privacy Agency, responsible for enforcing those rights and requirements. The new individual privacy rights are broad and include rights of access, correction, and deletion; human review of automated decisions; individual autonomy; the right to be informed; and the right to impermanence, among others. This would be the most comprehensive plan to date. The establishment of an agency tasked specifically with administering and enforcing privacy laws would be incredibly powerful, and valuable irrespective of whether the bill passes.

H.R. 821 – The Social Media Child Protection Act is a sister bill to one by a similar name that originated in the Senate. The bill aims to protect children from the harms of social media by limiting their access to it. Under the bill, social media platforms would be required to verify the age of every user before they access the platform, via a valid identity document or another reasonable verification method, and would be prohibited from allowing users under the age of 16 onto the platform. The bill also requires platforms to establish and maintain reasonable procedures to protect personal data collected from users, and it provides a private right of action as well as state and FTC enforcement.

S. 1291 – The Protecting Kids on Social Media Act is similar to its House counterpart, with slightly less tenacity. Under the bill, platforms must verify users’ ages, may not allow a user onto the service until their age has been verified, and must bar access for children under 13. The bill prohibits the retention and use of information collected during the age verification process, and it bars the use of data collected from minors for algorithmic recommendations. Platforms must take reasonable steps to obtain affirmative consent from the parent or guardian of a minor who is at least 13 years old before a minor account is created, and must reasonably allow the parent to later revoke that consent. The bill would also require the Department of Commerce to establish a voluntary program for secure digital age verification for social media platforms. Enforcement would run through the FTC or state action.

S. 1409 – The Kids Online Safety Act, proposed by Senator Blumenthal of Connecticut, also aims to protect minors from online harms. This bill, like the Online Safety Bill, establishes fiduciary duties for social media platforms regarding children using their sites. It requires platforms to act in the best interests of minors using their services, including by mitigating harms that may arise from use, sweeping in online bullying and sexual exploitation. Social media sites would be required to establish and provide access to safeguards, such as settings that restrict access to minors’ personal data, and to grant parents tools to supervise and monitor minors’ use of the platforms. Critically, the bill establishes a duty for social media platforms to create and maintain research portals for non-commercial purposes, to study the effects that corporations like the platforms have on society.

Overall, these bills indicate Congress’s creative thinking and commitment to broad privacy protection for users against social media harms. I believe the establishment of a separate governing body, rather than the FTC, which lacks the powers needed to compel compliance, is a necessary step. Recourse for violations on par with the EU’s new regulatory scheme, chiefly fines in the billions, could help.

Many of the bills, toward myriad aims, establish new fiduciary duties for platforms to prevent unauthorized use of data and harms to children. There is real promise in this scheme: establishing duties of loyalty, diligence, and care owed by one party to another has a sound basis in many areas of law and would be readily understood in implementation.

The notion that platforms would need to be vigilant in knowing their content, studying its effects, and reporting those effects may do the most to create a stable future for social media.

Making platforms legally responsible for policing and enforcing their own policies and terms and conditions is another opportunity to incentivize them. The FTC already investigates policies that are misleading or unfair, sweeping in social media sites, but platforms could also be made legally responsible for enforcing their own policies regarding age, hate, and inappropriate content, for example.

What would you like to see considered in Privacy law innovation for social media regulation?

Social Media, Minors, and Algorithms, Oh My!

What is an algorithm and why does it matter?

Social media algorithms are intricately designed data organization systems aimed at maximizing user engagement by sorting and delivering content tailored to individual preferences. At their core, they collect extensive user data and employ machine learning techniques to understand and predict user behavior. They record and analyze hundreds of thousands of data points, including past interactions, likes, shares, content preferences, time spent viewing content, and social connections, to curate a personalized feed for each user. Algorithms are designed this way to keep users on the site, which gives the site more time to place advertisements on the user’s feed and drive more profit. The fundamental objective is to capture and hold user attention, expose the user to an optimal amount of advertising, and use the user’s own data to keep them engaged for longer.
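
No platform discloses its ranking model, but the data-point weighting described above can be sketched in miniature. Every feature, weight, and name below is invented for illustration:

```python
# Hypothetical per-user feed ranking: score candidate posts against a
# profile learned from past interactions. Weights are invented.

user_affinity = {"sneakers": 0.9, "cooking": 0.4, "politics": 0.1}

def score(post_topic: str, seconds_watched_on_topic: float,
          creator_followed: bool) -> float:
    topic_match = user_affinity.get(post_topic, 0.05)
    dwell_signal = min(seconds_watched_on_topic / 60.0, 1.0)
    social_signal = 0.3 if creator_followed else 0.0
    return 0.5 * topic_match + 0.3 * dwell_signal + social_signal

candidates = {
    "sneaker-restock": score("sneakers", 240.0, creator_followed=False),
    "pasta-recipe": score("cooking", 30.0, creator_followed=True),
}
# Highest predicted engagement is shown first.
print(max(candidates, key=candidates.get))  # sneaker-restock
```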

Addiction comes in many forms

One key element contributing to the addictiveness of social media is the concept of variable rewards. Algorithms strategically present a mix of content, varying in type and engagement level, to keep users interested in their feed. This unpredictability taps into the psychological principle of operant conditioning, in which intermittent reinforcement, such as receiving likes, comments, or discovering new content, reinforces habitual platform use. Every time a user sees an entertaining post or receives a positive notification, the brain releases dopamine, the main chemical associated with addiction and addictive behaviors. The constant stream of notifications and updates, fueled by algorithmic insights and carefully tailored content suggestions, creates a sense of anticipation for the next dopamine hit, encouraging users to frequently rescan their feeds for the next ‘reward.’ The numbers-driven emphasis on engagement metrics, such as the number of likes, comments, and shares on a post, further intensifies the competitive and social nature of these platforms, promoting frequent use.
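
That intermittent-reinforcement schedule is simple to simulate. In this toy model (the 20% reward probability is invented), the payoff of any individual feed check is unpredictable, which is precisely what keeps users checking:

```python
import random

random.seed(7)

def check_feed() -> str:
    """Toy intermittent-reinforcement schedule: most checks yield filler;
    an occasional check yields a 'reward' (a great post, a burst of likes)."""
    return "reward" if random.random() < 0.2 else "filler"

checks = [check_feed() for _ in range(20)]
print(checks.count("reward"), "rewards in 20 checks")
# The user can't predict WHICH check pays off, so they keep checking.
```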

Algorithms know you too well

Furthermore, algorithms continuously adapt to user behavior through real-time machine learning. As users engage with content, the algorithm refines its predictions, ensuring the content remains compelling and relevant over time. This iterative feedback loop deepens the platform’s understanding of each user, producing a curated and highly addictive feed the user can always turn to for a boost of dopamine. The heightened social dimension, coupled with the algorithm’s ability to surface content that resonates deeply, strengthens the emotional connection users feel to the platform and to their particular feed, keeping them coming back time after time. Whether it is seeing a new, dopamine-producing post or posting a status that earns many likes and shares, every opening of a social media app yields seemingly endless new content, further reinforcing regular, and often unhealthy, use.

A fine line to tread

As explained above, social media algorithms are key to user engagement. They provide seemingly endless bouts of personalized content and maintain users’ undivided attention through their ability to understand each user’s content preferences. This pervasive influence extends to children, who are increasingly immersed in digital environments from an early age. Social media algorithms can offer constructive experiences for children by promoting educational content discovery, creativity, and social connectivity that would be impossible without a social media platform. Some platforms, like YouTube Kids, leverage algorithms to recommend age-appropriate content tailored to a child’s developmental stage. This personalized curation of interest-based content can enhance learning outcomes and produce a beneficial online experience for children. However, while exposure to age-appropriate content may not harm child viewers, it can still cause problems related to content addiction.

‘Protected Development’

Children are generally known to be naïve and impressionable, meaning full access to the internet can be harmful for their development, as they may take anything they see at face value. The American Psychological Association has said that, “[d]uring adolescent development, brain regions associated with the desire for attention, feedback, and reinforcement from peers become more sensitive. Meanwhile, the brain regions involved in self-control have not fully matured.” Social media algorithms play a pivotal role in shaping the content children can encounter by prioritizing engagement metrics such as likes, comments, and shares. In doing this, social media sites create an almost gamified experience that encourages frequent and prolonged use amongst children. Children also have a tendency to intensely fixate on certain activities, interests, or characters during their early development, further increasing the chances of being addicted to their feed.

Additionally, the addictive nature of social media algorithms poses significant risks to children’s physical and mental well-being. The constant stream of personalized content, notifications, and variable rewards can contribute to excessive screen time, impacting sleep patterns and physical health. Likewise, the competitive nature of engagement metrics may result in a sense of inadequacy or social pressure among young users, leading to issues such as cyberbullying, depression, low self-esteem, and anxiety.

Stop Addictive Feeds Exploitation (SAFE) for Kids

The New York legislature has recognized the anemic state of internet protection for children, identified the rising mental health issues relating to social media among youth, and announced its intention to pass laws that better protect kids online. The Stop Addictive Feeds Exploitation (SAFE) for Kids Act is aimed explicitly at social media companies and their feed-bolstering algorithms. The SAFE for Kids Act is intended to “protect the mental health of children from addictive feeds used by social media platforms, and from disrupted sleep due to night-time use of social media.”

Section 1501 of the Act would essentially prohibit operators of social media sites from providing addictive, algorithm-based feeds to minors without first obtaining parental permission. Instead, the default feed would be a chronologically sorted timeline, the kind more common in the infancy of social media. Section 1502 would require social media platforms to obtain parental consent before sending notifications between 12:00 AM and 6:00 AM and would create an avenue for opting out of access to the platform during those same hours. The Act would also provide for a limit on the overall number of hours a minor can spend on a platform. Additionally, it would authorize the Office of the Attorney General to bring actions to enjoin violations or seek damages or civil penalties of up to $5,000 per violation, and would allow any parent or guardian of a covered minor to sue for damages of up to $5,000 per user per incident, or actual damages, whichever is greater.
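
A minimal sketch of the Section 1501 default, with all names assumed: the personalized ranking is swapped for a reverse-chronological timeline unless verifiable parental consent is on file.

```python
from typing import Callable

def build_feed(posts: list[dict], is_minor: bool, parental_consent: bool,
               personalized_rank: Callable[[list[dict]], list[dict]]) -> list[dict]:
    """Sketch of a Section 1501-style default: minors get a
    reverse-chronological timeline unless consent is verified."""
    if is_minor and not parental_consent:
        return sorted(posts, key=lambda p: p["posted_at"], reverse=True)
    return personalized_rank(posts)

posts = [{"id": "a", "posted_at": 1}, {"id": "b", "posted_at": 2}]
feed = build_feed(posts, is_minor=True, parental_consent=False,
                  personalized_rank=lambda ps: ps)  # engagement ranker stub
print([p["id"] for p in feed])  # ['b', 'a'], newest first
```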

A sign of the times

The Act accurately reflects the public’s growing concerns in its justification section, which details many of the problems with social media algorithms referenced above and the State’s role in curtailing their well-known negative effects on a protected class. The New York legislature has identified the problems that social media addiction can present and has taken necessary steps in an attempt to curtail it.

Social media algorithms will always play an integral role in shaping user experiences. However, their addictive nature should rightfully subject them to scrutiny, especially regarding their effects on children. While social media algorithms offer personalized content and can produce constructive experiences, their addictive nature poses significant risks, prompting legislative responses like the Stop Addictive Feeds Exploitation (SAFE) for Kids Act. Considering the profound impact of these algorithms on young users’ physical and mental well-being, a critical question arises: how can we effectively balance the benefits of algorithm-driven engagement with the need to protect children from potential harm in an ever-evolving digital landscape? The SAFE for Kids Act is a step in the right direction, inspiring critical reflection on the broader responsibility of parents and regulatory bodies to cultivate a digital environment that nurtures healthy online experiences for the next generation.


Sharing is NOT Always Caring

Where There’s Good, There’s Bad

Social media’s vast growth over the past several years has attracted millions of users who rely on these platforms to share content, connect with others, conduct business, and spread news and information. However, social media is a double-edged sword: while it builds communities and bands people together, it erodes privacy in the process. The convenient features we know and love lead to significant exposure of personal information and related privacy risks. Social media companies retain massive amounts of sensitive information about users’ online behavior, including their interests, daily activities, and political views, and algorithms are embedded within these functions to advance company goals such as user engagement and targeted advertising. As a result, the means of achieving those goals collide with consumers’ privacy interests.

Common Issues

In 2022, several U.S. state and federal agencies banned their employees from using TikTok on government-subsidized devices, fearful that foreign governments could acquire confidential information. While much of the information collected through these platforms is voluntarily shared by users, much is also tracked using “cookies,” and you can’t have these with a glass of milk! Tracking cookies allow information about users’ online browsing activity to be stored and then used to target specific interests and personalize content to those likings. Signing up for a social media account and agreeing to the platform’s terms permits companies to collect all of this data.
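
Mechanically, a tracking cookie is just a labeled value a server asks the browser to store and return on future requests. A bare-bones sketch using Python’s standard library (the identifier and domain are invented):

```python
from http import cookies

# Server side: mint an identifier and ask the browser to keep it.
jar = cookies.SimpleCookie()
jar["visitor_id"] = "a1b2c3d4"                      # invented tracking identifier
jar["visitor_id"]["domain"] = ".example-ads.net"    # hypothetical ad-network domain
jar["visitor_id"]["max-age"] = 60 * 60 * 24 * 365   # persists for a year
print(jar.output())
# Set-Cookie: visitor_id=a1b2c3d4; Domain=.example-ads.net; Max-Age=31536000

# On every later request to that domain, the browser sends the cookie back,
# letting the ad network link page visits into one browsing profile.
```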

Social media users leave a “digital footprint” on the internet when they create and use their accounts. Unfortunately, enabling a “private” account does not solve the problem, because data is still retrieved in other ways. For example, likes, shares, comments, buying history, and status updates all increase the likelihood that privacy will be intruded upon.

Two of the most notorious issues related to privacy on social media are data breaches and data mining. Data breaches occur when individuals with unauthorized access steal private or confidential information from a network or computer system. Data mining on social media is the process in which user information is analyzed to identify specific tendencies which are subsequently used to inform research and other advertising functions.

Other privacy problems stem from loopholes around preventive measures already in place. For example, if an individual maintains a private account but shares something with a friend, others connected to that friend can view the post. Moreover, a person’s location can become known even when location settings are turned off, since public Wi-Fi networks and websites can still track users’ whereabouts.

Despite all of these prevailing issues, only a small amount of information is actually protected under federal law. Financial and healthcare transactions, as well as details regarding children, are among the classes of information that receive heightened protection. Most other data gathered through social media can be collected, stored, and used. Social media platforms are largely unregulated with respect to data privacy and consumer data protection. The United States does have a few laws in place to safeguard privacy on social media, but more stringent ones exist abroad.

Social media platforms are required to implement certain procedures to comply with privacy laws, including obtaining user consent, data protection and security, user rights and transparency, and data breach notifications. Platforms typically ask users to agree to their Terms and Conditions to obtain consent and authorization for processing personal data. However, most users are guilty of accepting without actually reading those terms so they can quickly get to using the app.

Share & Beware: The Law

Privacy laws are put in place to regulate how social media companies can act on all of the information users share, or don’t share. These laws aim to ensure that users’ privacy rights are protected.

There are two prominent social media laws in the United States. The first is the Communications Decency Act (CDA), which regulates indecency that occurs through computer networks. Nevertheless, Section 230 of the CDA provides broad immunity against any cause of action that would make internet providers, including social media platforms, legally liable for information posted by other users. Accountability for common issues like data breaches and data misuse is therefore limited under the CDA. The second is the Children’s Online Privacy Protection Act (COPPA), which protects privacy on websites and other online services for children under the age of thirteen. The law prevents social media sites from gathering personal information without first providing written notice of disclosure practices and obtaining parental consent. The challenge remains in actually knowing whether a user is underage, because it is so easy to misrepresent oneself when signing up for an account.

Abroad, the European Union’s General Data Protection Regulation (GDPR) grants users control over when and how their data is processed. The GDPR contains guidelines that restrict personal data from being disseminated on social media platforms, and it gives internet users a long list of rights when their data is shared and processed, including the ability to withdraw previously given consent, access the information collected about them, and delete or restrict personal data in certain situations. The closest domestic analogue to the GDPR is the California Consumer Privacy Act (CCPA), which took effect in 2020. The CCPA regulates what kinds of information social media companies can collect, giving platforms like Google and Facebook much less freedom to harvest user data. Its goal is to make data collection transparent and understandable to users.

Laws at the state level are lacking, and many lawsuits have resulted from this deficiency. A class action was brought over the collection of users’ information by Nick.com. The users, all children under the age of thirteen, sued Viacom and Google for violating privacy laws, arguing that the data the website collected, together with Google’s stored data on its users, was personally identifiable information. A separate lawsuit was brought against Facebook for tracking users as they visited third-party websites. The plaintiffs claimed that Facebook could personally identify and track them through shares and likes as they visited certain healthcare websites, collecting sensitive health information as they browsed, without their consent. The court held, however, that users did consent to these actions when they agreed to Facebook’s data tracking and collection policies, and that the data was not subject to the stricter requirements plaintiffs urged because it was all available on publicly accessible websites. In other words, public information is fair game for Facebook and many other platforms when it comes to third-party sites.

In contrast to these two failed lawsuits, TikTok agreed earlier this year to pay a $92 million settlement resolving twenty-one consolidated lawsuits alleging privacy violations, including claims that the app analyzed users’ faces and collected private data from users’ devices without their permission.

We are living in a new social media era, one so advanced that it is difficult to fully comprehend. Data privacy is a major concern for users who spend large amounts of time sharing personal information, whether they realize it or not. Laws are in place to regulate content and protect users; however, keeping up with the growing presence of social media is not an easy task. Sharing is inevitable, and so are privacy risks.

To share or not to share? That is the question. Will you think twice before using social media?
