End The Loop

Have you ever found yourself stuck in an endless loop of scrolling through social media posts as time flies by? It’s likely. On average, people spend about 2 hours and 24 minutes (144 minutes) on social media daily. It is time for users to take back control of their daily lives. But how? Ethan Zuckerman is at the forefront of empowering users to control their social media algorithms.

 

Unfollow Everything 2.0

When a Facebook user sends a friend request and it is accepted, they automatically “follow” that person, meaning the person’s posts appear in their News Feed. Following every page, friend, and group you are involved with is what creates the infinite loop of posts users get sucked into. Right now, no extension or tool gives users the ability to combat infinite scrolling on social media platforms.

Ethan Zuckerman is in the process of creating a browser extension that lets Facebook users unfollow all of their friends, groups, and pages with the click of a button. 

Here’s how it works: when a user activates the browser extension, Unfollow Everything 2.0 causes the user’s browser to retrieve their list of friends, groups, and pages from Facebook. The tool then combs through that “followed” list, directing the browser to ask Facebook to unfollow each friend, group, and page on the user’s list. The tool would allow the user to select friends, groups, and pages to refollow, or to keep their News Feed blank and view only content that they seek out. It would also encrypt the user’s “followed” list and save it locally on the user’s device, allowing the user to keep the list private while still being able to automatically reverse the unfollowing process. By unfollowing everything, users can eliminate their entire News Feed. This leaves them free to use Facebook without the feed, or to curate it more actively by refollowing only those friends and groups whose posts they really want to see.

Note that this isn’t the same as unfriending. By unfollowing their friends, groups, and pages, users remain connected to them and can look up their profiles at their convenience.
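To make the mechanics concrete, below is a minimal TypeScript sketch of the flow described above. It is only an illustration: the helper functions, types, and storage key are hypothetical stand-ins, since the extension is still in development and Facebook publishes no endpoint for bulk unfollowing.

```typescript
// Minimal sketch of the unfollow flow described above (illustrative only).
// The helpers below are hypothetical stand-ins, not real Facebook APIs; the
// actual extension would drive the user's own logged-in browser session.

interface FollowedEntity {
  id: string;
  name: string;
  kind: "friend" | "group" | "page";
}

// Hypothetical: read the user's followed friends, groups, and pages.
async function fetchFollowedList(): Promise<FollowedEntity[]> {
  return []; // placeholder
}

// Hypothetical: ask Facebook, through the user's own session, to unfollow one entity.
async function requestUnfollow(entity: FollowedEntity): Promise<void> {
  void entity; // placeholder
}

// Hypothetical: encrypt the backup with a key that never leaves the device.
async function encryptLocally(plaintext: string): Promise<string> {
  return plaintext; // placeholder
}

async function unfollowEverything(): Promise<void> {
  const followed = await fetchFollowedList();

  // Keep an encrypted local copy so the unfollowing can later be reversed
  // (fully or selectively) without sharing the list with anyone.
  const backup = await encryptLocally(JSON.stringify(followed));
  localStorage.setItem("unfollow-everything-backup", backup);

  // Unfollow each friend, group, and page in turn, emptying the News Feed
  // while leaving friendships and group memberships intact.
  for (const entity of followed) {
    await requestUnfollow(entity);
  }
}
```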

Tools like Unfollow Everything 2.0 can help users have better and safer online experiences by allowing them to gain control of their feeds without the involvement of government regulation.

 

Unfollow Everything 1.0

The original version of the tool, Unfollow Everything 1.0, was created by British developer Louis Barclay in 2021. Barclay believed that unfollowing everything, while remaining friends with everyone on the app and staying in all of the user’s groups, forced users to use Facebook deliberately rather than as an endless time-suck. “I still remember the feeling of unfollowing everything for the first time. It was near-miraculous. I had lost nothing, since I could still see my favorite friends and groups by going to them directly. But I had gained a staggering amount of control. I was no longer tempted to scroll down an infinite feed of content. The time I spent on Facebook decreased dramatically. Overnight, my Facebook addiction became manageable.”

Barclay eventually received a cease and desist letter and was permanently banned from using the Facebook platform. Meta claims he violated their terms of service.

Meta’s Current Model

Currently, there is no way for users to avoid automatically following every friend, page, and group on Facebook that they have liked or befriended, which forces an endless feed of posts onto their timelines.

Meta’s process for unfollowing everything involves manually going through each friend, group, or business and clicking the unfollow button. This task can take hours, as users tend to have hundreds of connections, which likely deters them from going through the extensive process of regaining control over their social media algorithm.

Meta’s directions for unfollowing someone’s profile:

  • Go to that profile by typing their profile name into the search bar at the top of Facebook.
  • Click at the top of their profile.
  • Click Unfriend/Unfollow, then Confirm

 

 

Making a Change:

Zuckerman filed a preemptive lawsuit over Unfollow Everything 2.0, asking the court to determine whether Facebook users’ news feeds contain objectionable material that users should be able to filter out in order to enjoy the platform. He argues that Unfollow Everything 2.0 is the type of tool Section 230(c)(2) was intended to encourage, giving users more control over their online experiences and an adequate ability to filter out content they do not want.

Zuckerman explains that users currently have little to no control over how they use social media networks: “We basically get whatever controls Facebook wants. And that’s actually pretty different from how the internet has worked historically.”

Meta, in its defense against Unfollow Everything 2.0 (Ethan Zuckerman), is pushing the court to rule that a platform such as Facebook can circumvent Section 230(c)(2) through its terms of service.

Section 230

Section 230 is known for providing immunity to interactive computer services for third-party content that users generate. Section 230(c)(1) states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” While Section 230(c)(1) has been a commonly litigated topic, Section 230(c)(2) has rarely been discussed in front of the courts.

So what is Section 230(c)(2)? Section 230(c)(2) was adopted to allow users to regulate their online experiences through technological means, including tools like Unfollow Everything 2.0. Force v. Facebook (2019) recognized that Section 230(c)(2)(B) provides immunity from claims based on actions that “enable or make available to . . . others the technical means to restrict access to” the same categories of “objectionable” material. Essentially, Section 230(c)(2)(B) empowers people to control their online experiences by providing immunity to the third-party developers of extensions and tools that users can use with social networking platforms such as Facebook.

 


Timeline of Litigation

May 1, 2024: Zuckerman filed a lawsuit asking the court to recognize that Section 230 protects the development of tools that empower social media users to control what they see online.

July 15, 2024: Meta filed a motion to dismiss, arguing that Zuckerman lacked standing at the time.

August 29, 2024: Zuckerman filed an opposition to Meta’s motion to dismiss.

November 7, 2024: The case was dismissed. However, Zuckerman could refile at a later date because his tool was not complete at the time of the suit. Once the tool is developed, it will likely test the law.

Why social media companies do not want this:

Companies like Meta want to prevent these third-party extensions as much as possible because it is in their best interest to keep users continuously engaged. Keeping users on their platform allows Meta to display more advertisements, which is its primary source of revenue. Meta’s large user base gives advertisers an excellent opportunity to have their message reach a broad audience. For example, in 2023, Meta generated $134 billion in revenue, 98% of which came from advertising. By making it difficult for users to adequately control their feeds, Meta can make more money. If Unfollow Everything were released to the public, Meta would likely need to shift its prioritization model.

The potential future of Section 230:

What’s next? Even if the court rules in favor of Zuckerman in a future trial, giving users an expanded ability to control their social media, that likely isn’t the end of the problem. Social media platforms have previously changed their algorithms to prevent third-party tools from being used on their platforms. For example, X (then Twitter) put an end to Block Party’s user tool by changing its API (Application Programming Interface) pricing.

Lawmakers will need to step in to fortify users’ control over their social media algorithms. It is unreasonable to expect massive media conglomerates to willingly give up control in a way that would negatively affect their ability to generate revenue.

For now, if users wish to take the initiative and control their social media usage, Android and Apple allow their consumers to regulate specific app usage in their phone settings.

Due Process vs. Public Backlash: Is it Time to Cancel Cancel Culture?

Throughout history, people have often challenged and criticized each other’s ideas and opinions. But with the rise of internet accessibility, especially social media, the way these interactions unfold has changed. Now, it’s easy for anyone to call out someone else’s behavior or words online, and the power of social media makes it simple to gather a large group of people to join in. What starts as a single person’s post can quickly turn into a bigger movement, with others sharing the same views and adding their own criticism. This is cancel culture.

Cancel culture has become a highly relevant topic in today’s digital world, especially because it often leads to serious public backlash and consequences for people or companies seen as saying or doing something offensive. The phrase “cancel culture” originated from the word cancel, meaning to cut ties with someone. In the abstract, this concept aims to demand accountability, but it also raises important legal questions. When does criticism go too far and become defamation? How does this online backlash affect a person’s right to fair treatment? And what legal options are available for those who feel unfairly targeted by cancel culture?

 

What Is Cancel Culture?

Cancel culture is a collective online call-out and boycott of individuals, brands, or organizations accused of offensive behavior, often driven by social media. Critics argue that it can lead to mob justice, where people are judged and punished without proper due process. On the other hand, supporters believe it gives a voice to marginalized groups and holds powerful people accountable in ways that traditional systems often fail to. It’s a debate about how accountability should work in a digital age—whether it’s a tool for justice or a dangerous trend that threatens free speech and fairness.

The impact of cancel culture can be extensive, leading to reputational harm, financial losses, and social exclusion. When these outcomes affect a person’s livelihood or well-being, the legal implications become significant, because public accusations, whether true or false, can cause real damage.

In a Pew Research study from September 2020, 44% of Americans reported being familiar with the term “cancel culture,” with 22% saying they were very familiar. Familiarity with the term varies by age, with 64% of adults under 30 aware of it, compared to 46% of those ages 30-49 and only 34% of people 50 and older. Individuals with higher levels of education are also more likely to have heard of cancel culture. Political affiliation shows little difference in awareness, although more liberal Democrats and conservative Republicans tend to be more familiar with the term than their moderate counterparts.

 

Cancel Culture x Defamation Law

In a legal context, defamation law is essential in determining when online criticism crosses the line. Defamation generally involves a false statement presented as fact that causes reputational harm.

To succeed in a defamation lawsuit, plaintiffs must show:

  • a false statement purporting to be fact;
  • publication or communication of that statement to a third person;
  • fault amounting to at least negligence; and
  • damages, or some harm caused to the reputation of the person or entity who is the subject of the statement.

US Dominion, Inc. v. Fox News Network, Inc. is a defamation case highlighting how the media can impact reputations. Dominion sued Fox News for $1.6 billion, claiming the network falsely accused it of being involved in election fraud during the 2020 presidential election. Fox News defended itself by saying that it was simply reporting on claims made by others, even if those claims turned out to be false. The case was settled in April 2023 for $787.5 million, showing that media outlets can be held accountable when they spread information without regard for the truth. This is similar to how cancel culture works – individuals or companies can face backlash and reputational damage based on viral accusations that may not be fully verified. Ultimately, the case highlights how defamation law can provide legal recourse for those harmed by false public statements while emphasizing the balance between free speech and accountability in today’s fast-paced digital environment.

 

Free Speech vs. Harm: The Tensions of Cancel Culture

Cancel culture brings to light the ongoing tension between free speech and reputational harm. On one hand, it provides a platform for people to criticize others and hold them accountable for their actions. However, the consequences of these public accusations can be severe, leading to job loss, emotional distress, and social isolation—sometimes even beyond what the law might consider fair.

While the First Amendment protects free speech, it doesn’t cover defamatory or harmful speech. This means people can face consequences for their words, especially when they cause harm. But in the realm of cancel culture, these consequences can sometimes feel disproportionate, where the public reaction can go beyond what might be considered reasonable or just. This raises concerns about fairness and justice – whether the punishment fits the crime, especially when the public can amplify the damage in ways that the legal system may not address.

In Cajune v. Indep. Sch. Dist. 194, the Eighth Circuit addressed a First Amendment issue regarding the display of “Black Lives Matter” (BLM) posters in classrooms. The case revolves around whether the school district’s policies, which allow teachers to choose whether to display these posters, restrict or support free speech. The plaintiffs argue that this limitation on expression resembles the broader dynamics of cancel culture, where certain viewpoints can be suppressed or silenced. Much like cancel culture, where individuals or ideas are “canceled” for holding or expressing controversial views, this case touches on how institutions control public expression. If the district restricts messages like “All Lives Matter” or “Blue Lives Matter,” it could be seen as institutional “canceling” of dissenting or unpopular opinions, which shows how cancel culture can restrict diverse speech. This illustrates the clash between promoting free speech and managing controversial messages in public spaces.

 

New York’s Anti-SLAPP Law

New York’s Anti-SLAPP (Strategic Lawsuit Against Public Participation) law is also highly relevant in the context of cancel culture, especially for cases involving public figures. This statute protects defendants from lawsuits intended to silence free speech on matters of public interest. In 2020, New York amended the law to broaden protections, allowing it to cover speech on any issue of public concern.

In Gottwald v. Sebert (aka Kesha v. Dr. Luke), New York’s Court of Appeals upheld a high legal standard for defamation claims made by public figures, by requiring them to prove actual malice. This means Dr. Luke would need to show that Kesha knowingly made false statements or acted with reckless disregard. The court’s decision highlights the strong free speech protections that apply to public figures, making it difficult for them to win defamation cases unless they provide clear evidence of malice. This reflects how cancel culture incidents involving public figures are subject to stricter legal standards.

 

Social Media Platforms: Responsibility and Liability

Social media platforms like Twitter, Facebook, and Instagram play an important role in cancel culture by enabling public criticism and allowing rapid, widespread responses. Section 230 shields platforms from liability for user-generated content, so they typically aren’t held liable if users post defamatory or harmful content. However, recent Supreme Court decisions upholding Section 230 protections highlight the tension between free speech and holding platforms accountable. These decisions have affirmed that platforms aren’t liable for third-party content, which affects the spread of cancel culture by limiting individuals’ ability to hold platforms accountable for hosting potentially defamatory or harmful content.

 

Legal Recourse for the Cancelled

For individuals targeted by cancel culture, legal options are limited but exist. Potential actions include:

  • Defamation lawsuits: If individuals can prove they were defamed, they may recover damages.
  • Privacy claims: Those whose personal information is shared publicly without consent may have grounds for a privacy claim.
  • Wrongful termination suits: If cancel culture leads to job loss, employees may have grounds for legal action if the termination was discriminatory or violated their rights.

Pursuing legal action can be difficult, especially given New York’s high standard for defamation and its expanded anti-SLAPP protections. In cases involving public figures, plaintiffs face many obstacles due to the requirement of proving actual malice.

 

Looking Ahead: Can the Law Catch Up with Cancel Culture?

As cancel culture continues to evolve, legislatures will continue to face challenges in determining how best to regulate it. Reforms in privacy laws, online harassment protections, and Section 230 could provide clearer boundaries, but any change will have to account for free speech protections. Cancel culture poses a unique legal and social challenge, as public opinion on accountability and consequences continues to evolve alongside new media platforms. Balancing free expression with protections against reputational harm will likely remain a major challenge for future legal developments.

When Social Media Brand Deals Sour: The Case for Promissory Estoppel in Influencer Agreements

In a world now driven by social media, the advertising industry has been taken over by influencer brand deals and paid product placement.  Businesses, both small and large, are utilizing, and sometimes relying solely on, influencers to promote their products.  Most of these brand deals are negotiated through formal agreements and contracts, clearly outlining the actions expected of each party.  One common way businesses engage in this marketing is by providing influencers with their products in exchange for exposure.  This typically involves the influencer posting a photo or video on social media that reviews or recommends the product to their audience.  Thus, a review or recommendation from an influencer with a bigger audience is far more valuable.  However, for smaller businesses that do not have prepared contracts for this type of exchange, reliance on informal agreements by influencers to review a product can lead to misunderstandings.  This blog post explores a recent TikTok controversy where this type of scenario unfolded, involving beauty influencer Mikayla Nogueira and Matthew Stevens, the owner of Illusion Bronze, a custom self-tanning product.  Could promissory estoppel, a doctrine in contract law, provide a solution where there are informal agreements for a product review?

The Controversy: Mikayla Nogueira and Illusion Bronze

In 2022, Matthew Stevens, the owner of Illusion Bronze, reached out to beauty influencer Mikayla Nogueira via Instagram direct messages, seeking a video reviewing his custom sunless tanner line.  Following their interaction, Nogueira allegedly agreed to review the product “ASAP.”  Nogueira is known for her product reviews, and previously mentioned in one of her videos that one challenge with reviewing products from small, independent brands is their limited inventory.  These startup brands often struggle to handle the sudden surge in demand from her audience, leading to website crashes and quick sellouts, leaving her audience frustrated and feeling snubbed.


Relying on her promise to review the product “ASAP,” and keeping Mikayla’s concerns in mind, Stevens took out a loan through Shopify and purchased $10,000 worth of inventory, preparing for the surge in sales that typically accompanies a product review from a major influencer. Stevens waited some time with no review, then reached out to Mikayla for reassurance that she would stick to her promise (and even received it).  After a few months, Stevens posted a video explaining the situation, accusing Nogueira of failing to honor her promise and blaming her for the financial harm to his business.


 Nogueira responded by stating that there was no formal agreement obligating her to review the product and that Stevens’ financial decision was his own.  The dispute escalated via public video responses to one another, with Nogueira insisting that Stevens was trying to rely on her audience for his success, while Stevens felt that her promise was the only reason he took his costly steps. Despite there being no formal agreement between the two requiring Nogueira to review the product in a certain time frame, this situation poses an interesting legal question: Could Stevens have a valid claim under promissory estoppel? Or is this just a risk of the business, as some seasoned public figures have commented:

Bethenny Frankel, for example, weighed in on TikTok: “I’m team @Mikayla Nogueira ALL DAY errday.”

An implied agreement between the two?

Promissory estoppel is a principle in contract law that enforces a promise even in the absence of a formal contract, provided certain conditions are met.  Under this doctrine, if one party makes a promise, and the other party reasonably relies on that promise to their detriment, the promisor is estopped from arguing that the promise is unenforceable due to a lack of formal contract.

To succeed in a promissory estoppel claim, the following elements must be met:

  1. A clear and definite promise. There must be a clear promise made by the promisor.
  2. Reasonable reliance. The promisee must have reasonably relied on the promise.
  3. Detriment. The promisee must have suffered a detriment due to their reliance on the promise.
  4. Injustice. Some remedy is necessary to avoid an injustice.

In this case, Nogueira’s message indicating she would review the product “ASAP” might be considered a clear enough promise to satisfy the first requirement.  Nogueira publicly expressed a valid concern about reviewing small businesses that are not capable of handling a large influx of orders.  Thus, Stevens’ advance purchase of $10,000 worth of product might have been a reasonable step to take in reliance on her promise to review.  Since he was expecting a review from her, likely leading to a high influx of orders, he took steps to prepare his business for this scenario and avoid consumer frustration.  Lastly, Stevens’ financial loss from the unsold inventory, and any interest on the Shopify loan, may be considered a detriment resulting from his reliance.  With those first three elements met, there is a possibility that injustice could only be avoided by treating their exchange as a legally binding promise.

From a legal standpoint, Nogueira might defend her position by claiming that her statement was not a formal promise but merely an expression of intent.  This is especially possible given the fact that in her responses, she claimed that she “was going to get to it”, admitting that she took too long and should have made the video quicker. With that, there may be a valid argument that while there was some informal agreement, there was no urgency or deadline in place.  This fact might make it unreasonable to hold Nogueira liable for an implied contract that she did not technically breach (yet).  She might also argue that Stevens acted unreasonably by relying on her statement without securing a formal agreement or awaiting some notification from Nogueira that she had recorded the video and was preparing to post it.

This controversy raises important considerations about the relationship between influencers and brands, and how these types of marketing agreements should be arranged.  In traditional commercial settings, contracts mitigate the risk of situations like the Illusion Bronze controversy by ensuring that both parties understand their obligations.  However, social media interactions are far more casual.  The influencer economy may at times operate on less formal interactions, where DMs and verbal agreements may form the basis of understanding between parties.

Implications for Influencers and Brands

For influencers, the takeaway is clear: avoid making promises unless you are prepared to fulfill them, or at least have a standard intake process for brand deals that clearly outlines obligations and timelines.  This also serves as a lesson for influencers to be mindful that businesses, especially small brands, might be making decisions based on their interactions, since influencers may serve as a direct liaison to their target audience.  Simple steps such as including disclaimers in communications and clarifying the existence of any obligations or guarantees at every step could go a long way toward avoiding miscommunication and misplaced reliance.

For upcoming independent brands, let this be a lesson to formalize agreements before making financial decisions.  While it may feel natural to seek out influencer marketing informally on social media, small businesses should prioritize retaining their capital no matter what these interactions sound like.  There are real economic stakes when it comes to making investments based on words.

Conclusion

The economy is clearly evolving with social media, and along with it evolve the business efforts and strategies of brands everywhere.  However, the legal principles governing these interactions remain grounded in traditional doctrines such as promissory estoppel.  These doctrines and the law may not evolve as fast as e-commerce, which could make the difference in an influencer’s liability to brands that seek exposure.  As influencer marketing becomes key in the online marketplace, trust and reputation are everything.  Therefore, both parties stand to benefit from clearer terms and a better understanding of their obligations to each other.

 

 

 

The Internet, Too Big for One Nation to Handle?

Social media is a powerful tool for individuals to exchange ideas and messages quickly. What makes social media so powerful? It allows individuals to spread and exchange information in an instant, and with anonymity. Social media platforms (such as Instagram, X, and TikTok) are built in a way that allows accounts to be created in which an individual, or even an entity, may disguise themselves and post. Platforms are aware of this power to create an account and spread information, whether it is true or not. This power has spawned issues to which platforms must adjust. However, the liability and the need for platforms to adjust depend on the jurisdiction.

Here in the United States, we recognize the need for ethics and the responsibility of upholding a fiduciary duty, regardless of whether an individual is a lawyer or works in another profession that requires such a duty. In law, we have the Model Rules of Professional Conduct. We recognize that certain positions of power in business transactions can be used abusively to take advantage of the other party. However, there is a limit to that liability, as the United States is a capitalist, free-market economy. Do we want individuals to enter business deals without doing their due diligence, on the theory that the law could remedy losses caused by a business partner who failed to uphold their duties? No; we would see no end to the number of cases that could be heard, and at some point it would be unfair to the courts and to business partners. Bad business decisions are made all the time, just as bad business deals are made. Businesses must adjust and do the work required to ensure they do not make bad choices and find themselves suffering for them. Businesses are mature enough to properly face the consequences of a bad decision and work through it to get back into a profitable position.

Well, should these platforms owe a certain duty to the individuals on them? With Section 230 of the 1996 Communications Decency Act, the position we have taken as a country seems to be that it is the user, not the provider of the service, who is liable for questionable activity that is posted. Therefore, liability for activity that can be deemed hateful, inciting violence, or harmful is placed on the account or user who posted it, not on the individual or company that gives the user a platform (a place where the user may spread such questionable activity to other users). According to PBS, the law’s roots date back to the 1950s, when bookstores were held liable for the content of the books they sold; the Supreme Court determined that holding someone liable for someone else’s content created a “chilling effect.”


EUROPE

The European Union (EU) takes a different view of platform liability. The EU, a union of European nations that come together in the pursuit of peace and cooperation, believes consumer protection should be the priority when it comes to online “digital services.”

The EU established the Digital Services Act, laying out requirements for digital service providers, online intermediaries, and platforms intended to ensure “user safety, protect fundamental rights, and create a fair and open online platform environment.” It is fair to say that Europe sees the internet as too big for any one nation to manage on its own. By February 2024, the Digital Services Act applied to all of the types of platforms covered by the EU’s terminology.

The EU chooses to acknowledge that the internet and its growth are unpredictable. Due to this unpredictability, it is best to provide users with some protection, as it is evident the internet is needed to complete everyday tasks. If the internet is something individuals need in order to survive in the modern technological world, then it must be regulated. The most effective way to regulate global platforms is to have a group of nations, such as the EU, come together and decide on a unified set of regulations.

Russia makes its own attempts to regulate platforms within its borders. The way Russia has been handling Google is a prime example. According to Ty Roush of Forbes, “The Russian government is attempting to fine Google about $20 decillion, a figure with 34 zeros that’s exponentially larger than the world’s economy, over a decision by YouTube—owned by Google’s parent Alphabet—to block channels run by Russian state-run media, according to Russian officials.”

In the last few years, Russia has been holding Google and its Russian subsidiary accountable for the allegations Russia has presented. In response, Google has decided to withdraw from Russia over time, to the point that its presence there will eventually be nonexistent. Google will not, and cannot, pay the fine Russia is demanding.

It seems that regulating a platform like Google, whose presence spans the world across many nations, will require more than one nation to come together and determine a standard for platforms. The platforms have established that they are here to stay for a while. A single nation cannot handle a platform of Google’s scale. Google can decide to leave a large, profitable market like Russia because it remains established in most nations across the globe, which outweighs the benefits of staying in Russia and fighting its rules to keep a presence there.

North America/U.S.A


North America, home to large, influential nations such as Mexico, Canada, and the United States, does not have a unified body like the European Union. Therefore, regulation is left to each individual nation to come up with its own set of rules, or to decide whether it wants any at all. With Section 230, the United States has relieved platforms of liability. However, we are aware that individuals’ mental health has worsened as the internet has grown. It does not look like the issue of mental health in the nation is getting any better, and it certainly will not improve while platforms provide places for individuals, or even bots, to spread harmful activity.

It is time for nations across the globe to come together and acknowledge that the internet and its platforms are a global matter in which users are very susceptible to harm. The only way to protect global citizens from the harm platforms can cause is to establish a unified mindset on handling the internet. It is best to see just how effective the DSA proves to be for the EU, and perhaps, one day, the United Nations may establish a treaty among nations under which platforms are regulated with users’ safety as the priority.

Meta AI: Innovation, but at what cost?

Artificial intelligence has become the cutting edge of technology for decades to come, and at this point nobody knows its complete capabilities. AI is limitless. More recent advancements include social media companies developing their own AI systems to enhance the user experience, allowing users to generate text and images, get help navigating the app, and more.  So what’s the issue? Companies like Meta are creating their own AI for their platforms as open-source models, which can pose significant privacy risks to their users.

What is Meta & Meta AI?

Meta, formerly known as Facebook, Inc., rebranded to encompass a variety of platforms under one corporation, including widely used social networks such as Instagram and WhatsApp, which connect millions of people around the globe. Meta launched its AI platform, “Meta AI,” in April of 2024; it can answer questions, generate photos, search Instagram Reels, provide emotional support, and assist with tasks like solving schoolwork problems, writing emails, and more.


Open-Source vs. Closed-Source

Meta has established that its AI is an open-source model, but what’s the difference? AI models can be either open-source or closed-source. An open-source AI model means that the data and software are publicly available to anyone. By sharing code and data, developers can learn from each other and continue to improve the model. Users of an open-source AI model have the ability to examine the AI systems they use, which can promote transparency. However, it can be difficult to regulate bad actors.

Closed-source models keep their data and software restricted strictly to their owners and developers. By keeping their code and data secret, closed-source AI companies can protect their trade secrets and prevent unauthorized access or copying. Closed-source AI, however, tends to be less innovative, as third-party developers cannot contribute to future technological advancements of the model. It is also difficult for users to examine and audit the model, because they do not have access to the underlying data and software.

The Cost:

In order to train this open-source model, Meta used a variety of user data. What data exactly is Meta taking from you? To highlight some of the more controversial categories, it includes: content that users create, messages users send and receive that aren’t end-to-end encrypted, users’ engagement with posts, purchases users make through Meta, users’ contact info, device information, GPS location, IP address, and cookie data. All of this, according to Meta’s privacy policy, is permitted for its use. Meta discloses in its privacy policy that it “may share certain information about you that is processed when using the AI’s with third parties who help us provide you with more relevant or useful responses.” This includes personal information.

By committing to open-sourcing its AI, Meta poses a significant privacy risk to its users. While Meta has already noted that it may share personal information with third parties in certain situations, outside developers also have the opportunity to expose vulnerabilities in the model by reverse-engineering the code to extract the data the model was trained on, which in Meta’s case can include the personal information of the users whose data trained the model. Additionally, third parties will now have access to a wide variety of consumer information without consumers giving them direct consent. Companies can then use this information to their commercial advantage.

Meta has stated that it has taken exemplary steps to ensure the protection of its users’ data from third parties, including the development of third-party oversight and management programs that mitigate risk and implement what it believes to be the necessary safeguards. Of note, Facebook has been breached on more than one occasion, most notably in the Cambridge Analytica scandal, in which Cambridge Analytica harvested the personal information of more than 10 million Facebook users for voter profiling and targeting.

Innovative:

Upon release, there were privacy concerns among users because Meta’s AI model was open-source. Mark Zuckerberg, CEO of Meta, issued a public statement highlighting the benefits of the AI model being open-source. To summarize:

  1. Open-source AI is good for developers because it gives them the technological freedom to control the software, and open-source models are developing at a faster rate than closed models.
  2. The model will allow Meta to continue to be competitive, allowing it to spend more money on research.
  3. Being open-source gives the world an opportunity for economic growth and better security for everyone, because it will allow Meta to be at the forefront of AI advancement.

Effectively, Meta’s open-source model is beneficial for ensuring consistent technological achievement for the company.


What Users Can Do:

In reality, it is difficult to protect open-source AI from bad actors. Therefore, governmental action is needed to protect users’ personal data from being exploited. Recently, 12 states have taken the initiative to protect users. For example, California amended the CCPA to protect users’ personal information from being used to train AI models, requiring that users affirmatively authorize the use of their information; otherwise, it is prohibited. As for the rest of the nation, there is little to no state or federal regulation of users’ privacy. The American Data Privacy and Protection Act failed to pass a congressional vote, leaving millions of people defenseless.

For users who are looking to stop Meta from using their data, there is no opt-out button in the United States. However, according to Meta, depending on a user’s settings, a photo or post can be kept from being used by making it private. Unfortunately, this is not retroactive, and previously collected data will not be removed from the model.

While Meta looks to be at the forefront of AI, its open-source model poses serious security risks for its users due to a lack of regulation and questionable protections.

Parents Using Their Children for Clicks on YouTube to Make Money

With the rise of social media, an increasing number of people have turned to these platforms to earn money. A report from Goldman Sachs reveals that 50 million individuals are making a living as influencers, and this number is expected to grow by 10% to 20% annually through 2028. Alarmingly, some creators are exploiting their children in the process by not giving them fair compensation.


How Do YouTubers Make Money? 

You might wonder how YouTubers make money from their videos. YouTube pays creators for views through ads that appear in their content. The more clicks they get, the more money they make. Advertisers pay YouTube a set rate for every 1,000 ad views; YouTube keeps 45% of the revenue while creators receive the remaining 55%. To earn money from ads, creators must be eligible for the YouTube Partner Program (YPP), which allows revenue sharing from ads played on the influencer’s content. On average, a YouTuber earns about $0.018 per view, which totals approximately $18 for every 1,000 views. As of September 30, 2024, the average annual salary for a YouTube channel in the United States is $68,714, with well-known YouTubers earning between $48,500 and $70,500, and top earners making around $89,000. Some successful YouTubers even make millions annually.

In addition to ad revenue, YouTubers can earn through other sources like AdSense, which also pays an average of $18 per 1,000 ad views. However, only about 15% of total video views count toward the required 30 seconds of ad view time needed for an ad to qualify for payment. Many YouTubers also sell merchandise such as t-shirts, sweatshirts, hats, and phone cases. Channels with over 1 million subscribers often have greater opportunities for sponsorships and endorsements. Given the profit potential, parents may be motivated to create YouTube videos that attract significant views. Popular genres that feature kids include videos unboxing and reviewing new toys, demonstrating how certain toys work, participating in challenges or dares, creating funny or trick videos, and engaging in trending TikTok dances.
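As a rough back-of-the-envelope illustration of the averages cited above (these are estimates drawn from the figures in this post, not official YouTube rates), ad earnings scale roughly linearly with views:

```typescript
// Back-of-the-envelope math from the averages cited above (not official rates).
// Advertisers pay per 1,000 ad views; YouTube keeps 45% and the creator keeps 55%.
const AVG_CREATOR_EARNINGS_PER_VIEW = 0.018; // ≈ $18 per 1,000 views

function estimateCreatorAdEarnings(views: number): number {
  return views * AVG_CREATOR_EARNINGS_PER_VIEW;
}

console.log(estimateCreatorAdEarnings(1_000));     // ≈ $18
console.log(estimateCreatorAdEarnings(100_000));   // ≈ $1,800
console.log(estimateCreatorAdEarnings(1_000_000)); // ≈ $18,000
```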


Child Labor Laws Relating to Social Media 

Only a few states have established labor laws specifically for child content creators, with California and Illinois being notable examples. Illinois was one of the first states to implement such regulations, spurred by 16-year-old Shreya Nallamothu, who brought the issue of parents profiting from their children’s appearances in their content to the attention of Governor J.B. Pritzker. Shreya noted that she “kept seeing cases of exploitation” during her research and felt compelled to act. In a local interview, she explained that her motivation for the change was triggered by “…very young children who may not understand what talking to a camera means, they can’t grasp what a million viewers look like. They don’t comprehend what they’re putting on the internet for profit, nor that it won’t just disappear, and their parents are making money off it.”

As a result, Illinois passed SB 1782, which took effect on July 1, 2024. This law mandates that parent influencers compensate their children for appearing in their content. It amends the state’s Child Labor Law to include children featured in their parents’ or caregivers’ social media. Minors 16 years old and under must be paid 15% of the influencer’s gross earnings if they appear in at least 30% of monetized content. Additionally, they are entitled to 50% of the profits based on the time they are featured. The adult responsible for creating the videos is required to set aside the gross earnings in a trust account within 30 days for the child to access when they turn 18. The law also grants children the right to request the deletion of content featuring them. This part of the legislation is a significant step in ensuring that children have some control over the content that follows them into adulthood. If the adult fails to comply, the minor can sue for damages once they become an adult. Generally, children who are not residents of Illinois can bring an action under this law as long as the alleged violation occurred within Illinois, the law applies to the case, and the court has jurisdiction over the parent (defendant).
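As a rough illustration of how the 30% threshold and 15% set-aside described above might interact, here is a simplified TypeScript sketch. It reflects only one simplified reading of the law; the statute’s actual definitions (what counts as “featured,” the 50%-of-profits provision, time-based apportionment) are more involved.

```typescript
// Simplified, illustrative reading of the SB 1782 set-aside described above.
// The statute's actual definitions and calculations differ in detail.
interface ContentPeriod {
  grossEarnings: number;        // influencer's gross earnings from monetized content
  monetizedVideos: number;      // monetized videos posted in the period
  videosFeaturingMinor: number; // of those, how many feature the minor
}

function requiredTrustSetAside(period: ContentPeriod): number {
  const featuredShare = period.videosFeaturingMinor / period.monetizedVideos;

  // The compensation duty is triggered when the minor appears in at least
  // 30% of the monetized content.
  if (featuredShare < 0.30) {
    return 0;
  }
  // Under this simplified reading, 15% of gross earnings goes into a trust,
  // deposited within 30 days and accessible when the child turns 18.
  return period.grossEarnings * 0.15;
}

// Example: $20,000 gross, minor in 12 of 30 monetized videos (40%) -> $3,000 set aside.
console.log(requiredTrustSetAside({
  grossEarnings: 20_000,
  monetizedVideos: 30,
  videosFeaturingMinor: 12,
}));
```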

California was the second state to pass a law on this. The California Content Creator Rights Act was authored by Senator Steve Padilla (D-San Diego) and passed in August 2024. This law requires influencers who feature minors in at least 30% of their videos to set aside a proportional percentage of their earnings in a trust for the minor to access upon reaching adulthood. This bill is broader than Illinois’s bill, but they both aim to ensure that creators who are minors receive fair financial benefits from the use of their image. 

There is hope that other states will look to the Illinois and California laws, which give child influencers fair financial benefits for the use of their image in their parents’ videos, and create similar laws of their own. Parents should not exploit their kids by making a profit off of them.


Can Social Media Platforms Be Held Legally Responsible If Parents Do Not Pay Their Children? 

Social media platforms will probably not be held liable because of Section 230 of the Communications Decency Act of 1996. This law protects social media platforms from being held accountable for users’ actions and instead holds the user who made the post responsible for their own words and actions. For example, if a user posts defamatory content on Instagram, the responsibility lies with the user, not Instagram.  

Currently, the only states that have requirements for parent influencers to compensate their children featured on their social media accounts are Illinois and California. If a parent in these states fails to set aside money for their child as required by law, most likely only the parent will be held liable. It is unlikely that social media platforms will be held responsible for violations by the parent because of Section 230.

Privacy Please: Privacy Law, Social Media Regulation and the Evolving Privacy Landscape in the US

Social media regulation is a touchy subject in the United States.  Congress and the White House have proposed, advocated, and voted on various bills aimed at protecting people from data misuse and misappropriation, misinformation, harms suffered by children, and the implications of vast data collection. Some of the most potent concerns about social media stem from the use and misuse of information by the platforms, from the method of collection to notice of collection and use of collected information. Efforts to pass a bill regulating social media have been frustrated, primarily by the First Amendment right to free speech. Congress has thus far failed to enact meaningful regulation of social media platforms.

The way forward may well be through privacy law. Privacy laws give people some right to control their own personhood, including their data, their right to be left alone, and how and when people see and view them. Privacy laws originated in their current form in the late 1800s, with the impetus being one’s freedom from constant surveillance by paparazzi and reporters and the right to control one’s own personal information. As technology changed, our understanding of privacy rights grew to encompass rights in our likeness, our reputation, and our data. Current US privacy laws do not directly address social media, and a struggle is currently playing out between the vast data collection practices of the platforms, immunity for platforms under Section 230, and private rights of privacy for users.

There is very little federal privacy law, and what does exist is narrowly tailored to specific purposes and circumstances in the form of specific bills. Some states have enacted their own privacy law schemes, with California at the forefront and Virginia, Colorado, Connecticut, and Utah following in its footsteps. In the absence of a comprehensive federal scheme, privacy law is often judge-made and offers several private rights of action for a person whose right to be left alone has been invaded in some way. These are tort actions available for one person to bring against another for a violation of their right to privacy.

Privacy Law Introduction

Privacy law policy in the United States is premised on three fundamental personal rights to privacy:

  1. Physical right to privacy- Right to control your own information
  2. Privacy of decisions– such as decisions about sexuality, health, and child-rearing. These are the constitutional rights to privacy. Typically not about information, but about an act that flows from the decision
  3. Proprietary Privacy – the ability to protect your information from being misused by others in a proprietary sense.

Privacy Torts

Privacy law, as it concerns the individual, gives rise to four separate tort causes of action for invasion of privacy:

  1. Intrusion upon Seclusion- Privacy law provides a tort cause of action for intrusion upon seclusion when someone intentionally intrudes upon the reasonable expectation of seclusion of another, physically or otherwise, and the intrusion is objectively highly offensive.
  2. Publication of Private Facts- One gives publicity to a matter concerning the private life of another that is not of legitimate concern to the public, and the matter publicized would be objectively highly offensive. The First Amendment provides a strong defense for publication of truthful matters when they are considered newsworthy.
  3. False Light- One who gives publicity to a matter concerning another that places the other before the public in a false light is liable when the false light in which the other was placed would be objectively highly offensive and the actor had knowledge of, or acted in reckless disregard as to, the falsity of the publicized matter and the false light in which the other would be placed.
  4. Appropriation of Name and Likeness- Appropriation of one’s name or likeness to the defendant’s own use or benefit. There is no appropriation when a person’s picture is used to illustrate a non-commercial, newsworthy article. This is usually commercial in nature but need not be. The appropriation could be of “identity”: it need not be misappropriation of a name, but could be of the reputation, prestige, social or commercial standing, public interest, or other value in the plaintiff’s likeness.

These private rights of action are currently unavailable for use against social media platforms because of Section 230 of the Communications Decency Act, which provides broad immunity to online providers for posts on their platforms. Section 230 prevents any of the privacy torts from being raised against social media platforms.

The Federal Trade Commission (FTC) and Social Media

Privacy law can implicate social media platforms when their practices become unfair or deceptive to the public, through investigation by the Federal Trade Commission (FTC). The FTC is the only federal agency with both consumer protection and competition jurisdiction in broad sectors of the economy. The FTC investigates business practices that are unfair or deceptive. The FTC Act, 15 U.S.C. § 45, prohibits “unfair or deceptive acts or practices in or affecting commerce” and grants the FTC broad jurisdiction over the privacy practices of businesses. A trade practice is unfair if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers themselves and is not outweighed by countervailing benefits to consumers or competition. A deceptive act or practice is a material representation, omission, or practice that is likely to mislead a consumer acting reasonably in the circumstances, to the consumer’s detriment.

Critically, there is no private right of action in FTC enforcement. The FTC cannot impose fines for Section 5 violations but can provide injunctive relief. By design, the FTC has very limited rulemaking authority and looks to consent decrees and procedural, long-lasting relief as the ideal remedy. The FTC pursues several types of misleading or deceptive policies and practices that implicate social media platforms: notice and choice paradigms, broken promises, retroactive policy changes, inadequate notice, and inadequate security measures. Its primary objective is to negotiate a settlement in which the company submits to certain measures of control or oversight by the FTC for a certain period of time. Violations of these agreements can yield additional consequences, including steep fines and vulnerability to class action lawsuits.

Relating to social media platforms, the FTC has investigated misleading terms and conditions and violations of platforms’ own policies. In In re Snapchat, the platform claimed that users’ posted information disappeared completely after a certain period of time; however, through third-party apps and manipulation of users’ posts off the platform, posts could be retained. The FTC and Snapchat settled, through a consent decree, subjecting Snapchat to FTC oversight for 20 years.

The FTC has also investigated Facebook for violating its privacy policy. Facebook was ordered to pay a $5 billion penalty and to submit to new restrictions and a modified corporate structure that holds the company accountable for the decisions it makes about its users’ privacy, settling FTC charges that it violated a 2012 agreement with the agency.

Unfortunately, none of these measures directly give individuals more power over their own privacy. Nor do these policies and processes give individuals any right to hold platforms responsible for being misled by algorithms using their data, or for intrusion into their privacy by collecting data without allowing an opt-out.

Some of the most harmful social media practices today relate to personal privacy. Some examples include the collection of personal data, the selling and dissemination of data through the use of algorithms designed to subtly manipulate our pocketbooks and tastes, collection and use of data belonging to children, and the design of social media sites to be more addictive- all in service of the goal of commercialization of data.

No comprehensive federal privacy scheme currently exists. Previous bills on privacy have been few and narrowly tailored to relatively specific circumstances and topics, like healthcare and medical data protection under HIPAA, protection of data surrounding video rentals in the Video Privacy Protection Act, and narrow protection for children’s data in the Children’s Online Privacy Protection Act. All of these schemes are outdated and fall short of meeting the immediate need for broad protection of the widely collected and broadly utilized data from social media.

Current Bills on Privacy

Upon request from some of the biggest platforms, outcry from the public, and the White House’s request for federal privacy regulation, Congress appears poised to act. The 118th Congress has pushed privacy law as a priority this term by introducing several bills related to social media privacy. There are at least ten bills currently pending between the House and the Senate addressing a variety of issues and concerns, from children’s data privacy to the minimum age for use and the designation of a new agency to monitor some aspects of privacy.

S. 744 – The Data Care Act of 2023 aims to protect social media users’ data privacy by imposing fiduciary duties on the platforms. The original iteration of the bill was introduced in 2021 and failed to receive a vote. It was re-introduced in March of 2023 and is currently pending. Under the act, social media platforms would have the duty to reasonably secure users’ data from access, to refrain from using the data in a way that could foreseeably “benefit the online service provider to the detriment of the end user,” and to prevent disclosure of users’ data unless the receiving party is also bound by these duties. The bill authorizes the FTC and certain state officials to take enforcement actions upon breach of those duties. States would be permitted to take their own legal action against companies for privacy violations. The bill would also allow the FTC to intervene in enforcement efforts by imposing fines for violations.

H.R. 2701 – Perhaps the most comprehensive piece of legislation on the House floor is the Online Privacy Act. In 2023, the bill was reintroduced by Democrat Anna Eshoo after an earlier version of the bill failed to receive a vote and died in Congress. The Online Privacy Act aims to protect users by providing individuals rights relating to the privacy of their personal information. The bill would also provide privacy and security requirements for the treatment of personal information. To accomplish this, the bill establishes a new agency, the Digital Privacy Agency, which would be responsible for enforcement of the rights and requirements. The new individual privacy rights are broad and include the rights of access, correction, deletion, human review of automated decisions, individual autonomy, the right to be informed, and the right to impermanence, among others. This would be the most comprehensive plan to date. The establishment of a new agency with a task specific to the administration and enforcement of privacy laws would be incredibly powerful. The creation of this agency would be valuable irrespective of whether this bill is passed.

H.R. 821 – The Social Media Child Protection Act is a sister bill to one by a similar name that originated in the Senate. This bill aims to protect children from the harms of social media by limiting children’s access to it. Under the bill, social media platforms would be required to verify the age of every user before they access the platform, either by requiring a valid identity document or by another reasonable verification method, and would be prohibited from allowing users under the age of 16 to access the platform. The bill also requires platforms to establish and maintain reasonable procedures to protect personal data collected from users, and it provides for a private right of action as well as state and FTC enforcement.

S. 1291 – The Protecting Kids on Social Media Act is similar to its counterpart in the House, with slightly less tenacity. It likewise aims to protect children from social media’s harms. Under the bill, platforms must verify their users’ ages, must not allow a user onto the service until their age has been verified, and must bar access for children under 13. The bill also prohibits retention and use of information collected during the age-verification process. Platforms must take reasonable steps to require affirmative consent from the parent or guardian of a minor who is at least 13 years old before a minor account is created, and must reasonably allow the parent to later revoke that consent. The bill further prohibits using data collected from minors for algorithmic recommendations, and it would require the Department of Commerce to establish a voluntary program for secure digital age verification for social media platforms. Enforcement would be through the FTC or state action.

S. 1409 – The Kids Online Safety Act, proposed by Senator Blumenthal of Connecticut, also aims to protect minors from online harms. This bill, like the Online Safety Bill, establishes fiduciary duties for social media platforms regarding the children who use their sites. The bill requires that platforms act in the best interest of minors using their services, including by mitigating harms that may arise from use, sweeping in online bullying and sexual exploitation. Social media sites would be required to establish and provide access to safeguards, such as settings that restrict access to minors’ personal data, and to give parents tools to supervise and monitor minors’ use of the platforms. Critically, the bill establishes a duty for social media platforms to create and maintain research portals for non-commercial purposes to study the effect that corporations like the platforms have on society.

Overall, these bills indicate Congress’s creative thinking and commitment to broad privacy protection for social media users. I believe the establishment of a separate governing body, rather than the FTC, which lacks the powers needed to compel compliance, is a necessary step. Recourse for violations on par with the EU’s new regulatory scheme, chiefly fines running into the billions, could also help.

Many of the bills, toward varied ends, establish new fiduciary duties for the platforms in preventing unauthorized use of data and harms to children. There is real promise in this scheme: duties of loyalty, diligence, and care owed to another party have a sound basis in many areas of law and would be readily understood in implementation.

The notion that platforms would need to be vigilant in knowing their content, studying its effects, and reporting those effects may do the most to create a stable future for social media.

Making platforms legally responsible for policing and enforcing their own policies and terms and conditions is another opportunity to incentivize them. The FTC currently investigates policies that are misleading or unfair, sweeping in the social media sites, but platforms could also be made legally responsible for enforcing their own policies on, for example, age limits, hate, and inappropriate content.

What would you like to see considered in Privacy law innovation for social media regulation?

Social Media, Minors, and Algorithms, Oh My!

What is an algorithm and why does it matter?

Social media algorithms are intricately designed data organization systems aimed at maximizing user engagement by sorting and delivering content tailored to individual preferences. At their core, these algorithms collect extensive user data and employ machine learning techniques to understand and predict user behavior. They analyze hundreds of thousands of data points, including past interactions, likes, shares, content preferences, time spent viewing content, and social connections, to curate a personalized feed for each user. The fundamental objective is to capture and maintain the user’s attention, expose them to an optimal amount of advertising, and keep them engaged for longer, which in turn drives more profit for the platform.
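To make the mechanics concrete, the sketch below shows, in very simplified form, how an engagement-driven ranking might work. It is purely illustrative: the signal names, weights, and scoring function are assumptions invented for the example, not any platform’s actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    creator_followed: bool   # is the viewer socially connected to the creator?
    recent_likes: int        # crude popularity signal

# Hypothetical per-user interest scores learned from past interactions (illustrative only).
user_topic_affinity = {"sports": 0.9, "cooking": 0.7, "news": 0.4}

def predicted_engagement(post: Post) -> float:
    """Estimate how likely this user is to engage with a post."""
    score = user_topic_affinity.get(post.topic, 0.1)     # learned topic interest
    score += 0.5 if post.creator_followed else 0.0       # social-connection boost
    score += min(post.recent_likes, 1000) / 1000 * 0.3   # popularity boost
    return score

def build_feed(candidates: list[Post]) -> list[Post]:
    """Rank candidate posts so the content predicted to be most engaging appears first."""
    return sorted(candidates, key=predicted_engagement, reverse=True)

feed = build_feed([
    Post("cooking", creator_followed=False, recent_likes=800),
    Post("sports", creator_followed=True, recent_likes=120),
    Post("news", creator_followed=False, recent_likes=40),
])
for post in feed:
    print(post.topic, round(predicted_engagement(post), 2))
```

Even in this toy version, the design choice is visible: every input to the score is a proxy for “will this keep the user looking,” not for whether the content is good for the user.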

Addiction comes in many forms

One key element contributing to the addictiveness of social media is the concept of variable rewards. Algorithms strategically present a mix of content, varying in type and engagement level, to keep users interested in their feed. This unpredictability taps into the psychological principle of operant conditioning, where intermittent reinforcement, such as receiving likes, comments, or discovering new content, reinforces habitual platform use. Every time a user sees an entertaining post or receives a positive notification, the brain releases dopamine, the chemical most associated with addiction and addictive behaviors. The constant stream of notifications and updates, fueled by algorithmic insights and carefully tailored content suggestions, creates a sense of anticipation for the next dopamine hit, encouraging users to check and scan their feeds frequently for the next ‘reward’ on their timeline. The algorithmic, numbers-driven emphasis on engagement metrics, such as the number of likes, comments, and shares on a post, further intensifies the competitive and social nature of these platforms, promoting frequent use.
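As a rough illustration of what a variable-reward schedule looks like in practice, consider the toy sketch below. The probability value and labels are assumptions for the example only; no platform publishes such parameters.

```python
import random

# Toy variable-reward schedule (illustrative assumption, not any platform's real logic):
# highly engaging "reward" posts are interleaved unpredictably with ordinary ones,
# so the user never knows whether the next scroll will pay off.
def next_item(reward_probability: float = 0.3) -> str:
    return "high-engagement post" if random.random() < reward_probability else "ordinary post"

for _ in range(10):
    print(next_item())
```

The unpredictability itself is the hook: because the payoff cannot be anticipated, the only way to find out is to keep scrolling.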

Algorithms know you too well

Furthermore, algorithms continuously adapt to user behavior through real-time machine learning. As users engage with content, the algorithm analyzes those interactions and refines its predictions, ensuring that the content remains compelling and relevant over time. This iterative feedback loop deepens the platform’s understanding of each user, creating a specially curated and highly addictive feed the user can always turn to for a boost of dopamine. This heightened social aspect, coupled with the algorithm’s ability to surface content that resonates deeply with the user, strengthens the emotional connection users feel to the platform and to their feed, and keeps them coming back. Whether it is seeing a new, dopamine-producing post or posting a status that draws many likes and shares, every time a user opens a social media app there is seemingly endless new content, further reinforcing regular, and often unhealthy, use.
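A minimal sketch of that feedback loop, under the same illustrative assumptions as the earlier example, might simply nudge a stored interest score toward whatever the user just engaged with; the learning rate and update rule here are invented for the example.

```python
# Hypothetical online update: after each interaction, move the user's topic
# affinity toward the observed behavior (illustrative only).
LEARNING_RATE = 0.1

def update_affinity(affinity: dict[str, float], topic: str, engaged: bool) -> None:
    """Shift the stored interest score toward 1.0 on engagement, toward 0.0 otherwise."""
    current = affinity.get(topic, 0.5)
    target = 1.0 if engaged else 0.0
    affinity[topic] = current + LEARNING_RATE * (target - current)

affinity = {"sports": 0.5, "news": 0.5}
for topic, engaged in [("sports", True), ("sports", True), ("news", False)]:
    update_affinity(affinity, topic, engaged)

print(affinity)  # sports drifts upward, news drifts downward, and the next feed reflects it
```

Run repeatedly, updates like these are what make the feed feel as though it “knows” the user: every interaction becomes training data for the next recommendation.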

A fine line to tread

As explained above, social media algorithms are key to user engagement. They can provide a seemingly endless stream of personalized content and hold users’ undivided attention because they understand each user and that user’s content preferences. This pervasive influence extends to children, who are increasingly immersed in digital environments from an early age. Social media algorithms can offer constructive experiences for children by promoting educational content discovery, creativity, and social connectivity that would otherwise be impossible. Some platforms, like YouTube Kids, leverage algorithms to recommend age-appropriate content tailored to a child’s developmental stage. This personalized curation of interest-based content can enhance learning outcomes and produce a beneficial online experience for children. However, while exposure to age-appropriate content may not itself harm child viewers, it can still cause problems related to content addiction.

‘Protected Development’

Children are generally known to be naïve and impressionable, meaning full access to the internet can be harmful for their development, as they may take anything they see at face value. The American Psychological Association has said that, “[d]uring adolescent development, brain regions associated with the desire for attention, feedback, and reinforcement from peers become more sensitive. Meanwhile, the brain regions involved in self-control have not fully matured.” Social media algorithms play a pivotal role in shaping the content children can encounter by prioritizing engagement metrics such as likes, comments, and shares. In doing this, social media sites create an almost gamified experience that encourages frequent and prolonged use amongst children. Children also have a tendency to intensely fixate on certain activities, interests, or characters during their early development, further increasing the chances of being addicted to their feed.

Additionally, the addictive nature of social media algorithms poses significant risks to children’s physical and mental well-being. The constant stream of personalized content, notifications, and variable rewards can contribute to excessive screen time, impacting sleep patterns and physical health. Likewise, the competitive nature of engagement metrics may result in a sense of inadequacy or social pressure among young users, leading to issues such as cyberbullying, depression, low self-esteem, and anxiety.

Stop Addictive Feeds Exploitation (SAFE) for Kids

The New York legislature has spotted the anemic state of internet protection for children, identified the rising mental health issues relating to social media among youth, and announced its intention to pass laws that better protect kids online. The Stop Addictive Feeds Exploitation (SAFE) for Kids Act is aimed explicitly at social media companies and their feed-bolstering algorithms. The SAFE for Kids Act is intended to “protect the mental health of children from addictive feeds used by social media platforms, and from disrupted sleep due to night-time use of social media.”

Section 1501 of the Act would essentially prohibit operators of social media sites from serving addictive, algorithm-based feeds to minors without first obtaining parental permission. Instead, the default feed would be a chronologically sorted timeline, the kind common in the infancy of social media. Section 1502 would require social media platforms to obtain parental consent before sending notifications between the hours of 12:00 AM and 6:00 AM and would create an avenue for opting out of access to the platform during those same hours. The Act would also provide for a limit on the overall number of hours a minor can spend on a social media platform. Additionally, it would authorize the Office of the Attorney General to bring legal action to enjoin violations or seek damages or civil penalties of up to $5,000 per violation, and it would allow any parent or guardian of a covered minor to sue for damages of up to $5,000 per user per incident or actual damages, whichever is greater.

A sign of the times

The Act accurately represents the public’s growing concerns in its justification section, which details many of the problems with social media algorithms referenced above and the State’s role in curtailing their well-known negative effects on a protected class. The New York legislature has identified the problems that social media addiction can present and has taken necessary steps in an attempt to curtail them.

Social media algorithms will always play an intricate role in shaping user experiences. However, their addictive nature should rightfully subject them to scrutiny, especially regarding their effects on children. While social media algorithms offer personalized content and can produce constructive experiences, their addictive nature poses significant risks, prompting legislative responses like the Stop Addictive Feeds Exploitation (SAFE) for Kids Act. Considering the profound impact of these algorithms on young users’ physical and mental well-being, a critical question arises: how can we effectively balance the benefits of algorithm-driven engagement with the importance of protecting children from potential harm in the ever-evolving digital landscape? The SAFE for Kids Act is a step in the right direction, inspiring critical reflection on the broader responsibility of parents and regulatory bodies to cultivate a digital environment that nurtures healthy online experiences for the next generation.

 

Short & Not So Sweet

LIGHTS ON BUT NOBODY’S HOME

Short-form social media clips continue to worry parents worldwide because of the impact they may have on children’s brains, specifically on children’s attention spans. TikTok, the prime example of short-form video content, had nearly 1.5 billion monthly active users in the third quarter of 2022. The reason behind this sudden and drastic growth is that users on the platform want their entertainment short and sweet. Studies support the finding that TikTok has taken, and continues to take, over consumers’ time through its short-clip format.

Research indicates that the average user in the United States spends roughly 45.8 minutes per day on TikTok, more than on other video-driven social media such as Instagram, Facebook, and Twitter. Globally, viewing time approaches 53 minutes per day. Beyond TikTok, these short, vertical videos attract a much higher watch rate than longer, horizontal videos.

WRAP YOUR BRAIN AROUND THIS

Given that the brain’s prefrontal cortex, the part that accounts for decision-making and impulse control, does not fully develop until age 25, children worldwide struggle to regulate their consumption of these shorter videos. The issue isn’t confined to the United States; studies in countries like China show the same pattern. Researchers at China’s Zhejiang University, studying Douyin, China’s version of TikTok, found that brain regions associated with addiction were activated in student viewers, many of whom had difficulty stopping watching the clips.

TikTok and other short-form videos on social media appeal directly to this generation’s reluctance to sustain attention. In turn, such an influence can impair children’s ability to function in the real world. Dr. Michael Manos of Cleveland Clinic Children’s Center for Attention states, “If kids’ brains become accustomed to constant changes, the brain finds it difficult to adapt to a non-digital activity where things don’t move quite as fast.” Although this line of research is relatively new, it has long been understood that social media use negatively impacts academic performance. The impact of social media has created what society recognizes as an attention deficit, and the worldwide trend of short-form videos on social media and entertainment devices continues to impair cognitive function.

QUALITY OVER QUANTITY

Although many factors help explain how society reached its current state, one that shifts some of the blame away from the consumer is that consumer standards are high. With the rise of these new forms of social media, the industry has prioritized content that appeals to many different consumers and preferences, and consumers today have many more options in what they choose to watch.

Accordingly, consumers need not spend their attention on poor entertainment. Viewers instead sit in a competitive marketplace that pressures creators to find ways to capture attention and win viewership. Studies from the Technical University of Denmark have documented a substantial decrease in collective attention span driven by the “increasing production and consumption of content.”

TOO LONG IS WRONG

The reason society has become fascinated with this new form of entertainment is simple: short videos deliver the same emotional payoff in a much shorter period. The same behavior appears outside of social media. Students who watch recorded lectures tend to speed them up to get through the material faster. Movies engineered to be watched in two hours have strayed from older films whose directors told their stories across five-hour screen times. Even certain songs have seen a rapid drop in the number of lyrics, ensuring the content fits shorter attention spans.

What is the reason behind this new obsession among the youth? Satisfaction. Studies show that almost 50% of users surveyed by TikTok said that videos lasting longer than a minute became “stressful.” Such findings point to the painful truth that individual attention spans are minimal compared to life before social media and short media clips. Thus, creators of entertainment accommodate this concern not by attempting a remedy but by adjusting to the current appetite for short-form viewing.

By appealing to this now-recognized craving for quick satisfaction, many entertainment creators have fueled the addiction rather than building content that pulls audiences away from the lure of short-form videos. Is this a wise business decision? Absolutely. A market of children primed for addiction craves this type of entertainment. However, with any addiction, consequences don’t linger too far behind.

ADDICTION TURNS INTO BRAINWASH

The question remains: what does this new obsession have to do with the law? It’s not illegal to provide children with entertainment, however bad its effects on generations of children may be. The problem is the allegation that apps such as TikTok make no effort to ensure the platform is safe for children and teens. Between the inability to monitor content and the addictive design discussed above, the outcome has proved catastrophic for youth across the globe. Not only is children’s mental capacity in jeopardy, but their physical well-being also faces the consequences of the addictive nature of short-form videos.

The Social Media Victims Law Group has filed a lawsuit, Case Number 22STCV21355, Smith et al. v. TikTok Inc., on behalf of the parents of two young girls who died after attempting one of the short-clip challenges trending on the platform at the time: the dangerous sensation known as the “blackout challenge.”

The blackout challenge involves using household objects to strangle oneself to the point of losing consciousness. One of the victims, Lalani Erika Renee Walton, started watching TikTok at age 8. As described above, the addictive nature of these short clips took control; before long, Lalani was hooked on watching the videos and attempting to duplicate them. On July 15, 2021, after attempting to reproduce the trending challenge, Lalani died hanging from her bed with a rope around her neck.

Similarly, Arriani Jaileen Arroyo, a seven-year-old girl, downloaded TikTok not long after receiving her first phone. Within two years, she too became addicted to the frenzy of short-clip social media. Eventually the trending short-clip fad was this very same blackout challenge, and on February 26, 2021, Arriani died hanging from her family’s dog leash fastened to her bedroom door, all because of the opportunity to replicate videos that jeopardize viewers’ mental and physical health.

WHY IS THIS TIKTOK’S PROBLEM?

The question then becomes: why is this horrific trend TikTok’s problem? Section 230 of the Communications Decency Act provides that social media platforms are not responsible for content that others post; platforms instead moderate as they deem necessary and appropriate. Section 230 was intended to foster diversity of speech and opportunities for cultural and intellectual development online.

As casualties mount, many argue that the dangerous challenge content associated with and spread through the platform extends beyond the confines of Section 230’s protections. The “blackout challenge” is only one of many examples of harmful content spread and mimicked by others; others include the Benadryl challenge (hallucinogenic effects) and the salt and ice challenge (chemical burns on the skin).

Although Section 230 has protected, and continues to protect, platforms from suffering the consequences of others’ conduct, its protection is not absolute. The act’s protections do not extend to companies that themselves create illegal or harmful content. And although TikTok may not have made the content, that does not end the inquiry into its exposure notwithstanding Section 230.

In the ongoing lawsuits, it is not the users’ actions that are under review but TikTok’s own. In addition to offering no protection for creating harmful content, Section 230 is no defense to a failure to warn users. The lawsuit emphasizes TikTok’s failure to warn parents and users of foreseeable risks connected with the product. Specifically, no ordinary and reasonable person would presume that this kind of entertainment, which markets itself to teens and young children, poses these dangers, including the potency of its addictive qualities and its tendency to drive excessive screen time.

There are also arguments that the design of TikTok’s platform itself is flawed. The alleged design defects include creating an addictive product and failing to verify the ages and identities of minor users. The Children’s Online Privacy Protection Act, for its part, prohibits platforms from collecting personal data from children under age 13 without parental consent.

To blunt the legal repercussions of its viewers’ actions, TikTok has attempted to improve its safety and warning features and to give a clearer picture of the content shared on its platform. TikTok has altered its safety features and offered parents ways to monitor use: its Family Pairing feature links a parent’s account to their child’s and lets the parent limit how much time the child can spend in the app. A separate area of TikTok is reserved for children 13 and under and primarily shows child-safe content.

A VIDEO A DAY KEEPS THE HARM IN PLAY

Everything ties back to the appeal that made TikTok addictive in the first place: its short-clip entertainment. That addiction has impaired children’s cognitive abilities and has even cost lives. The concern stretches far beyond children’s attention spans; through mimicry, the trend has pushed addictive behavior to a whole new extreme.

This cannot continue in the direction it has taken so far. The platform’s modifications, although an attempt at betterment, have failed to prevent irreparable damage. This area of concern needs to be addressed before society loses not only its ability to think but its ability to act as well. How many lives must be lost before TikTok takes affirmative action?
