Regulating the Scroll: How Lawmakers Are Redefining Social Media for Minors

In today’s digital world, the question is no longer if minors use social media but how they use it. 

Social media platforms don’t just host young users; they shape their experiences through algorithmic feeds and “addictive” design features that keep kids scrolling long after bedtime. As the mental health toll becomes increasingly clear, lawmakers are stepping in to limit how much control these platforms have over young minds.

What is an “addictive” feed and why target it? 

Algorithms don’t just show content; they promote it. By tracking what users click, watch, or like, these feeds are designed to keep attention flowing. For minors, that means endless scrolling and constant engagement, often at the expense of sleep, focus, and self-esteem.

Under New York’s Stop Addictive Feeds Exploitation (SAFE) for Kids Act, lawmakers found that:

 “social media companies have created feeds designed to keep minors scrolling for dangerously long periods of time.”

The Act defines an “addictive feed” as one that recommends or prioritizes content based on data linked to the user or their device.

The harms aren’t hypothetical. Studies link heavy social media use among teens with higher rates of depression, anxiety, and sleep disruption. Platforms often push notifications late at night or during school hours, times when young users are most vulnerable.

Features like autoplay, “For You” pages, endless “you may also like” suggestions, and quick likes or comments can trap kids in an endless scroll. What begins as fun, harmless entertainment soon becomes a routine they struggle to escape.

 

Key Developments in Legislation 

It’s no surprise that minors’ exposure to social media algorithms sits at the center of today’s policy debates. Over the past two years, state and federal lawmakers have introduced laws seeking to rein in the “addictive” design features of online platforms. While many of these measures face ongoing rulemaking or constitutional challenges, together they signal a national shift toward stronger regulation of social media’s impact on youth.

Let’s take a closer look at some of the major legal developments shaping this issue.

New York’s SAFE for Kids Act

New York’s Stop Addictive Feeds Exploitation (SAFE) for Kids Act represents one of the nation’s most ambitious efforts to regulate algorithmic feeds. The law prohibits platforms from providing “addictive feeds” to users under 18 unless the platform obtains verifiable parental consent or reasonably determines that the user is not a minor. It also bans push notifications and advertisements tied to those feeds between 12 a.m. and 6 a.m. unless parents explicitly consent. The rulemaking process remains ongoing, and enforcement will likely begin once these standards are finalized.

The Kids Off Social Media Act (KOSMA)

At the federal level, the Kids Off Social Media Act (KOSMA) seeks to create national baselines for youth protections online. Reintroduced in Congress, the bill would:

  • Ban social media accounts for children under 13.
  • Prohibit algorithmic recommendation systems for users under 17.
  • Restrict social media access in schools during instructional hours.

Supporters argue the bill is necessary to counteract the addictive nature of social media design. Critics, including digital rights advocates, question whether such sweeping restrictions could survive First Amendment scrutiny or prove enforceable at scale. 

KOSMA remains pending in Congress but continues to shape the national conversation about youth and online safety.

California’s SB 976 

California’s Protecting Our Kids from Social Media Addiction Act (SB 976) reflects a growing trend of regulating design features rather than content. The law requires platforms to:

  • Obtain parental consent before delivering addictive feeds to minors.
  • Mute notifications for minors between midnight and 6 a.m. and during school hours unless parents opt in.

The statute is currently under legal challenge for potential First Amendment violations; however, the Ninth Circuit allowed enforcement of key provisions to proceed, suggesting that narrowly tailored design regulations aimed at protecting minors may survive early constitutional scrutiny.

Other State Efforts

Other states are following suit. According to the National Conference of State Legislatures (NCSL), at least 13 states have passed or proposed laws requiring age verification, parental consent, or restrictions on algorithmic recommendations for minors. Mississippi’s HB 1126, for example, requires both age verification and parental consent, and the U.S. Supreme Court allowed the law to remain in effect while litigation continues. 

Final Thoughts

We are at a pivotal moment. The era when children’s digital consumption went largely unregulated is coming to an end. The question now isn’t if regulation is on the horizon; it’s how it will take shape, and whether it can strike the right balance between safety, free expression, and innovation.

As lawmakers, parents, and platforms navigate this evolving landscape, one challenge remains constant: ensuring that efforts to protect minors from harmful algorithmic design do not come at the expense of their ability to connect, learn, and express themselves online.

What do you think is the right balance between protecting minors from harmful algorithmic exposure and preserving their access to social media as a space for connection and expression?

From Cute to Concerning: The Legal and Emotional Costs of Sharenting

After a long day at work, most people now sit down for a nice relaxing…scroll. That’s right: most people have social media and enjoy going through the latest posts to wind down or pass the time. Whether it’s on Instagram, Facebook, or TikTok, someone is looking at a post made by a parent displaying their child doing something adorable or funny, documenting a family trip, or marking a milestone like the first day of school. What seems like an innocent post can be something much darker.

What is Sharenting?

As social media gained traction in recent years, so did sharenting. Sharenting is when a parent overshares or excessively posts information, pictures, stories, or updates about their child’s life.

A proud parent could post the smiling face of their child at a sporting event on a private account, thinking only family and friends will see it. Some parents even post daily vlogs involving their children, making money by filming their day-to-day lives for strangers. Most parents engage in sharenting because they are proud of their children. Some want to build a digital archive or connect with loved ones; others are trying to build camaraderie with fellow parents, or even to help others. Most parents do this with the purest motives in mind; however, their content is not always received as it is intended.

The Risks of Sharenting

Legal Risks

As established in Troxel v. Granville, parents have a fundamental right to raise their children as they see fit. This includes decisions about education, religion, and even social media. Parents have a First Amendment right to speech just as much as a child does when it comes to posting online, and that right protects their posting of videos and pictures of their children; however, it is not unlimited. Restrictions apply in certain circumstances, such as child exploitation laws or other compelling state interests.

Children also have a right to privacy that conflicts with their parents’ First Amendment right of speech and expression in the context of posting them online. The Children’s Online Privacy Protection Act (COPPA) established significant protections for children’s online privacy. COPPA imposes certain requirements on operators of websites or online services directed to children under 13 years of age, and on operators of other websites or online services that have actual knowledge that they are collecting personal information online from a child under 13 years of age.

COPPA, however, only protects children’s data, not the children themselves, from the risks of being online.

Psychological Risks

In addition to the legal risks of sharenting, there are also many psychological risks. What happens when a parent posts that one picture that comes back to haunt their child later on? These videos and images can be used by other students to bully the child down the road. Children can also have a harder time developing their own image and identity when their parents prescribe them an online persona through their posts.

Even with pure motives, a survey of parents discussed by Dr. Albers of the Cleveland Clinic found that:

74% of parents using social media knew another parent engaging in sharenting behavior.

56% said the parents shared embarrassing information about their kid.

51% said the parent provided details that revealed their child’s location.

27% said the parent circulated inappropriate photos.

These posts, once made, are out there forever, and their impact can be detrimental to a child’s mental health. Social media, according to the Mayo Clinic, already amplifies adolescents’ anxiety and depression; parents can add to this by sharenting.

Other Risks

These seemingly innocent posts can often create greater risks for children than most parents realize. Beyond the negative psychological impacts, sharenting can endanger a child’s physical safety as well. Sharenting is a window directly into a child’s life, one which a predator can abuse. Images can be taken from a parent’s account and shared to sites for pedophiles.

These images can also enable identity theft, harassment, bullying, exploitation, and even violence.

Parents who have become famous from posting their kids, like the Labrant Family and the Fishers, have increased their kids’ risk of being subject to one of these crimes by constantly posting them online.

Sharenting can blur the line between a fun post and advertising your child to strangers, in extreme situations creating dangerous environments for internet-famous children.

Parents are also building their child’s digital identity, which could affect the child’s future educational and employment prospects. It could also lead to embarrassment over content that was shared and can never be fully removed.

How Can Parents Protect Their Kids?

As social media continues to grow and be a part of our daily lives, parents can take action to protect their children going forward. One way parents can do this is by blurring or covering their child’s face with an emoji. Parents can still have the excitement of posting their child’s achievements or milestones without exposing their identity to the internet.

Parents can think before they post.

If you’re trying to decide whether a post counts as sharenting, ask yourself these questions:

What’s the content?

Why am I posting it?

Who’s my intended audience? Have I set my permissions accordingly?

Is my child old enough to understand the concept of a digital footprint? If they are, did I ask their consent? If not, do I think they’d be happy to see this online when they’re older?

Sharenting is not going to stop, but it can evolve to be done in a way that protects a parent’s right to post and their child’s safety.

 

Don’t Talk to Strangers! But if it’s Online, it’s Okay?

It is 2010.  You are in middle school and your parents let your best friend come over on a Friday night.  You gossip, talk about crushes, and go on all social media sites.  You decide to try the latest one, Omegle.  You automatically get paired with a stranger to talk to and video chat with.  You speak to a few random people, and then, with the next click, a stranger’s genitalia are on your screen.

Stranger Danger

Omegle is a free video-chatting social media platform.  Its primary function has become meeting new people and arranging “online sexual rendezvous.”  Registration is not required.  Omegle randomly pairs users for one-on-one video sessions.  These sessions are anonymous, and you can skip to a new person at any time.  Although there is a large warning on the home screen saying “you must be 18 or older to use Omegle”, no parental controls are available through the platform.  Should you want to install any parental controls, you must use a separate commercial program.

While the platform’s community guidelines illustrate the “dos and don’ts” of the site, it seems questionable that the platform can monitor millions of users, especially when users are not required to sign up, or to agree to any of Omegle’s terms and conditions.  It, therefore, seems that this site could harbor online predators, raising quite a few issues.

One recent case surrounding Omegle involved a pre-teen who was sexually abused, harassed, and blackmailed into sending a sexual predator obscene content.  In A.M. v. Omegle.com LLC, the open nature of Omegle matched an 11-year-old girl with a sexual predator in his late thirties.  Exploiting her vulnerability, he forced the 11-year-old to send pornographic images and videos of herself, perform for him and other predators, and recruit other minors.  The predator was able to continue this horrific abuse for three years by threatening to release the videos, pictures, and additional content publicly.  The 11-year-old plaintiff sued Omegle on two general claims of platform liability implicating Section 230, but only one claim was able to break through the law.

Unlimited Immunity Cards!

Under 47 U.S.C. § 230 (Section 230), social media platforms are immune from liability for content posted by third parties.  As part of the Communications Decency Act of 1996, Section 230 provides almost full protection against lawsuits for social media companies, since no platform is treated as a publisher or speaker of user-generated content posted on the site.  Section 230 has gone so far as to shield Google and Twitter from liability for claims that their platforms were used to aid terrorist activities.  In May of 2023, these cases reached the Supreme Court.  Although the Court declined to rule on the merits of the Google case, it ruled on the Twitter case.  Google was not held liable for the claim that it stimulated the growth of ISIS through targeted recommendations and inspired an attack that killed an American student.  Twitter was immune from the claim that the platform aided and abetted a terrorist group in raising funds and recruiting members for a terrorist attack.

Wiping the Slate

In February of 2023, the District Court of Oregon, Portland Division, found that Section 230 immunity did not apply to Omegle on a products liability claim, allowing the platform to be held liable for the predatory actions committed by a third party on the site.  By side-stepping the third-party speech issue that comes with Section 230 immunity for an online publisher, the district court found Omegle answerable under the plaintiff’s products liability claim, which targeted the platform’s defective design, defective warning, negligent design, and failure to warn.

Three prongs must be satisfied for a platform to be shielded from liability under Section 230:

  1. A provider of an interactive site,
  2. Who is sought to be treated as a publisher or speaker, and
  3. For information provided by a third party.

It is clear that Omegle is an interactive site that fits the definition provided by Section 230.  The issue then falls on the second and third prongs: whether the cause of action treated Omegle as the speaker of third-party content.  The sole function of randomly pairing strangers creates the foreseeable danger of pairing a minor with an adult. As shown in the present case, “the function occurs before the content occurs.” Because the platform was designed negligently and with knowing disregard for the possibility of harm, the court ultimately concluded that liability for the platform’s function does not pertain to third-party published content and that the claim targeted specific functions rather than users’ speech on the platform.  Section 230 immunity did not apply to this first claim, and Omegle could be held liable.

Not MY Speech

The plaintiff’s other claim implicating Section 230 immunity was that Omegle negligently failed to apply reasonable precautions to provide a safe platform.  There was a foreseeable risk of harm in marketing the service to children and adults and randomly pairing them.  Unlike the products liability claim, the negligence claim was twofold: the function of matching people and the publishing of their communications to each other, both of which fall directly into Section 230’s immunity domain.  The Oregon District Court drew a distinct line between the two claims: although Omegle was immune under Section 230 on the negligence claim, it remained exposed on the products liability claim.

If You Cannot Get In Through the Front Door, Try the Back Door!

For almost 30 years, social media platforms have been nearly immune from liability under Section 230.  In the last few years, with the growth of technology on these platforms, judges have been finding ways to hold companies liable despite the law.  A.M. v. Omegle has only moved through the district court level.  If appealed, it will be an interesting case to follow, to see whether the ruling will stand or be overruled in light of the other cases that have been decided.

How do you think a higher court will rule on issues like these?

Social Media: A Pedophile’s Digital Playground

A few years ago, as I got lost in the wormhole that is YouTube, I stumbled upon a family channel, “The ACE Family.” They had posted a cute video where the mom was playing a prank on her boyfriend by dressing their infant daughter in a tiny crochet bikini. I didn’t think much of it at the time, as it seemed innocent and cute, but then I pondered it further. I stumbled on this video without any malicious intent, but how easy would it be for someone deliberately seeking out content like this with far more disturbing intent?

When you Google “social media child pornography,” you get many articles from 2019. That year, a YouTuber using the name “MattsWhatItIs” posted a video titled “YouTube is Facilitating the Sexual Exploitation of Children, and it’s Being Monetized (2019)”; the video has 4,305,097 views to date and has not been removed from the platform. Its author discusses a potential child pornography ring on YouTube that was being facilitated by a glitch in the algorithm. He demonstrates how, with a brand-new account on a VPN, all it takes is two clicks to end up in this ring. The search started with a “bikini haul” query. After two clicks in the recommended videos section, he stumbles upon an innocent-looking homemade video, then scrolls down to the comments to expose the dark side: multiple random accounts comment timestamps, and those timestamps link to moments in the video where the children are in compromising, sexually suggestive positions. The most disturbing part is that the algorithm glitches once you enter the wormhole, and you get stuck on these “child pornography” videos. Following the vast attention this video received, YouTube created an algorithm that is supposed to catch this predatory behavior; at the time the video was posted, it did not seem to be doing much.

YouTube has since implemented a “Child Safety Policy,” which details the types of content the platform aims to restrict and includes recommended steps for parents or agents posting content in which children are the focus. “To protect minors on YouTube, content that doesn’t violate our policies but features children may have some features disabled at both the channel and video level. These features may include:

  • Comments
  • Live chat
  • Live streaming
  • Video recommendations (how and when your video is recommended)
  • Community posts”

Today, when you look up news on this topic, you don’t find much. There are forums exposing the many methods these predators use to get around the algorithms platforms set up to detect their activity. Many predators leave links to child pornography in the comments section of specific videos. Others use generic terms with the initials “C.P.,” a common abbreviation for “child pornography,” and codes like “caldo de pollo,” which means “chicken soup” in Spanish. Many dedicated and concerned parents have formed online communities that scour the Internet for this disgusting content and report it to social media platforms. But if volunteer communities can scan the Internet for this activity and report it, why haven’t social media platforms created departments dedicated to the issue? Most technology companies use automated tools to detect images and videos that law enforcement has already categorized as child sexual exploitation material. Still, they struggle to identify new, previously unknown material and rely heavily on user reports.

The Child Rescue Coalition has created the “Child Protection System” software. This tool has a growing database of more than a million hashed images and videos, which it uses to find computers that have downloaded them. The software can track I.P. addresses, which can be shared by people connected to the same Wi-Fi network, as well as individual devices. According to the Child Rescue Coalition, the system can follow devices even if the owners move or use virtual private networks (VPNs) to mask their I.P. addresses. Last year the Coalition expressed interest in partnering with social media platforms to combine resources to crack down on child pornography. Unfortunately, some oppose this, as it would give social media companies access to an unregulated database of suspicious I.P. addresses. Thankfully, many law enforcement departments have partnered up and used this software, and as the president of the Child Rescue Coalition said: “Our system is not open-and-shut evidence of a case. It’s for probable cause.”
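To make the idea of a hash database concrete, here is a minimal, purely illustrative sketch of hash-based matching. The Child Protection System’s actual implementation is proprietary, and production tools often rely on perceptual hashes (such as PhotoDNA) rather than exact digests; this example uses SHA-256 over file bytes only to show the general matching concept.

```python
import hashlib
from pathlib import Path


def sha256_of_file(path: Path) -> str:
    """Return the hex SHA-256 digest of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def find_known_matches(paths, known_hashes):
    """Return the files whose digests appear in a set of known hashes.

    `known_hashes` stands in for a vetted database of digests of
    previously identified material; any file with an exact byte-for-byte
    match will be flagged.
    """
    return [p for p in paths if sha256_of_file(p) in known_hashes]
```

Note that an exact-digest scheme like this misses any re-encoded or cropped copy of an image, which is why real systems layer perceptual hashing on top of it.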

The United States Department of Justice has created a “Citizen’s Guide to U.S. Federal Law on Child Pornography.” The first line on this page reads, “Images of child pornography are not protected under First Amendment rights, and are illegal contraband under federal law.” Federal jurisdiction commonly applies when a child pornography offense occurs in interstate or foreign commerce, and in today’s digital era, federal law almost always applies when the Internet is used to commit such offenses. The United States has implemented multiple laws that define child pornography and what constitutes a crime related to it.

Whose job is it to protect children from these predators? Should social media have to regulate this? Should parents be held responsible for contributing to the distribution of these media?

 

“Unfortunately, we’ve also seen a historic rise in the distribution of child pornography, in the number of images being shared online, and in the level of violence associated with child exploitation and sexual abuse crimes. Tragically, the only place we’ve seen a decrease is in the age of victims.

 This is – quite simply – unacceptable.”

-Attorney General Eric Holder Jr. speaks at the National Strategy Conference on Combating Child Exploitation in San Jose, California, May 19, 2011.

Can Social Media Be Regulated?

In 1996, Congress passed what is known as Section 230 of the Communications Decency Act (CDA), which provides immunity to website publishers for third-party content posted on their websites. The CDA holds that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This Act, passed in 1996, was created in a different era, one that could hardly envision how fast the internet would grow in the coming years. In 1996, social media consisted of a little-known website called Bolt, and the idea of a global World Wide Web was still very much in its infancy. The internet was still largely based on dial-up technology, and the government was looking to expand its reach. This Act laid the foundation for the explosion of Social Media, e-commerce, and a society that has grown tethered to the internet.

The advent of smartphones in the late 2000s, coupled with the CDA, set the stage for a society that is constantly tethered to the internet and allowed companies like Facebook, Twitter, YouTube, and Amazon to carve out niches within our now globally integrated society. Facebook alone averaged over 1.9 billion daily users in the second quarter of 2021.

Recent studies conducted by the Pew Research Center show that “[m]ore than eight in ten Americans get news from digital services.”


While older members of society still rely on news media online, the younger generation, namely those 18-29 years of age, receive their news via social media.


The role Social Media plays in the lives of the younger generation needs to be recognized. Social Media has grown at a far greater rate than anyone could have imagined. Currently, Social Media operates under its own modus operandi, completely free of government interference, due to its classification as a private entity and its protection under Section 230.

Throughout the 20th century when Television News Media dominated the scenes, laws were put into effect to ensure that television and radio broadcasters would be monitored by both the courts and government regulatory commissions. For example, “[t]o maintain a license, stations are required to meet a number of criteria. The equal-time rule, for instance, states that registered candidates running for office must be given equal opportunities for airtime and advertisements at non-cable television and radio stations beginning forty-five days before a primary election and sixty days before a general election.”

What these laws and regulations were put in place for was to ensure that the public interest in broadcasting was protected. To give substance to the public interest standard, Congress has from time to time enacted requirements for what constitutes the public interest in broadcasting. But Congress also gave the FCC broad discretion to formulate and revise the meaning of broadcasters’ public interest obligations as circumstances changed.

The Federal Communications Commission’s (FCC) authority is constrained by the First Amendment, but the agency acts as an intermediary that can intervene to correct perceived inadequacies in overall industry performance, though it cannot trample on the broad editorial discretion of licensees. The Supreme Court has continuously upheld the public trustee model of broadcast regulation as constitutional. Criticisms of regulating social media center on the notion that these companies are purely private entities that do not fall under the purview of the government, and yet these same issues presented themselves in the precedent-setting case of Red Lion Broadcasting Co. v. Federal Communications Commission (1969). In that case, the Court held that the “rights of the listeners to information should prevail over those of the broadcasters.” The Court’s holding centered on the public’s right to information over the right of a broadcast company to choose what it will share, and this is exactly what is at issue today when companies such as Facebook, Twitter, and Snapchat censor political figures who post views they feel may incite anger or violence.

In essence, what these organizations are doing is keeping information and views from the attention of the present-day viewer. The vessel for the information has changed; it is no longer television or radio but primarily social media. Currently, television and broadcast media are restricted by Section 315(a) of the Communications Act and Section 73.1941 of the Commission’s rules, which “require that if a station allows a legally qualified candidate for any public office to use its facilities (i.e., make a positive identifiable appearance on the air for at least four seconds), it must give equal opportunities to all other candidates for that office to also use the station.” No such restriction exists for Social Media organizations.

This is not meant to argue for one side or the other but merely to point out that there is political discourse being stifled by these social media entities, which have shrouded themselves in the veil of a private entity. What these companies fail to mention, however, is just how political they truly are. For instance, Facebook proclaims itself to be an unbiased source for all parties, yet it currently employs one of the largest lobbyist groups in Washington, D.C. Four Facebook lobbyists have worked directly in the office of House Speaker Pelosi, and Pelosi herself has a very direct connection to Facebook: she and her husband own between $550,000 and over $1,000,000 in Facebook stock. None of this is illegal; however, it raises the question of just how unbiased Facebook really is.

If the largest source of news for the coming generation is not television, radio, or news publications, but rather Social Media platforms such as Facebook, then how much power should they be allowed to wield without some form of regulation? The question presented here is not a new one, but the same question asked in 1969, simply phrased differently: how much information is a citizen entitled to, and at what point does access to that information outweigh the rights of the organization to exercise its editorial discretion? I believe the answer is the same now as it was in 1969, and that the government ought to take steps similar to those taken with radio and television. That means ensuring that, through Social Media, the public has access to a significant amount of information on public issues so that its members can make rational political decisions. At the end of the day, that is what is at stake: the public’s ability to make rational political decisions.

These large Social Media conglomerates such as Facebook and Twitter have long outgrown their place as private entities; they have grown into a public medium tethered to the realities of billions of people. Certain aspects of this medium need to be regulated, mainly those that interfere with the public interest, and there are ways to do so without infringing the overall First Amendment right of free speech for all Americans. Where Social Media blends being a private forum for all people to express their ideas under firmly stated “terms and conditions” with being an entity that strays into the political field, whether by censoring heads of state or by hiring over $50,000,000 worth of lobbyists in Washington, D.C., some regulations need to be put in place to draw the line that ensures the public still maintains the ability to make rational political decisions, decisions that are not influenced by any one organization. The time to address this issue is now, while there is still a middle ground in how people receive their news and formulate opinions.

Government seems to be taking bigger steps toward regulating social media

Privacy is finally catching the real attention of the government.  In a move aimed at keeping our social media traffic private, the FTC is urging social media companies to include a do-not-track feature in their software and apps.  A NYTimes article, available at http://tinyurl.com/algljc8, discusses the very real concerns of government officials and highlights a recent FTC fine of $800,000 issued against the neophyte social networking app Path for violating federal regulations against collecting personal information on underage users.  While the move seems like a good one, it also smacks of a little too much government regulation, even for this seemingly staunch anti-libertarian.

Kids’ Facebook “depression”

I don’t know whether I should laugh at this article as ridiculous or be shocked and paranoid for the future of America.  The article discusses the responsibility of doctors to discuss the Facebook effect with their patients.  Has social media infiltrated society so much that a discussion regarding its influence will now be included along with an eye exam and height measurement?  What does this say about us?

 

 
