Privacy Please: Privacy Law, Social Media Regulation and the Evolving Privacy Landscape in the US

Social media regulation is a touchy subject in the United States. Congress and the White House have proposed, advocated for, and voted on various bills aimed at protecting people from data misuse and misappropriation, misinformation, harms suffered by children, and the implications of vast data collection. Some of the most potent concerns about social media stem from the platforms' use and misuse of information: the method of collection, the notice given of collection, and the use of the collected information. Efforts to pass a bill regulating social media have been frustrated, primarily by the First Amendment right to free speech, and Congress has thus far failed to enact meaningful regulation of social media platforms.

The way forward may well be through privacy law. Privacy laws give people some right to control their own personhood, including their data, their right to be left alone, and how and when others may see and view them. Privacy law originated in its current form in the late 1800s, with the impetus being one's freedom from constant surveillance by paparazzi and reporters and the right to control one's own personal information. As technology mutated, our understanding of privacy rights grew to encompass rights in our likeness, our reputation, and our data. Current US privacy laws do not directly address social media, and a struggle is currently playing out among the vast data collection practices of the platforms, immunity for platforms under Section 230, and private rights of privacy for users.

There is very little Federal privacy law, and that which does exist is narrowly tailored to specific purposes and circumstances in the form of specific bills. Some states have enacted their own privacy law schemes, with California at the forefront and Virginia, Colorado, Connecticut, and Utah following in its footsteps. In the absence of a comprehensive Federal scheme, privacy law is often judge-made, and it offers several private rights of action for a person whose right to be left alone has been invaded in some way. These are tort actions available for one person to bring against another for a violation of their right to privacy.

Privacy Law Introduction

Privacy law policy in the United States is premised on three fundamental personal rights to privacy:

  1. Physical privacy – the right to control access to your physical self and spaces, free from intrusion or surveillance
  2. Privacy of decisions – such as decisions about sexuality, health, and child-rearing. These are the constitutional rights to privacy, typically concerned not with information but with an act that flows from the decision
  3. Proprietary privacy – the ability to protect your information from being misused by others in a proprietary sense

Privacy Torts

Privacy law, as it concerns the individual, gives rise to four separate tort causes of action for invasion of privacy:

  1. Intrusion upon Seclusion – One intentionally intrudes, physically or otherwise, upon the reasonable expectation of seclusion of another, and the intrusion is objectively highly offensive.
  2. Publication of Private Facts – One gives publicity to a matter concerning the private life of another that is not of legitimate concern to the public, and the matter publicized would be objectively highly offensive. The First Amendment provides a strong defense for publication of truthful matters when they are considered newsworthy.
  3. False Light – One gives publicity to a matter concerning another that places the other before the public in a false light, where the false light would be objectively highly offensive and the actor had knowledge of, or acted in reckless disregard as to, the falsity of the publicized matter and the false light in which the other would be placed.
  4. Appropriation of Name and Likeness – One appropriates another's name or likeness for the defendant's own use or benefit. There is no appropriation when a person's picture is used to illustrate a non-commercial, newsworthy article. The appropriation is usually commercial in nature but need not be, and it may be of "identity" more broadly: not just the name, but the reputation, prestige, social or commercial standing, public interest, or other value in the plaintiff's likeness.

These private rights of action are currently unavailable for use against social media platforms because of Section 230 of the Communications Decency Act, which provides broad immunity to online providers for posts on their platforms. Section 230 prevents any of the privacy torts from being raised against the platforms themselves.

The Federal Trade Commission (FTC) and Social Media

Privacy law can implicate social media platforms when their practices become unfair or deceptive to the public, through investigation by the Federal Trade Commission (FTC). The FTC is the only federal agency with both consumer protection and competition jurisdiction in broad sectors of the economy, and it investigates business practices that are unfair or deceptive. The FTC Act, 15 U.S.C. § 45, prohibits "unfair or deceptive acts or practices in or affecting commerce" and grants the FTC broad jurisdiction over the privacy practices of businesses. A trade practice is unfair if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers themselves and is not outweighed by countervailing benefits to consumers or competition. A deceptive act or practice is a material representation, omission, or practice that is likely to mislead a consumer acting reasonably in the circumstances, to the consumer's detriment.

Critically, there is no private right of action under the FTC Act. The FTC cannot impose fines for initial Section 5 violations but can seek injunctive relief. By design, the FTC has very limited rulemaking authority, and it looks to consent decrees and procedural, long-lasting relief as an ideal remedy. The FTC pursues several types of misleading or deceptive policies and practices that implicate social media platforms: notice and choice paradigms, broken promises, retroactive policy changes, inadequate notice, and inadequate security measures. Its primary objective is to negotiate a settlement in which the company submits to certain measures of control or oversight by the FTC for a certain period of time. Violations of these agreements can yield additional consequences, including steep fines and vulnerability to class action lawsuits.

Relating to social media platforms, the FTC has investigated misleading terms and conditions and violations of platforms' own policies. In In re Snapchat, the platform claimed that users' posted information disappeared completely after a certain period of time; however, through third-party apps and manipulation of users' posts off of the platform, posts could be retained. The FTC and Snapchat settled through a consent decree subjecting Snapchat to FTC oversight for 20 years.

The FTC has also investigated Facebook for violating its own privacy policy. To settle FTC charges that it violated a 2012 agreement with the agency, Facebook was ordered to pay a $5 billion penalty and to submit to new restrictions and a modified corporate structure designed to hold the company accountable for the decisions it makes about its users' privacy.

Unfortunately, none of these measures directly gives individuals more power over their own privacy. Nor do these policies and processes give individuals any right to hold platforms responsible for misleading them through algorithms built on their data, or for intruding on their privacy by collecting data without allowing an opt-out.

Some of the most harmful social media practices today relate to personal privacy. Examples include the collection of personal data; the selling and dissemination of data through algorithms designed to subtly manipulate our pocketbooks and tastes; the collection and use of data belonging to children; and the design of social media sites to be ever more addictive, all in service of the goal of commercializing data.

No comprehensive Federal privacy scheme currently exists. Previous bills on privacy have been few and narrowly tailored to relatively specific circumstances and topics: healthcare and medical data protection under HIPAA, protection of data surrounding video rentals under the Video Privacy Protection Act, and narrow protection for children's data under the Children's Online Privacy Protection Act (COPPA). All of these schemes are outdated and fall short of meeting the immediate need for broad protection of the widely collected and broadly utilized data flowing from social media.

Current Bills on Privacy

Upon request from some of the biggest platforms, outcry from the public, and the White House's call for Federal privacy regulation, Congress appears poised to act. The 118th Congress has made privacy law a priority this term by introducing several bills related to social media privacy. At least ten bills are currently pending between the House and the Senate, addressing issues ranging from children's data privacy to a minimum age for use and the designation of a new agency to monitor some aspects of privacy.

S. 744 – The Data Care Act of 2023 aims to protect social media users' data privacy by imposing fiduciary duties on the platforms. The original iteration of the bill was introduced in 2021 and failed to receive a vote; it was re-introduced in March of 2023 and is currently pending. Under the act, social media platforms would have duties to reasonably secure users' data from unauthorized access, to refrain from using the data in a way that could foreseeably "benefit the online service provider to the detriment of the end user," and to prevent disclosure of users' data unless the receiving party is also bound by these duties. The bill authorizes the FTC and certain state officials to take enforcement actions upon breach of those duties: states would be permitted to take their own legal action against companies for privacy violations, and the FTC could intervene in those enforcement efforts by imposing fines for violations.

H.R. 2701 – Perhaps the most comprehensive piece of legislation on the House floor is the Online Privacy Act. In 2023, the bill was reintroduced by Democratic Representative Anna Eshoo after an earlier version of the bill failed to receive a vote and died in Congress. The Online Privacy Act aims to protect users by providing individuals rights relating to the privacy of their personal information, and it would impose privacy and security requirements on the treatment of that information. To accomplish this, the bill establishes a new agency, the Digital Privacy Agency, which would be responsible for enforcing those rights and requirements. The new individual privacy rights are broad and include the rights of access, correction, deletion, human review of automated decisions, individual autonomy, the right to be informed, and the right to impermanence, among others. This would be the most comprehensive plan to date. The establishment of a new agency tasked specifically with administering and enforcing privacy laws would be incredibly powerful, and the creation of such an agency would be valuable irrespective of whether this bill is passed.

H.R. 821 – The Social Media Child Protection Act is a sister bill to one by a similar name that originated in the Senate. This bill aims to protect children from the harms of social media by limiting children's access to it. Under the bill, social media platforms would be required to verify the age of every user before the user may access the platform, either through submission of a valid identity document or through another reasonable verification method, and would be prohibited from allowing users under the age of 16 to access the platform. The bill also requires platforms to establish and maintain reasonable procedures to protect personal data collected from users, and it provides for a private right of action as well as state and FTC enforcement.

S. 1291 – The Protecting Kids on Social Media Act is similar to its counterpart in the House, with slightly less tenacity. It similarly aims to protect children from social media's harms. Under the bill, platforms must verify users' ages, must not allow a user to use the service unless their age has been verified, and must limit access to the platform for children under 12. The bill also prohibits retention and use of information collected during the age verification process. Platforms must take reasonable steps to require affirmative consent from the parent or guardian of a minor who is at least 13 years old before a minor account is created, and must reasonably allow the parent to later revoke that consent. The bill also prohibits the use of data collected from minors for algorithmic recommendations, and it would require the Department of Commerce to establish a voluntary program for secure digital age verification for social media platforms. Enforcement would be through the FTC or state action.

S. 1409 – The Kids Online Safety Act, proposed by Senator Blumenthal of Connecticut, also aims to protect minors from online harms. This bill, like the Online Safety Bill, establishes fiduciary duties for social media platforms regarding children using their sites. The bill requires that platforms act in the best interests of minors using their services, including by mitigating harms that may arise from use, sweeping in online bullying and sexual exploitation. Social media sites would be required to establish and provide access to safeguards, such as settings that restrict access to minors' personal data, and to grant parents tools to supervise and monitor minors' use of the platforms. Critically, the bill establishes a duty for social media platforms to create and maintain research portals, for non-commercial purposes, to study the effect that corporations like the platforms have on society.

Overall, these bills indicate Congress's creative thinking and commitment to broad privacy protection for users against social media harms. I believe the establishment of a separate governing body, other than the FTC, which lacks the powers needed to compel compliance, is a necessary step. Recourse for violations on par with the EU's new regulatory scheme, mainly fines in the billions, could help.

Many of the bills, serving myriad aims, establish new fiduciary duties for the platforms in preventing unauthorized use and harms to children. There is real promise in this scheme: establishing duties of loyalty, diligence, and care for one party has a sound basis in many areas of law and would be more easily understood in implementation.

The notion that platforms would need to be vigilant in knowing their content, studying its effects, and reporting those effects may do the most to create a stable future for social media.

The legal responsibility of platforms to police and enforce their own policies and terms and conditions is another opportunity to further incentivize platforms. The FTC currently investigates policies that are misleading or unfair, sweeping in the social media sites, but there is an opportunity to make the platforms legally responsible for enforcing their own policies regarding, for example, age, hate, and inappropriate content.

What would you like to see considered in Privacy law innovation for social media regulation?

Social Media: a pedophile’s digital playground.

A few years ago, as I got lost in the wormhole that is YouTube, I stumbled upon a family channel, "The ACE Family." They had posted a cute video in which the mom played a prank on her boyfriend by dressing their infant daughter in a tiny crochet bikini. I didn't think much of it at the time, as it seemed innocent and cute, but then I pondered it. I had stumbled on this video without any ill intent, but how easy would it be for someone to find content like this while searching with far more disgusting intent?

When you Google "social media child pornography," you get many articles from 2019. That year, a YouTuber using the name "MattsWhatItIs" posted a YouTube video titled "YouTube is Facilitating the Sexual Exploitation of Children, and it's Being Monetized (2019)"; the video has 4,305,097 views to date and has not been removed from the platform. The author discusses a potential child pornography ring on YouTube that was being facilitated by a glitch in the algorithm. He demonstrates how, with a brand-new account on a VPN, all it takes is two clicks to end up in this ring. The search started with a "bikini haul" query. After two clicks in the recommended videos section, he stumbles upon an innocent-looking homemade video. The video itself looks harmless, but he scrolls down to the comments to expose the dark side: multiple random accounts comment timestamps, and those timestamps link to moments in the video where the children are in compromising, implicitly sexual positions. The most disturbing part is that the algorithm glitches once you enter the wormhole, and you get stuck on these "child pornography" videos. Following the vast attention this video received, YouTube created an algorithm that is supposed to catch this predatory behavior; when the video was posted, it didn't seem to be doing much.

YouTube has since implemented a "Child Safety Policy," which details the categories of content the platform aims to police. It also includes recommended steps for parents or agents posting content in which children are the focus. "To protect minors on YouTube, content that doesn't violate our policies but features children may have some features disabled at both the channel and video level. These features may include:

  • Comments
  • Live chat
  • Live streaming
  • Video recommendations (how and when your video is recommended)
  • Community posts”

Today, when you look up news on this topic, you don't find much. There are forums exposing the many methods these predators use to get around the algorithms platforms have set up to detect their activity. Many predators leave links to child pornography in the comments sections of specific videos. Others use generic terms with the initials "C.P.," a common abbreviation for "child pornography," and codes like "caldo de pollo," which means "chicken soup" in Spanish. Many dedicated and concerned parents have formed online communities that scour the Internet for this disgusting content and report it to social media platforms. But if volunteer communities can scan the Internet for this activity and report it, why haven't social media platforms created departments dedicated to this issue? Most technology companies use automated tools to detect images and videos that law enforcement has already categorized as child sexual exploitation material. Still, they struggle to identify new, previously unknown material and rely heavily on user reports.

The Child Rescue Coalition has created the "Child Protection System" software. This tool has a growing database of more than a million hashed images and videos, which it uses to find computers that have downloaded them. The software can track I.P. addresses, which are shared by people connected to the same Wi-Fi network, as well as individual devices. According to the Child Rescue Coalition, the system can follow devices even if the owners move or use virtual private networks (VPNs) to mask their I.P. addresses. Last year, the organization expressed interest in partnering with social media platforms to combine resources to crack down on child pornography. Unfortunately, some are against this, as it would give social media companies access to this unregulated database of suspicious I.P. addresses. Thankfully, many law enforcement departments have partnered up and used this software, and as the president of the Child Rescue Coalition said: "Our system is not open-and-shut evidence of a case. It's for probable cause."
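The hash-database approach described above can be sketched in a few lines. This is a hypothetical illustration only: production systems (such as perceptual-hash tools like Microsoft's PhotoDNA) use hashes that survive resizing and re-encoding, whereas this sketch uses exact SHA-256 digests, and the file contents and database here are invented for the example.

```python
import hashlib

# Hypothetical database of hashes of known, already-categorized material.
# In practice this would be populated from a vetted law-enforcement source,
# not hard-coded byte strings as shown here.
known_hashes = {
    hashlib.sha256(b"known-illegal-file-bytes").hexdigest(),
}

def is_known_material(file_bytes: bytes) -> bool:
    """Return True if the file's SHA-256 digest matches the database."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in known_hashes

# A platform could run this check on each upload before publishing it.
print(is_known_material(b"known-illegal-file-bytes"))   # True: exact match
print(is_known_material(b"innocuous-cat-photo-bytes"))  # False: no match
```

The sketch also shows why such systems "struggle to identify new, previously unknown material": a file whose hash is not already in the database passes the check, no matter what it depicts.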

The United States Department of Justice has published a "Citizen's Guide to U.S. Federal Law on Child Pornography." The first line on this page reads, "Images of child pornography are not protected under First Amendment rights, and are illegal contraband under federal law." Commonly, federal jurisdiction applies if the child pornography offense occurred in interstate or foreign commerce; in today's digital era, federal law almost always applies when the Internet is used to commit these offenses. The United States has implemented multiple laws that define child pornography and what constitutes a crime related to it.

Whose job is it to protect children from these predators? Should social media platforms have to regulate this? Should parents be held responsible for contributing to the distribution of this media?

 

“Unfortunately, we’ve also seen a historic rise in the distribution of child pornography, in the number of images being shared online, and in the level of violence associated with child exploitation and sexual abuse crimes. Tragically, the only place we’ve seen a decrease is in the age of victims.

 This is – quite simply – unacceptable.”

-Attorney General Eric Holder Jr. speaks at the National Strategy Conference on Combating Child Exploitation in San Jose, California, May 19, 2011.

Can Social Media Be Regulated?

In 1996, Congress passed what is known as Section 230 of the Communications Decency Act (CDA), which provides immunity to website publishers for third-party content posted on their websites. The CDA holds that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." The Act was created in a different time and era, one that could hardly envision how fast the internet would grow in the coming years. In 1996, social media consisted of a little-known website called Bolt, and the idea of a global World Wide Web was still very much in its infancy. The internet was still largely based on dial-up technology, and the government was looking to expand its reach. This Act laid the foundation for the explosion of social media, e-commerce, and a society that has grown tethered to the internet.

The advent of smartphones in the late 2000s, coupled with the CDA, set the stage for a society that is constantly tethered to the internet and has allowed companies like Facebook, Twitter, YouTube, and Amazon to carve out niches within our now globally integrated society. Facebook alone averaged over 1.9 billion daily users in the second quarter of 2021.

Recent studies conducted by the Pew Research Center show that "[m]ore than eight in ten Americans get news from digital services."

[Chart: Large majority of Americans get news on digital devices (Pew Research Center)]

While older members of society still rely on online news media, the younger generation, namely those 18-29 years of age, receives its news via social media.

[Chart: Online, most turn to news websites, except for the youngest, who are more likely to use social media (Pew Research Center)]

The role social media plays in the lives of the younger generation needs to be recognized. Social media has grown at a far greater rate than anyone could have imagined, and it currently operates under its own modus operandi, completely free of government interference, due to its classification as a private entity and its protection under Section 230.

Throughout the 20th century, when television news media dominated the scene, laws were put into effect to ensure that television and radio broadcasters would be monitored by both the courts and government regulatory commissions. For example, "[t]o maintain a license, stations are required to meet a number of criteria. The equal-time rule, for instance, states that registered candidates running for office must be given equal opportunities for airtime and advertisements at non-cable television and radio stations beginning forty-five days before a primary election and sixty days before a general election."

These laws and regulations were put in place to ensure that the public interest in broadcasting was protected. To give substance to the public interest standard, Congress has from time to time enacted requirements for what constitutes the public interest in broadcasting. But Congress also gave the FCC broad discretion to formulate and revise the meaning of broadcasters' public interest obligations as circumstances changed.

The Federal Communications Commission's (FCC) authority is constrained by the First Amendment, but the agency acts as an intermediary that can intervene to correct perceived inadequacies in overall industry performance, though it cannot trample on the broad editorial discretion of licensees. The Supreme Court has continuously upheld the public trustee model of broadcast regulation as constitutional. Criticisms of regulating social media center on the notion that the platforms are purely private entities that do not fall under the purview of the government, and yet these same issues presented themselves in the precedent-setting case of Red Lion Broadcasting Co. v. Federal Communications Commission (1969). In that case, the court held that the "rights of the listeners to information should prevail over those of the broadcasters." The Court's holding centered on the public's right to information over the rights of a broadcast company to choose what it will share, and this is exactly what is at issue today when we look at companies such as Facebook, Twitter, and Snapchat censoring political figures who post views that the companies feel may incite anger or violence.

In essence, what these organizations are doing is keeping information and views from the attention of the present-day viewer. The vessel for the information has changed; it is no longer found in television or radio but primarily in social media. Currently, television and broadcast media are restricted by Section 315(a) of the Communications Act and Section 73.1941 of the Commission's rules, which "require that if a station allows a legally qualified candidate for any public office to use its facilities (i.e., make a positive identifiable appearance on the air for at least four seconds), it must give equal opportunities to all other candidates for that office to also use the station." No such restriction exists for social media organizations.

This is not meant to argue for one side or the other but merely to point out that there is a political discourse being stifled by these social media entities, which have shrouded themselves in the veil of a private entity. What these companies fail to mention, however, is just how political they truly are. For instance, Facebook proclaims itself to be an unbiased source for all parties, yet it currently employs one of the largest lobbyist groups in Washington, D.C. Four Facebook lobbyists have worked directly in the office of House Speaker Pelosi, and Pelosi herself has a very direct connection to Facebook: she and her husband own between $550,000 and over $1,000,000 in Facebook stock. None of this is illegal; however, it raises the question of just how unbiased Facebook really is.

If the largest source of news for the coming generation is not television, radio, or news publications themselves, but rather social media platforms such as Facebook, then how much power should they be allowed to wield without some form of regulation? The question being presented here is not a new one, but rather the same question asked in 1969, simply phrased differently: how much information is a citizen entitled to, and at what point does access to that information outweigh the rights of the organization to exercise its editorial discretion? I believe the answer to that question is the same now as it was in 1969, and that the government ought to take steps similar to those taken with radio and television. What this looks like is ensuring that, through social media, the public has access to a significant amount of information on public issues so that its members can make rational political decisions. At the end of the day, that is what is at stake: the public's ability to make rational political decisions.

Large social media conglomerates such as Facebook and Twitter have long outgrown their place as private entities; they have grown into a public medium that has tethered itself to the realities of billions of people. Certain aspects of them need to be regulated, mainly those that interfere with the public interest, and there are ways to do this without interfering with the overall First Amendment right of free speech for all Americans. Where, however, social media blends being a private forum for all people to express their ideas under firmly stated "terms and conditions" with being an entity that strays into the political field, whether by censoring heads of state or by hiring over $50,000,000 worth of lobbyists in Washington, D.C., there need to be regulations that draw the line and ensure the public maintains the ability to make rational political decisions, decisions that are not influenced by any one organization. The time to address this issue is now, while there is still a middle ground in how people receive their news and formulate opinions.

Free speech, should it be so free?

In the United States, everybody is entitled to free speech; however, we must not forget that the First Amendment of the Constitution only protects individuals from federal and state action. That being said, free speech is not protected from censorship by private entities, like social media platforms. In addition, Section 230 of the Communications Decency Act (CDA) provides technology companies like Twitter, YouTube, Facebook, Snapchat, and Instagram, as well as other social media giants, immunity from liability arising from the content posted on their websites. The question becomes whether it is fair for an individual who desires to express himself or herself freely to be banned from certain social media websites for doing so. What is the public policy behind this? What standards do these social media companies employ when determining who should or should not be banned? On the other hand, are social media platforms being used as tools or weapons when it comes to politics? Do they play a role in how the public votes? Are users truly seeing what they think they have chosen to see, or is the content being displayed targeted to them in ways that may ultimately create biases?

As we saw earlier this year, former President Trump was banned from several social media platforms as a result of the January 6, 2021 assault on the U.S. Capitol by Trump supporters. It is no secret that our former president is not shy about his comments on a variety of topics; some audiences view him as outspoken, direct, or perhaps provocative. When Twitter announced its permanent suspension of former President Trump's account, its rationale was to prevent further incitement of violence. After he falsely claimed that the 2020 election had been stolen from him, thousands of Trump supporters gathered in Washington, D.C. on January 5 and January 6, which ultimately led to violence and chaos. As a public figure and a politician, our former president should have known that his actions and viewpoints on social media are likely to have a significant impact on the public. Public figures and politicians should be held to a higher standard, as they represent the citizens who voted for them; as such, they are influential. Technology companies like Twitter saw the former president's tweets as potential threats to the public as well as a violation of their company policies; hence, the ban on his account was justified. The ban was an instance of private action as opposed to government action. In other words, former President Trump's First Amendment rights were not violated.


First, let us discuss the fairness aspect of censorship. Yes, individuals possess the right to free speech; however, when the public’s safety is at stake, action is required to avoid chaos. For example, you cannot falsely shout “fire” in a crowded movie theater, as it would cause panic and unnecessary disorder. There are rules you must comply with in order to use a facility, and those rules are in place to protect the general welfare. As a user, if you don’t like the rules set forth by that facility, you can simply avoid using it. It does not necessarily mean that your idea or speech is strictly prohibited, just not in that particular facility. Similarly, if users of social media platforms fail to follow company policies, the companies reserve the right to ban them. Public policy arguably outweighs individual freedom. As for the standards employed by these technology companies, there is no bright line. As I previously mentioned, Section 230 grants them immunity from liability. That being said, the content is largely unregulated, and these social media giants are therefore free to implement and enforce policies as they see fit.


In terms of politics, I believe social media platforms do play a role in shaping their users’ perspectives in some way. This is because the content displayed is targeted, if not tailored, based on data collected about each user’s preferences and past habits. The activities each user engages in are monitored, measured, and analyzed. In a sense, these platforms can be used as a weapon, as they may manipulate users without the users even knowing. Often we are not even aware that the videos or pictures we see online are being presented to us because of content we previously viewed or selected. In other words, these social media companies may be censoring what they don’t want you to see, or what they think you don’t want to see. For example, some technology companies are pro-vaccination. They are more likely to post facts about COVID-19 vaccines or publish posts that encourage their users to get vaccinated. We think we have control over what we see or watch, but do we really?


There are advantages and disadvantages to censorship. Censorship can reduce the negative impact of hate speech, especially on the internet. By limiting certain speech, we create more opportunities for equality. In addition, censorship can curb the spread of racism; for example, posts and videos containing racist comments can be blocked by social media companies if deemed necessary. Censorship can also protect minors from seeing harmful content, and because children can be manipulated easily, it helps promote safety. Moreover, censorship can be a vehicle to stop false information; during unprecedented times like this pandemic, misinformation can be fatal. On the other hand, censorship may not be good for the public, as it creates a specific narrative in society that can potentially cause biases. For example, many blamed Facebook for the outcome of an election, which is detrimental to our democracy.

Overall, I believe that some sort of social media censorship is necessary. The cyber-world is intertwined with the real world. We can’t let people do or say whatever they want, as it may have dramatically detrimental effects. The issue is how to keep the best of both worlds.

 

Off Campus Does Still Exist: The Supreme Court Decision That Shaped Students’ Free Speech

We currently live in a world centered around social media. I grew up in a generation where social media apps like Facebook, Snapchat, and Instagram had just become popular. I remember a time when Facebook was limited to college students, and we did not communicate back and forth with pictures that simply disappear. Today, many students across the country use social media sites as a way to express themselves, but when does that expression go too far? Is it legal to bash other students on social media? What about teachers, after receiving a bad test score? Does it matter who sees the post or where it was written? What if the post disappears after a few seconds? These are all questions that in the past we had no answer to. Thankfully, in the past few weeks the Supreme Court has guided us on how to answer them. In Mahanoy Area School District v. B.L., the Supreme Court decided how far a student’s right to free speech extends and how much control a school district has in restricting a student’s off-campus speech.

The question presented in Mahanoy Area School District v. B.L. was whether a public school has the authority to discipline a student over something they posted on social media while off campus. The student in this case was a girl named Levy, a sophomore in the Mahanoy Area School District. Levy was hoping to make the varsity cheerleading team that year, but unfortunately she did not. She was very upset when she found out a freshman got the position instead, and she decided to express her anger about the decision on social media. Levy was in town with her friend at a local convenience store when she sent “F- School, F- Softball, F- Cheerleading, F- Everything” to her list of friends on Snapchat, in addition to posting it on her Snapchat story. One of these friends screenshotted the post and sent it to the cheerleading coach. The school district investigated the post, and Levy was suspended from cheerleading for one year. Levy and her parents were extremely upset with this decision, and it resulted in a lawsuit that would shape students’ right to free speech for a long time.

In the lawsuit, Levy and her parents claimed that Levy’s cheerleading suspension violated her First Amendment right to free speech. They sued the Mahanoy Area School District under 42 U.S.C. § 1983, claiming that (1) her suspension from the team violated the First Amendment; (2) the school and team rules were overbroad and viewpoint discriminatory; and (3) those rules were unconstitutionally vague. The district court granted summary judgment in favor of Levy, stating that the school had violated her First Amendment rights. The U.S. Court of Appeals for the Third Circuit affirmed the district court’s decision. The Mahanoy School District petitioned for a writ of certiorari, and the case was finally heard by the Supreme Court.

The Mahanoy School District argued that the Court’s previous ruling in Tinker v. Des Moines Independent Community School District acknowledges that public schools do not possess absolute authority over students, and that students retain First Amendment speech protections at school so long as their expression does not become substantially disruptive to the proper functioning of the school. Mahanoy emphasized that the Court intended Tinker to extend beyond the schoolhouse gates and to cover not just on-campus speech, but any speech likely to result in on-campus harm. Levy countered by arguing that the ruling in Tinker only applies to speech on school grounds.

In an 8-1 decision, the Court ruled against Mahanoy. The Supreme Court held that the Mahanoy School District violated Levy’s First Amendment rights by punishing her for posting a vulgar story on her Snapchat while off campus. The Court reasoned that the speech did not amount to severe bullying, nor was it substantially disruptive to the school itself. The Court also noted that the post was visible only to her friends list on Snapchat and would disappear within 24 hours. It is not the school’s job to act as a parent, but it is its job to make sure actions off campus will not endanger the school. The Supreme Court also stated that although the student’s expression was unfavorable, failing to protect students’ opinions would limit their ability to think for themselves.

It is remarkably interesting to think about how the minor facts of this case determined the ruling. What if the message had been posted on Facebook? One factor that helped the Court reach its decision was that the story was visible only to about 200 of her friends on Snapchat and would disappear within a day. One can assume that if Levy had made this a Facebook status visible to all, with no posting time frame, the Court could have ruled very differently. Where the Snapchat post was uploaded ended up being another major factor in this case. Under the Tinker ruling, if Levy had posted this on school grounds, the Mahanoy School District could have had the authority to discipline her for her post.

Technology is advancing each day, and I am sure that in the future, as more social media platforms emerge, the Court will have to set new precedent. I believe that the Supreme Court made the right decision in this case. I feel that speech which is detrimental to another individual should be monitored, whether it is off-campus or on-campus speech, regardless of the platform on which it is posted. In Levy’s case no names were listed; she was expressing frustration at not making a team. I do believe that her speech was vulgar, but I do not believe that the school, or any other students, suffered severe detriment from the post.

If you were serving as a Justice on the Supreme Court, would you have ruled against the Mahanoy School District? Do you believe it matters which platform the speech is posted on? What about the location from which it was posted?

How One Teenager’s Snapchat Shaped Students’ Off-Campus Free Speech Rights

Did you ever not make your high school sports team or get a bad grade on an exam? What did you do to blow off steam? Did you talk to your friends or parents about it, or write about it in your journal? When I was in high school, some of my classmates would use Twitter or Snapchat to express themselves. However, rates of smartphone and social media use were much lower than they are today; high school students now use their smartphones and social media at an incredibly high rate compared to when I was in high school almost ten years ago. In fact, according to the Pew Research Center, 95% of teenagers have access to smartphones and 69% of teenagers use Snapchat. This is exactly why the recent Supreme Court decision in Mahanoy Area School District v. B.L. is more important than ever, as it pertains to students’ free speech rights and how much power schools have in controlling their students’ off-campus speech. Further, this decision is all the more necessary because the last time the Supreme Court ruled on students’ free speech was over fifty years ago, in Tinker v. Des Moines, well before anyone had smartphones or social media. The latest decision will therefore shape the scope of school districts’ power and students’ First Amendment rights for perhaps the next fifty years.

 

The main issue in Mahanoy Area School District v. B.L. is whether public schools can discipline students over something they said off campus. The facts of this case arose when Levy was a sophomore in the Mahanoy Area School District. Levy didn’t make the varsity cheerleading team; naturally, she was upset and frustrated about the situation. That weekend, Levy was at the convenience store in town with a friend. Levy and the friend took a Snapchat with their middle fingers raised and the caption “F- School, F- Softball, F- Cheerleading, F- Everything” and sent it to her Snapchat friends. The picture was then screenshotted and shown to the cheerleading coach, which led to Levy being suspended from the cheerleading team for one year.

 

Levy and her parents did not agree with the suspension or the school’s involvement in Levy’s off-campus speech, so they filed a lawsuit claiming the suspension violated Levy’s First Amendment free speech rights. Levy sued the school under 42 U.S.C. § 1983, alleging (1) that her suspension from the team violated the First Amendment; (2) that the school and team rules were overbroad and viewpoint discriminatory; and (3) that those rules were unconstitutionally vague. The district court granted summary judgment in favor of Levy, stating that the school had violated her First Amendment rights. The U.S. Court of Appeals for the Third Circuit affirmed the district court’s decision, and the Mahanoy School District petitioned for a writ of certiorari.

 

In an 8-1 decision, the Supreme Court ruled in favor of Levy, holding that the Mahanoy Area School District violated Levy’s First Amendment rights by punishing her for using vulgar language that criticized the school on social media. The Court gave numerous reasons for its ruling. At the same time, the Court noted the importance of schools monitoring and punishing some off-campus speech, such as speech and behavior constituting “serious or severe bullying or harassment targeting particular individuals; threats aimed at teachers or other students.” This is more necessary than ever before due to the increase in online bullying and harassment, which can impact the day-to-day activities of the school and the development of minors.

 

While it is important in some circumstances for schools to monitor and address off-campus speech, the Supreme Court noted three reasons that limit schools from interfering with students’ off-campus speech. First, concerning off-campus speech, a school will rarely stand in loco parentis. Schools therefore do not have more authority than parents, especially over off-campus speech. The parent is the authority figure and will decide whether to discipline in most areas of their child’s life, especially what happens outside of school. This is important because parents have the authority to raise and discipline their children according to their own beliefs, not the school district’s.

 

Second, “from the student perspective, regulations of off-campus speech, when coupled with regulations of on-campus speech, include all the speech a student utters during the full 24-hour day.” There would be no boundaries or limitations on what the school district could discipline its students for. For instance, what if a group of students decided to make a TikTok on a Saturday night, and during the video the students cursed and used vulgar language? Would they be in trouble? If there were no limits on what the school could punish as off-campus speech, those students could be disciplined for their TikTok video. It is therefore important that the Supreme Court made this distinction to protect students’ First Amendment rights.

 

Finally, the third reason is that “the school itself has an interest in protecting a student’s unpopular expression, especially when the expression takes place off-campus.” The Supreme Court explained that if schools did not protect their students’ unpopular opinions, it would limit the students’ ability to express themselves; schools are a place for students to learn and form their own opinions, even opinions that differ from the school’s. Anything less would severely impair students’ ability to think for themselves, form their own opinions, and respect opinions that differ from their own.

 

Overall, I agree with the Supreme Court’s decision in this case. I believe it is essential to separate in-school speech from off-campus speech. The only time off-campus speech should be monitored and addressed by the school is when there is bullying, harassing, or threatening language directed against the school or against groups or individuals at the school. The Supreme Court noted three very important reasons why public schools cannot have full control over students’ off-campus speech, and all three are fair and justifiable grounds for protecting parents and students from being overly controlled by the school. That said, there are still a lot of questions and uncertainty, especially as technology rapidly advances and new social media platforms emerge frequently. I am curious whether the Supreme Court will rule on a similar case within the next fifty years, and how this decision will impact schools in the next few years.

 

Do you agree with the Supreme Court decision and how do you see this ruling impacting public schools over the next few years?

The First Amendment Is Still Great For The United States…Or Is It?

In the traditional sense, of course it is. The idea of free speech should always be upheld, without question. When it comes to the 21st century, however, this nearly two-and-a-half-century-old amendment poses extreme roadblocks. Here, I will discuss how the First Amendment inhibits our ability to tackle extremism and hatred on social media platforms.

One of the things I will highlight is how other countries are able to enact legislation to deal with the ever-growing hate that festers on social media. They are able to do so because they do not have a “First Amendment.” The idea of free speech is simply ingrained in those democracies; they do not need an archaic document, to which they are forever bound, to tell them that. Here in the U.S., as we all know, Congress can be woefully slow and inefficient, with a particular emphasis on refusing to update outdated laws.

The First Amendment successfully blocks any government attempt to regulate social media platforms. Any such attempt is met, mostly by conservatives, with cries that the government wants to take away free speech, and the courts will not allow the legislation to stand. This in turn means Facebook, Snapchat, Instagram, Reddit, and all the other platforms never have to worry about the white supremacist and other extremist rhetoric that is prevalent on their platforms. Even further than that, most, if not all, of their algorithms push those vile posts to hundreds of thousands of people. We are “not allowed” to introduce laws that establish a baseline for regulating platforms in order to crack down on the terrorism that flourishes there. Just as you are not allowed to scream fire in a movie theater, it should not be allowed to post and form groups to spread misinformation, white supremacy, racism, and the like. Those topics do not serve the interests of greater society.

Yes, regulation would make it a lot harder for people to easily share their thoughts, no matter how appalling they may be. However, not allowing hate to spread online, where millions of people can see it within 30 seconds, is not taking away anyone’s free speech rights. Platforms don’t even necessarily have to delete the posts; they could simply change their algorithms to stop promoting misinformation and hate and promote truth instead, even if the truth is boring. They won’t do that, though, because promoting lies is what makes them money, and it is always money over the good of the people. Another reason this doesn’t limit people’s free speech is that they can still form in-person groups, talk in private, start an email chain, and so on. The idea behind regulating what can be posted on social media websites is to make the world a better place for all: to make it harder for racist ideas and terrorism to spread, especially to young, impressionable children and young adults.
This shouldn’t be a political issue; shouldn’t we all want to limit the spread of hate?

It is hard for me to imagine the January 6th insurrection at our Capitol occurring had we had regulations on social media in place. Many of the groups that planned the insurrection had “Stop the Steal” groups and other election-fraud conspiracy pages on Facebook. Imagine if we had a law requiring social media platforms to take down posts and pages spreading false information that could incite violence or threaten the security of the United States. I realize that is broad discretion; the legislation would have to be worded very narrowly, and decisions to remove posts should be made with the highest level of scrutiny. Had we had a regulation like that in place, these groups would not have been able to reach as wide an audience. I think Ashli Babbitt and Officer Sicknick would still be alive had Facebook been obligated to take those pages and posts down.

Alas, we are unable even to consider legislation addressing this cause, because the courts and many members of Congress refuse to acknowledge that we must update our laws and rethink how we read the First Amendment. The founders could never have imagined the world we live in today. Congress and the courts need to stop pretending that a piece of paper written more than two centuries ago is some untouchable work from God. The founders wrote the First Amendment to ensure that no one would be thrown in jail for speaking their mind, so that people who hold different political views could not be persecuted, and to give people the ability to express themselves. Enacting legislation to prevent blatant lies, terrorism, racism, and white supremacy from spreading so easily online does not go against the First Amendment. It does not tell people they cannot hold those views; it does not throw anyone in prison or hand out fines for those views; and white supremacist and other racist ideas are not “political discourse.” Part of the role of government is to protect the people and to do what is right for society as a whole, and I fail to see how telling social media platforms they need to take down these appalling posts is outweighed by the idea that “nearly everything is free speech, even if it poisons the minds of our youth and perpetuates violence, because that’s what the First Amendment says.”

Let’s now look at the United Kingdom and what it is able to do because it has no law comparable to the First Amendment. In May of 2021, the British Parliament introduced the Online Safety Bill. If passed into law, the bill will place a duty of care on social media firms and websites to ensure they take swift action to remove illegal content, such as hate crimes, harassment, and threats directed at individuals, including abuse that falls below the criminal threshold. As currently written, the bill would also require social media companies to limit the spread of, and remove, terrorist material, content promoting suicide, and child sexual abuse material, and would mandate that companies report such postings to the authorities. Lastly, the Online Safety Bill would require companies to safeguard freedom of expression and reinstate material unfairly removed, including forbidding tech firms from discriminating against particular political viewpoints. The bill reserves the right for Ofcom (the UK’s communications regulator) to hold companies accountable for the arbitrary removal of journalistic content.

The penalties for not complying with the proposed law would be significant. Social media companies that do not comply could be fined up to 10% of their annual global turnover or £18 million, whichever is greater. Further, the bill would allow Ofcom to bring criminal actions against named senior managers whose companies do not comply with Ofcom’s requests for information.

It will be interesting to see how implementation goes if the bill is passed. I believe it is a good stepping stone toward reining in the willful ignorance displayed by these companies. Again, it is important that these bills be carefully scrutinized; otherwise you may end up with a bill like the one proposed in India. While I will not discuss that bill at length in this post, you can read more about it here. In short, India’s bill is widely seen as autocratic in nature, giving the government the ability to fine or criminally prosecute social media companies and their employees if they fail to remove content that the government does not like (for instance, posts criticizing its new agriculture regulations).

Bringing this ship back home: can you imagine a bill like Britain’s ever passing in the US, let alone being introduced? I certainly can’t, because we still insist on worshiping an amendment that is 230 years old. The founders wrote it based on the circumstances of their time; they could never have imagined what today would look like. Ultimately, the decision to allow us to move forward and adopt our own laws regulating social media companies is up to the Supreme Court. Until the Supreme Court wakes up and allows a modern reading and interpretation of the First Amendment, any law to hold these companies accountable is doomed to fail. It is illogical to put a piece of paper above the safety and well-being of Americans, yet we consistently do just that. We will keep seeing reports of red flags missed and people murdered as a result, or of Facebook pages helping spread another “Big Lie” that results in another Capitol besieged, all because we cannot move away from our past to brighten our future.

 

What would you do to help curtail this social dilemma?

A Slap in the Face(book)?

Social media law has become a contentious issue in recent years. While most people nowadays could not imagine life without social media, many also realize that its influence on our daily lives may not be a good thing. As the technology has advanced to unimaginable levels and the platforms have boomed in popularity, it seems as though our smartphones and Big Tech know our every move. The leading social media platform, Facebook, has around 1.82 billion active users a day, with people volunteering all sorts of personal information to be stored in its databases. Individual profiles hold pictures of our children, our friends, our family, the meals we eat, the places we visit. “What’s on your mind?” is the opening invitation on any Facebook page, and one can only hazard a guess as to how many people actually answer that question on a daily basis. Social media sites know our likes, our dislikes, our preferences, our moods, the shoes we want to buy for that dress we are thinking of wearing to the party we are looking forward to in three weeks!

With all that knowledge comes enormous power, and through algorithmic design, social media can manipulate our thoughts and beliefs by controlling what we see and don’t see. With all that power should come responsibility, but Section 230 of the Communications Decency Act (CDA) has created a stark disconnect between the two. What started out as a worthy protection for internet service providers against liability for content posted by others has more recently drawn criticism for the lack of accountability it affords social media oligarchs such as Jack Dorsey (Twitter) and Mark Zuckerberg (Facebook).

However, that could all be about to change.

On May 28, 2017, three friends lost their lives in a deadly car accident in which the 17-year-old driver, Jason Davis, crashed into a tree at an estimated speed of 113 mph. Landen Brown, 20, and Hunter Morby, 17, were passengers. Tragic accident? Or wrongful death?

Parents of the deceased lay blame on the Snapchat app, which offered a ‘Speed Filter’ that clocked how fast you were moving and allowed users to snap and share videos of their movement in progress.

You see where this is going.

As quickly became the trend, the three youths used the app to see how fast they could record the speed of their car. Just moments before their deaths, Davis had posted a ‘snap’ clocking the car’s speed at 123 mph. In Lemmon v. Snap, the parents of two of the boys brought suit against the social media provider, Snap, Inc., claiming that the app feature encouraged reckless driving and ultimately served to “entice” the young users to their deaths.

Until now, social media platforms and other internet service providers have enjoyed near-absolute immunity from liability. Written in 1996, Section 230 was designed to protect tech companies from liability, in suits such as defamation actions, for third-party posts. In the early days, it was small tech companies, or online businesses with a ‘comments’ feature, that generally saw the benefits of the statute. Twenty-five years later, many people are questioning the role of Section 230 in the vastly developed era of social media and the powerful pass it grants Big Tech for many of its societal shortcomings.

Regarded more as open forums than as publishers or speakers, social media platforms such as Facebook, Twitter, TikTok, Instagram, and Snapchat have been shielded by Section 230 from legal claims of harm caused by content posted on their sites.

Applied broadly, Section 230 arguably prevents Snap, Inc. from being held legally responsible for the deaths of the three boys in this case, and that is the defense the tech company relied upon. The district court dismissed the case on those grounds, holding that the captured speeds fell into the category of content published by a third party, for which the service provider cannot be held liable. The Ninth Circuit, however, disagreed. The court’s interesting swerve around such immunity is that the speed filter allegedly resulted in the deaths of the boys regardless of whether their captured speeds were posted. In other words, it did not matter whether the vehicle’s speed was shared with others in the app; the fact that the app promotes, and rewards, high speed (although the reward system within the app is not entirely clear) is enough.

The implications of this could be tremendous. At a time when debate over 230 reevaluations is already heavy, this precedential interpretation of Section 230 could lead to some cleverly formulated legal arguments for holding internet service providers accountable for some of the highly damaging effects of internet, social media and smart phone usage.

For the many benefits the internet has to offer, it can no longer be denied that there is another, very ugly side to internet usage, in particular with social media.

It is somewhat of an open secret that social media platforms such as Facebook and Instagram purposely design their apps to be addictive to their users. It is also no secret that there is a growing association between social media usage and suicide, depression, and other mental health issues. Cyberbullying has long been a very real problem. In addition, studies have shown that smart-device screen time has shockingly detrimental impacts on very young children’s social and emotional development, not to mention the now commonly known damage it can do to a person’s eyesight.

An increased rate of divorce has been linked to smartphones, and distracted driving – whether it be texting, or keeping tabs on your Twitter retweets or Facebook ‘likes’ – is on the rise. Even an increase in accidents while walking has been linked to distractions caused by these addictive smart devices.

With accountability the underlying issue, it can of course be argued that almost all of these problems are a matter of personal responsibility. Growing apart from your spouse? Ditch your cell phone and reinvent date night. Feeling depressed about your life as you ‘heart’ a picture of your colleague’s wine glass in front of a perfect sunset beach backdrop? Close your laptop and stop comparing yourself to everyone else’s highlights. Stepped in front of a cyclist while LOL-ing in a group text? Seriously… put your Apple Watch hand in your pocket and look where you are going! The list of personal blame is endless. But then we hear about three young friends, two still in their teens, who lose their lives engaged with social media, and suddenly it is not so easy to blame them for their own devastating misfortune.

While social media sites cannot be held responsible for the content posted by others, no matter how hurtful it might be to some, or what actions it leads others to take, should they be held responsible for negligently making their sites so addictive, so emotionally manipulative, and so targeted toward individual users that such extensive and compulsive use leads to dire consequences? According to the Ninth Circuit, negligent app design can in fact be a cause of action for wrongful death.

With a potential crack in the 230-armor, the questions many lawyers will be scrambling to ask are:

      • What duties do the smart device producers and/or internet service providers owe to their users?
      • Are these duties breached by continuing to design, produce, and provide products that are now known to create such disturbing problems?
      • What injuries have occurred, and were those injuries foreseeably caused by any such breaches of duty?

For the time being, it is unlikely that any substantial milestone will be reached with regard to Big Tech accountability, but the Ninth Circuit's decision in this case has certainly delivered a powerful blow to Big Tech's apparent untouchability in the courtroom.

As awareness of all these social media-related issues grows, could this court decision open the door to further suits alleging defective or negligent product design resulting in death or injury? Time will tell… stay tuned.

Snapchat’s “Speed Filter” Fuels Fatalities

Upon its launch in 2011, the mobile app known as "Snapchat" quickly gained users, now totaling 265 million daily active users worldwide. Snapchat revolutionized the social media world with the introduction of filters – debuting "smart filters" to capture time, speed, and temperature in 2013, followed by "Geofilters" in August 2014 and "Discover" and "Lenses" in January 2015.

Snapchat in 2013

While filters can provide fun visual effects and cool color edits, the "speed filter" drew criticism early on for encouraging yet another distraction on the road for young drivers. Newly licensed teens could hardly wait to get in the driver's seat and snap a selfie overlaid with their vehicle's speed in real time. The widespread belief was that users could earn a virtual trophy through the app's reward system for snapping speeds over 100 miles per hour (mph) – further fueling the recklessness.


Concerns were raised early on about the dangers of the speed filter, and Snap responded by attaching a "Do Not Snap and Drive" disclaimer in 2016. Despite the company's minimal efforts to limit use of the feature while driving, life-threatening and fatal car accidents linked to the filter continued.


Studies indicate that Snapchat tops the list of apps most distracting to young drivers, and more than a third of teens surveyed admitted to Snapping while driving. The National Highway Traffic Safety Administration reports 26,004 deaths due to distracted-driving accidents between 2012 and 2019. By 2018, distraction-related fatalities had increased by 10% – killing 2,841 people and injuring 400,000 more. Drivers under the age of 19 account for the largest proportion of distracted-driving fatalities.

One of the earliest accidents involving the filter occurred in September 2015, with 18-year-old Christal McGee behind the wheel of her father's Mercedes. McGee admitted to grabbing her phone and using the filter to see how fast she could go. The Atlanta teen reached roughly 113 mph – about double the speed limit – before colliding with an Uber driver who was just beginning his night shift. As a result of the accident, the Uber driver was hospitalized for months and suffered a traumatic brain injury. He sued both McGee and Snapchat for negligence, alleging that Snapchat was equally responsible for the crash because it had failed to remove the speed filter after it was cited in similar accidents prior to the September 2015 crash.

Likewise, an incident occurred in late 2016 when 22-year-old Pablo Cortes posted a Snapchat video with the speed filter, accelerating from 82 mph to 115.6 mph. Just nine minutes later, Cortes lost control and struck a minivan – killing both himself and his 19-year-old passenger, Jolie Bartolome, as well as a mother and two of her children.

In the past, Snapchat has not faced liability for incidents arising out of the speed filter thanks to the Communications Decency Act (CDA). Section 230 of the CDA states that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider" (47 U.S.C. § 230). Congress enacted the CDA in 1996 with the intent to better regulate pornographic material on the Internet. With the growth of social media, Section 230 has become a powerful tool that shields tech companies and social media platforms from liability for content posted by their users.

However, just last month the Court of Appeals for the Ninth Circuit unanimously held that the CDA does not shield the creators of Snapchat from such claims. The lawsuit in Lemmon v. Snap arises out of an incident in May 2017 that killed three young men. The 17-year-old driver and his two friends used the speed filter to record a high of 123 mph just before hitting a tree at 113 mph. The parents of the deceased teens filed a lawsuit in 2019, alleging that the "negligent design" of the Snap Inc. app contributed to the crash by encouraging speeding. The trial judge dismissed the case in 2020, citing the immunity social media companies enjoy under the CDA.

In reversing the district court's decision, the Ninth Circuit applied the three-prong test set forth in Barnes v. Yahoo!, Inc. (2009) to assess whether Section 230 immunized Snap from the claims. Under that test, CDA immunity shields Snap from liability only if it is "(1) a provider or user of an interactive computer service (2) whom a plaintiff seeks to treat, under a state law cause of action, as a publisher or speaker (3) of information provided by another information content provider" (quoting Barnes). After carefully analyzing each of the three prongs, the Court reversed the district court's dismissal of the lawsuit and remanded it for further proceedings.

This holding rests on the fact that the suit is not about what someone posted to Snapchat, but rather about negligence in the design of the app itself. The decision is a major turning point in Internet law and regulation because it establishes that an internet company can be held liable for a defectively designed product. Although the language of Section 230 grants broad protection, Lemmon clearly demonstrates that Internet immunity has its limits and is not guaranteed. While the ruling is among the minority that have rejected CDA immunity for design claims against internet platforms, this departure from earlier decisions opens the door to future challenges to CDA immunity based on how a website's design affected the user, rather than how the user's content affected a third party.

Is There Such a Thing as Off-Campus Anymore?

The Supreme Court will soon decide Mahanoy Area School District v. B.L., which raises the issue of whether the First Amendment prohibits public school officials from regulating off-campus student speech.   The issue arose from an incident involving Brandi Levy (B.L.), who, after learning she had not made her school’s Varsity Cheerleading squad, posted a picture of herself on Snapchat with the caption “Fuck school fuck softball fuck cheer fuck everything.”  She made the post on a weekend while hanging out at a local convenience store.
Levy thought the post would disappear after 24 hours, and only about 250 people saw it during that time. But one person took a screenshot and showed it to the school's cheerleading coaches. The coaches decided Levy's snap violated team and school rules, which Levy had acknowledged before joining the team, and she was suspended from the school's junior varsity cheerleading team for a year.

Levy and her parents sued the school under 42 U.S.C. § 1983, arguing that the suspension violated her First Amendment right to free speech and that the school's disciplinary rules were overly broad. The district court granted summary judgment in B.L.'s favor, ruling that the school had violated her First Amendment rights. The U.S. Court of Appeals for the Third Circuit affirmed. On January 8, 2021, the Supreme Court granted certiorari, and it heard the case on April 28, 2021.

The case is the Court's first on regulating student speech in the Internet era. The Court's landmark ruling on the regulation of speech on school property came in 1969, when, in Tinker v. Des Moines Independent Community School District, it held that students' First Amendment rights do not end when they enter the schoolhouse gate. In that case, the Court struck down a high school policy that prohibited students from wearing armbands on campus in protest of the Vietnam War. Under Tinker, schools cannot regulate student speech unless it causes a material and substantial disruption to the school or student body.

When framed in the context of Tinker, Mahanoy Area School District seems a pretty straightforward case to decide. The question under Tinker becomes whether Levy's Snapchat posed a substantial disruption to the school. And quite frankly, although disrespectful, the post was not disruptive. The issue, however, is much bigger!

The Internet has given rise to considerable cyberbullying among students. Quite often the bullying occurs off campus but is targeted at fellow students or administrators. The Third Circuit has previously found in favor of free speech in two instances where students bullied school principals. Lisa S. Blatt, the attorney for the school board, summed it up best during oral argument: "When it comes to the Internet," Blatt argued, "things like time and geography are meaningless." Levy's case presents the Court with the thorny question of where the school steps begin in our current virtual world.

Levy posted her Snapchat in 2017. At that time, schools were grappling with how to handle off-campus cyberbullying between classmates. Many authorities agree that under the Tinker standard, school officials can intervene if off-campus speech has created, or could create, a substantial disruption or interference at school. Students have a right to feel secure on campus, and a school therefore has the power to discipline off-campus speech, even at the expense of a student's right to free speech. Courts have applied this standard in schools' favor in cases involving Internet chatter. In Rosario v. Clark County School Dist., a district court in 2013 upheld a school administration's decision to discipline and punish a minor for tweets he made, while at a restaurant, about the basketball coach who had dismissed him from the team. In Kowalski v. Berkeley Cnty. Schs., the Fourth Circuit ruled that a school did not violate a student's free speech rights by suspending her for creating and posting to a webpage that ridiculed fellow students.

On the other hand, in instances where students could prove in court that their off-campus social media activity did not substantially disrupt the school, the student has prevailed. Consider, for example, Layshock v. Hermitage School Dist., in which the full Third Circuit ruled that the school infringed a student's First Amendment rights by suspending him for posting an online parody of the principal. The Court ruled the same way on almost the same set of facts in J.S. v. Blue Mountain School Dist. But to date, among the federal circuit courts, only the Third Circuit has sided with the student in cases of off-campus online speech. And even those cases suggest that there are instances where a school may appropriately restrict a student's First Amendment rights. In response to J.S. and Layshock, Judge Kent Jordan of the Third Circuit stated: "The issue is whether the Supreme Court's decision in Tinker can be applied to off-campus speech. I believe it can, and no ruling coming out today is to the contrary."

The Supreme Court could easily punt in this case: decide whether Levy's Snapchat disrupted on-campus activities and leave it at that. But in this instance, the Court should not miss the opportunity to address the more significant issue of what rules should apply given the very real blurring of school boundaries – boundaries that have become even blurrier with the pandemic, as living rooms and bedrooms across the country have become virtual classrooms. As attorney Blatt suggests, it seems impossible in today's wired world to claim that school has any geographical boundaries. Allowing schools to regulate speech outside brick-and-mortar school buildings would give them the means to prevent the severest cyberbullying. On the other hand, expanding a school's reach threatens the very foundation of our Constitution.

The Supreme Court decided Tinker well before the Internet became integral to our homes. Mahanoy Area School Dist. v. B.L. offers the Court the opportunity to provide much-needed guidance to school administrators, who must strike a delicate balance between respecting First Amendment rights and protecting their students' right to learn in a conducive educational environment. Defining that guidance is the difficult part, and with three new members on the Court, it is hard to predict which way it will rule.

How do you think the Court should rule and what would your ruling be if you were a Supreme Court Justice?

