Privacy Please: Privacy Law, Social Media Regulation and the Evolving Privacy Landscape in the US

Social media regulation is a touchy subject in the United States. Congress and the White House have proposed, advocated for, and voted on various bills aimed at protecting people from data misuse and misappropriation, misinformation, harms suffered by children, and the broader implications of vast data collection. Some of the most potent concerns about social media stem from the platforms’ use and misuse of information, from the method of collection to notice of collection and use of collected information. Efforts to pass a bill regulating social media have been frustrated, primarily by the First Amendment right to free speech, and Congress has thus far failed to enact meaningful regulation of social media platforms.

The way forward may well be through privacy law. Privacy laws give people some right to control their own personhood, including their data, their right to be left alone, and how and when others see and view them. Privacy law originated in its current form in the late 1800s, with the impetus being freedom from constant surveillance by paparazzi and reporters and the right to control one’s own personal information. As technology mutated, our understanding of privacy rights grew to encompass rights in our likeness, our reputation, and our data. Current US privacy laws do not directly address social media, and a struggle is currently playing out between the platforms’ vast data collection practices, platform immunity under Section 230, and users’ private rights of privacy.

There is very little federal privacy law, and that which does exist is narrowly tailored to specific purposes and circumstances in the form of specific bills. Some states have enacted their own privacy law schemes, with California at the forefront and Virginia, Colorado, Connecticut, and Utah following in its footsteps. In the absence of a comprehensive federal scheme, privacy law is often judge-made, and it offers several private rights of action for a person whose right to be left alone has been invaded in some way. These are tort actions available for one person to bring against another for a violation of their right to privacy.

Privacy Law Introduction

Privacy law policy in the United States is premised on three fundamental personal rights to privacy:

  1. Physical privacy – the right to be free from unwanted intrusion into your person, space, and solitude.
  2. Privacy of decisions – such as decisions about sexuality, health, and child-rearing. These are the constitutional rights to privacy, typically concerned not with information but with an act that flows from the decision.
  3. Proprietary privacy – the ability to protect your information from being misused by others in a proprietary sense.

Privacy Torts

Privacy law, as it concerns the individual, gives rise to four separate tort causes of action for invasion of privacy:

  1. Intrusion upon Seclusion – One who intentionally intrudes, physically or otherwise, upon the reasonable expectation of seclusion of another is liable when the intrusion is objectively highly offensive.
  2. Publication of Private Facts – One who gives publicity to a matter concerning the private life of another is liable when the matter is not of legitimate concern to the public and its publication would be objectively highly offensive. The First Amendment provides a strong defense for publication of truthful matters when they are considered newsworthy.
  3. False Light – One who gives publicity to a matter concerning another that places the other before the public in a false light is liable when the false light would be objectively highly offensive and the actor had knowledge of, or acted in reckless disregard as to, the falsity of the publicized matter and the false light in which the other would be placed.
  4. Appropriation of Name and Likeness – One who appropriates another’s name or likeness to the defendant’s own use or benefit is liable. There is no appropriation when a person’s picture is used to illustrate a non-commercial, newsworthy article. The appropriation is usually commercial in nature but need not be; it may reach the plaintiff’s “identity” more broadly, including the reputation, prestige, social or commercial standing, public interest, or other value in the plaintiff’s likeness.

These private rights of action are currently unavailable against social media platforms because of Section 230 of the Communications Decency Act, which provides broad immunity to online providers for content posted on their platforms by third parties. Section 230 prevents any of the privacy torts from being raised against social media platforms for their users’ posts.

The Federal Trade Commission (FTC) and Social Media

Privacy law can implicate social media platforms when their practices become unfair or deceptive to the public, through investigation by the Federal Trade Commission (FTC). The FTC is the only federal agency with both consumer protection and competition jurisdiction across broad sectors of the economy, and it investigates business practices that are unfair or deceptive. Section 5 of the FTC Act, 15 U.S.C. § 45, prohibits “unfair or deceptive acts or practices in or affecting commerce” and grants the FTC broad jurisdiction over the privacy practices of businesses. A trade practice is unfair if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers themselves and is not outweighed by countervailing benefits to consumers or competition. A deceptive act or practice is a material representation, omission, or practice that is likely to mislead a consumer acting reasonably in the circumstances, to the consumer’s detriment.

Critically, there is no private right of action under the FTC Act. The FTC cannot impose fines for Section 5 violations but can obtain injunctive relief. By design, the FTC has very limited rulemaking authority, and it looks to consent decrees and procedural, long-lasting relief as its ideal remedy. The FTC pursues several types of misleading or deceptive policies and practices that implicate social media platforms: notice and choice paradigms, broken promises, retroactive policy changes, inadequate notice, and inadequate security measures. Its primary objective is to negotiate a settlement in which the company submits to certain measures of control or oversight by the FTC for a set period of time. Violations of those agreements can yield additional consequences, including steep fines and vulnerability to class action lawsuits.

As to social media platforms, the FTC has investigated misleading terms and conditions and violations of platforms’ own policies. In In re Snapchat, the platform claimed that users’ posted information disappeared completely after a certain period of time; however, through third-party apps and manipulation of users’ posts off the platform, posts could be retained. The FTC and Snapchat settled through a consent decree that subjects Snapchat to FTC oversight for 20 years.

The FTC has also investigated Facebook for violating its own privacy policy. To settle FTC charges that it violated a 2012 agreement with the agency, Facebook was ordered to pay a $5 billion penalty and to submit to new restrictions and a modified corporate structure that holds the company accountable for the decisions it makes about its users’ privacy.

Unfortunately, none of these measures directly gives individuals more power over their own privacy. Nor do these policies and processes give individuals any right to hold platforms responsible for misleading them through algorithms that use their data, or for intruding on their privacy by collecting data without offering an opt-out.

Some of the most harmful social media practices today relate to personal privacy. Some examples include the collection of personal data, the selling and dissemination of data through the use of algorithms designed to subtly manipulate our pocketbooks and tastes, collection and use of data belonging to children, and the design of social media sites to be more addictive- all in service of the goal of commercialization of data.

No comprehensive federal privacy scheme currently exists. Previous privacy bills have been few and narrowly tailored to relatively specific circumstances and topics: healthcare and medical data protection under HIPAA, protection of video rental data under the Video Privacy Protection Act, and narrow protection for children’s data under the Children’s Online Privacy Protection Act. All of these schemes are outdated and fall short of meeting the immediate need for broad protection of the widely collected and broadly utilized data flowing from social media.

Current Bills on Privacy

Prompted by requests from some of the biggest platforms, outcry from the public, and the White House’s call for federal privacy regulation, Congress appears poised to act. The 118th Congress has made privacy law a priority this term by introducing several bills related to social media privacy. At least ten bills are currently pending between the House and the Senate, addressing issues ranging from children’s data privacy to minimum ages for use and the designation of a new agency to monitor aspects of privacy.

S. 744 – The Data Care Act of 2023 aims to protect social media users’ data privacy by imposing fiduciary duties on the platforms. The original iteration of the bill was introduced in 2021 and failed to receive a vote; it was reintroduced in March of 2023 and is currently pending. Under the act, social media platforms would have duties to reasonably secure users’ data from unauthorized access, to refrain from using the data in ways that could foreseeably “benefit the online service provider to the detriment of the end user,” and to prevent disclosure of users’ data unless the receiving party is bound by the same duties. The bill authorizes the FTC and certain state officials to take enforcement actions upon breach of those duties: states would be permitted to bring their own legal actions against companies for privacy violations, and the FTC could intervene in enforcement efforts by imposing fines for violations.

H.R. 2701 – Perhaps the most comprehensive piece of legislation on the House floor is the Online Privacy Act. In 2023, the bill was reintroduced by Democrat Anna Eshoo after an earlier version of the bill failed to receive a vote and died in Congress. The Online Privacy Act aims to protect users by providing individuals rights relating to the privacy of their personal information, and it would impose privacy and security requirements on the treatment of personal information. To accomplish this, the bill establishes a new agency – the Digital Privacy Agency – which would be responsible for enforcing those rights and requirements. The new individual privacy rights are broad and include rights of access, correction, deletion, human review of automated decisions, individual autonomy, the right to be informed, and the right to impermanence, among others. This would be the most comprehensive plan to date. The establishment of a new agency tasked specifically with administering and enforcing privacy law would be incredibly powerful, and the creation of such an agency would be valuable irrespective of whether this bill passes.

H.R. 821 – The Social Media Child Protection Act is a sister bill to one by a similar name that originated in the Senate. This bill aims to protect children from the harms of social media by limiting children’s access to it. Under the bill, social media platforms would be required to verify the age of every user before that user accesses the platform, either through submission of a valid identity document or another reasonable verification method, and would be prohibited from allowing users under the age of 16 to access the platform. The bill also requires platforms to establish and maintain reasonable procedures to protect the personal data they collect from users. The bill provides a private right of action as well as state and FTC enforcement.

S. 1291 – The Protecting Kids on Social Media Act is similar to its counterpart in the House, with slightly less tenacity. It likewise aims to protect children from social media’s harms. Under the bill, platforms must verify their users’ ages, must not allow a user onto the service until their age has been verified, and must limit access to the platform for children under 12. The bill also prohibits retention and use of information collected during the age verification process. Platforms must take reasonable steps to require affirmative consent from the parent or guardian of a minor who is at least 13 years old before a minor account is created, and must reasonably allow the parent to later revoke that consent. The bill also prohibits the use of data collected from minors for algorithmic recommendations, and it would require the Department of Commerce to establish a voluntary program for secure digital age verification for social media platforms. Enforcement would be through the FTC or state action.

S. 1409 – The Kids Online Safety Act, proposed by Senator Blumenthal of Connecticut, also aims to protect minors from online harms. Like several of the bills above, it establishes fiduciary duties for social media platforms with respect to children using their sites. The bill requires platforms to act in the best interests of minors using their services, including by mitigating harms that may arise from use, sweeping in online bullying and sexual exploitation. Social media sites would be required to establish and provide access to safeguards, such as settings that restrict access to minors’ personal data, and to grant parents tools to supervise and monitor minors’ use of the platforms. Critically, the bill establishes a duty for social media platforms to create and maintain research portals for non-commercial purposes to study the effects that corporations like the platforms have on society.

Overall, these bills reflect Congress’s creative thinking and commitment to broad privacy protection for users against social media harms. I believe the establishment of a separate governing body, apart from the FTC, which lacks the powers needed to compel compliance, is a necessary step. Recourse for violations on par with the EU’s new regulatory scheme, mainly fines in the billions, could help.

Many of the bills, toward myriad aims, establish new fiduciary duties for the platforms in preventing unauthorized use of data and harms to children. There is real promise in this scheme: establishing duties of loyalty, diligence, and care owed by one party to another has a sound basis in many areas of law and would be readily understood in implementation.

The notion that platforms would need to be vigilant in knowing their content, studying its effects, and reporting those effects may do the most to create a stable future for social media.

The legal responsibility of platforms to police and enforce their own policies and terms and conditions is another opportunity to further incentivize platforms. The FTC currently investigates policies that are misleading or unfair, sweeping in the social media sites, but there may be an opportunity to make the platforms legally responsible for enforcing their own policies regarding, for example, age, hate, and inappropriate content.

What would you like to see considered in Privacy law innovation for social media regulation?

Destroying Defamation

The explosion of Fake News across social media sites is destroying plaintiffs’ ability to succeed in defamation actions. The recent proliferation of rushed journalism, online conspiracy theories, and the belief that most stories are, in fact, “Fake News” has created a desert of veracity. Widespread public skepticism about even the most mainstream social media reporting means plaintiffs struggle to convince jurors that third parties believed any reported statement to be true, proof that is necessary to establish the elements of defamation.

Fake News Today

Fake News is any journalistic story that knowingly and intentionally includes untrue factual statements. Today, many speak of Fake News as a noun in its own right. There is no shortage of examples of Fake News and its impact.

      • Pizzagate: During the 2016 presidential election, Edgar Madison Welch, 28, read a story on Facebook claiming that Hillary Clinton was running a child trafficking ring out of the basement of a pizzeria. Welch, a self-described vigilante, shot open a locked door of the pizzeria with his AR-15.
      • A study by three MIT scholars found that false news stories spread faster on Twitter than true stories, with the former being 70% more likely to be retweeted than the latter.
      • During the defamation trial of Amber Heard and Johnny Depp, a considerable number of “Fake News” reports circulated across social media platforms, particularly TikTok, Twitter, and YouTube, attacking Ms. Heard at a disproportionately higher rate than Mr. Depp.


What is Defamation?

To establish defamation, a plaintiff must show the defendant published a false assertion of fact that damages the plaintiff’s reputation. Hyperbolic language or other indications that a statement was not meant to be taken seriously are not actionable. Today’s understanding that everything on the Internet is susceptible to manipulation destroys defamation.

Because the factuality of a statement is a question of law, a plaintiff must first convince a judge that the offending statement is fact and not opinion. Courts often find that Internet and social media statements are hyperbole or opinion. If a plaintiff succeeds in persuading the judge, then the issue of whether the statement defamed the plaintiff heads to the jury. A jury faced with a defamation claim must determine whether the statement of fact harmed the plaintiff’s reputation or livelihood to the extent that it caused the plaintiff to incur damages. The prevalence of Fake News creates another layer of difficulty for the Internet plaintiff, who must convince the jury that third parties believed the statement to be true.

Defamation’s Slow and Steady Erosion

Since the 1960s, the judiciary has limited plaintiffs’ ability to succeed in defamation claims. The decisions in New York Times v. Sullivan and Gertz v. Robert Welch, Inc. increased the difficulty for public figures, and those with limited-purpose public figure status, to succeed by requiring them to prove actual malice on the part of the defendant, a standard higher than the mere negligence standard that applies to individuals who are not of community interest.

The rise of Internet use, mainly social media, presents plaintiffs with yet another hurdle. Plaintiffs can only succeed if the challenged statement is fact, not opinion. However, judges often find that statements made on the Internet are opinions, not facts. The combined effect of Supreme Court limitations on proof and the increased belief that social media posts are mostly opinion has limited plaintiffs’ ability to succeed in defamation claims.

Destroying Defamation

If the Supreme Court and social media have eroded defamation, Fake News has destroyed it. Today, convincing a jury that a false statement purporting to be fact has defamed a plaintiff is difficult given the dual issues of society’s pervasive mistrust of the media and the understanding that information on the Internet is generally opinion, not fact. Fake News sows confusion and makes it almost impossible for jurors to believe any statement has the credibility necessary to cause harm.

To be clear, in some instances Fake News is so intolerable that a jury will find for the plaintiffs. A Connecticut jury found conspiracy theorist Alex Jones liable for defamation based on his assertion that the government had faked the Sandy Hook shootings.

But often, plaintiffs are unsuccessful where the challenged language is conflated with untruths. Fox News successfully defended itself against a lawsuit claiming that it had aired false and deceptive content about the coronavirus, even though its reporting was, in fact, untrue.

Similarly, a federal judge dismissed a defamation case against Fox News for Tucker Carlson’s report that the plaintiff had extorted then-President Donald Trump. In reaching its conclusion, the judge observed that Carlson’s comments were rhetorical hyperbole and that the reasonable viewer “arrive[s] with the appropriate amount of skepticism.” Reports of media success in defending against defamation claims further fuel media mistrust.

The current polarization caused by identity politics is furthering the tendency for Americans to mistrust the media. Sarah Palin announced that the goal of her recent defamation case against The New York Times was to reveal that the “lamestream media” publishes “fake news.”

If jurors believe that no reasonable person could credit a challenged statement as accurate, they cannot find that the allegedly defamatory statement caused harm. An essential element of defamation is that the defendant’s remarks damaged the plaintiff’s reputation. The large number of people who believe the news is fake, the media’s rush to publish, and external attacks on credible journalism have made truth itself contested. The potential for defamatory harm is minimal when every news story is questionable. Ultimately, the presence of Fake News is a blight on the tort of defamation and, like the credibility of present-day news organizations, will erode it to the point of irrelevance.

Is there any hope for a world without Fake News?


Jonesing For New Regulations of Internet Speech

From claims that the moon landing was faked to Area 51, the United States loves its conspiracy theories. In fact, a study sponsored by the University of Chicago found that more than half of Americans believe at least one conspiracy theory. While this is not a new phenomenon, the increasing use of and reliance on social media has allowed misinformation and harmful ideas to spread with an ease that was not possible even twenty years ago.

Individuals with a large platform can express an opinion that harms the people personally implicated in the ‘information’ being spread. Presently, a plaintiff’s best option for challenging harmful speech is a claim for defamation. The inherent problem is that opinions are protected by the First Amendment and are thus not actionable as defamation.

This leaves injured plaintiffs with limited remedies, because statements made on the internet are more likely to be seen as opinion. The internet has created a gap: injured plaintiffs with no available remedy. With this brave new world of communication, interaction, and the spread of information by anyone with a platform comes a need to ensure that injuries caused by this speech have legal recourse.

Recently, Alex Jones lost a defamation claim and was ordered to pay $965 million to the families of the Sandy Hook victims after claiming that the 2012 Sandy Hook shooting was a “hoax.” Although the families prevailed at trial, the statements that were the subject of the suit do not fit neatly into the well-established law of defamation, which makes reversal on appeal likely.

The elements of defamation require that the defendant publish a false statement purporting to be true, resulting in some harm to the plaintiff. However, just because a statement is false does not mean the plaintiff can prove defamation because, as the Supreme Court has recognized, false statements still receive certain First Amendment protections. In Milkovich v. Lorain Journal Co., the Court held that “imaginative expression” and “loose, figurative, or hyperbolic language” are protected by the First Amendment.

The characterization of something as a “hoax” has been held by courts to fall into this category of protected speech. In Montgomery v. Risen, a software developer brought a defamation action against an author who claimed that the plaintiff’s software was a “hoax.” The D.C. Circuit held that characterizing something as an “elaborate and dangerous hoax” is hyperbolic speech, which creates no basis for liability. This holding has been mirrored by several courts, including the District Court of Kansas in Yeagar v. National Public Radio, the District Court of Utah in Nunes v. Rushton, and the Superior Court of Delaware in Owens v. Lead Stories, LLC.

The other statements Alex Jones made regarding Sandy Hook are also hyperbolic language. These statements include: “[i]t’s as phony as a $3 bill”, “I watched the footage, it looks like a drill”, and “my gut is… this is staged. And you know I’ve been saying the last few months, get ready for big mass shootings, and then magically, it happens.” While these statements are offensive and cruel to the grieving families, it is difficult to characterize them as assertions objectively claimed to be true. ‘Phony’, ‘my gut is’, ‘looks like’, and ‘magically’ qualify his statements as subjective opinion based on his interpretation of the events that took place.

It is indisputable that Alex Jones’s statements caused harm to these families. They have been subjected to harassment, online abuse, and death threats from his followers. However, no matter how harmful these statements are, that harm alone does not make them defamation. Despite this, a reasonable jury was so appalled by this conduct that it found for the plaintiffs. This is essentially reverse jury nullification: the jury decided that Jones was culpable and should be held legally responsible even if there is no adequate basis for liability.

The jury’s determination demonstrates that current legal remedies are inadequate to regulate potentially harmful speech that can spread like wildfire on the internet. The influence that a person like Alex Jones has over his followers establishes a need for new or updated laws that hold public figures to a higher standard even when they are expressing their opinion.

A possible starting point for regulating harmful internet speech at the federal level is the Commerce Clause, which allows Congress to regulate the instrumentalities of interstate commerce. The internet, by its design, is an instrumentality of interstate commerce, enabling the communication of ideas across state lines.

Further, the Federal Anti-Riot Act, which was passed in 1968 and used to suppress civil rights protestors, might be an existing law that can serve this purpose. The law makes it a felony to use a facility of interstate commerce to (1) incite a riot; or (2) organize, promote, encourage, participate in, or carry on a riot. The act defines a riot as:

 [A] public disturbance involving (1) an act or acts of violence by one or more persons part of an assemblage of three or more persons, which act or acts shall constitute a clear and present danger of, or shall result in, damage or injury to the property of any other person or to the person of any other individual or (2) a threat or threats of the commission of an act or acts of violence by one or more persons part of an assemblage of three or more persons having, individually or collectively, the ability of immediate execution of such threat or threats, where the performance of the threatened act or acts of violence would constitute a clear and present danger of, or would result in, damage or injury to the property of any other person or to the person of any other individual.

Under this definition, we might have a basis for holding Alex Jones accountable for organizing, promoting, or encouraging a riot through a facility (the internet) of interstate commerce. The acts of his followers in harassing the families of the Sandy Hook victims might constitute a public disturbance within this definition because it “result[ed] in, damage or injury… to the person.” While this demonstrates one potential avenue of regulating harmful internet speech, new laws might also need to be drafted to meet the evolving function of social media.

In the era of the internet, public figures have an unprecedented ability to spread misinformation and incite lawlessness. This is true even if their statements would typically constitute an opinion because the internet makes it easier for groups to form that can act on these ideas. Thus, in this internet age, it is crucial that we develop a means to regulate the spread of misinformation that has the potential to harm individual people and the general public.

States are ready to challenge Section 230

On January 8, 2021, Twitter permanently suspended @realDonaldTrump. The decision followed an initial warning to the then-president and conformed to Twitter’s published standards as defined in its public interest framework. The day before, Meta (then Facebook) restricted President Trump’s ability to post content on Facebook or Instagram. Both companies cited President Trump’s posts praising those who violently stormed the U.S. Capitol on January 6, 2021 in support of their decisions.

Members of the Texas and Florida legislatures, together with their governors, were seemingly enraged that these sites would silence President Trump’s voice. In response, each state immediately passed a law aiming to limit the scope of social media sites. Although substantively different, the Texas and Florida laws are theoretically the same: both seek to punish social media sites that moderate conservative content, which the laws’ proponents argue the platforms disproportionately silence, regardless of whether the posted content violates the sites’ published standards.

Shortly after each law’s adoption, two tech advocacy groups, NetChoice and the Computer & Communications Industry Association, filed suits in federal district courts challenging the laws as violative of the First Amendment. Each case has made its way through the federal courts on procedural grounds: the Eleventh Circuit upheld a lower court’s preliminary injunction prohibiting Florida from enforcing its statute until the case is decided on the merits, while the Fifth Circuit overruled a lower court’s preliminary injunction against the Texas law. The Texas dispute reached the Supreme Court of the United States, which, by a vote of 5-4, reinstated the injunction. The Supreme Court’s decision made clear that these cases are headed to the Court on the merits.

Don’t Throw Out the Digital Baby with the Cyber Bathwater: The Rest of the Story

This article is in response to “Is Cyberbullying the Newest Form of Police Brutality?”, which discussed law enforcement’s use of social media to apprehend people. The article provided a provocative topic, as seen by the number of comments.

I believe that discussion is healthy for society; people are entitled to their feelings and to express their beliefs. Each person has their own unique life experiences that provide a basis for their beliefs and perspectives on issues. I enjoy discussing a topic with someone because I learn about their experiences and new facts that broaden my knowledge. Developing new relationships and connections is so important. Relationships and new knowledge may change perspectives or at least add to understanding each other better. So, I ask readers to join the discussion.

My perspectives were shaped in many ways. I grew up hearing Paul Harvey’s radio broadcast “The Rest of the Story.” His radio segment provided more information on a topic than the brief news headline may have provided. He did not imply that the original story was inaccurate, just that other aspects were not covered. In his memory, I will attempt to do the same by providing you with more information on law enforcement’s use of social media. 

“Is Cyberbullying the Newest Form of Police Brutality?”

The article title served its purpose by grabbing our attention. Neither cyberbullying nor police brutality is acceptable. Cyberbullying is typically envisioned as teenage bullying taking place over the internet. The U.S. Department of Health and Human Services states that “Cyberbullying includes sending, posting, or sharing negative, harmful, false, or mean content about someone else. It can include sharing personal or private information about someone else causing embarrassment or humiliation.” Similarly, police brutality occurs when law enforcement (“LE”) officers use illegal and excessive force in a situation that is unreasonable, potentially resulting in a civil rights violation or a criminal prosecution.

While the article is accurate that 76% of the surveyed police departments use social media for crime-solving tips, the rest of the story is that even more departments use social media for other purposes: 91% use it to notify the public about safety concerns, 89% for community outreach and citizen engagement, and 86% for public relations and reputation management. Broad restrictions should not be implemented, because they would negate all the positive community interactions that increase transparency.

Transparency 

In an era where the public is demanding more transparency from LE agencies across the country, how is the disclosure of the public’s information held by the government considered “cyberbullying” or “police brutality”? Local, state, and federal governments are subject to Freedom of Information laws requiring agencies to provide information to the public on their websites or to release documents within days of a request, or else face civil liability.

New Jersey Open Public Records

While the New Jersey Supreme Court has not decided whether arrest photographs are public, the New Jersey Government Records Council (“GRC”) decided in Melton v. City of Camden, GRC 2011-233 (2013), that arrest photographs are not public records under the NJ Open Public Records Act (“OPRA”) because of Governor Whitman’s Executive Order 69, which exempts fingerprint cards, plates, photographs, and similar criminal investigation records from public disclosure. It should be noted that GRC decisions are not precedential and therefore not binding on any court.

However, under OPRA, specifically 47:1A-3 (Access to Records of Investigation in Progress), specific arrest information is public and must be disclosed within 24 hours of a request, including the:

  • Date, time, location, type of crime, and type of weapon;
  • Defendant’s name, age, residence, occupation, marital status, and similar background information;
  • Identity of the complaining party;
  • Text of any charges or indictment, unless sealed;
  • Identity of the investigating and arresting officer and agency, and the length of the investigation;
  • Time, location, and circumstances of the arrest (resistance, pursuit, use of weapons); and
  • Bail information.

For years, even before Melton, I believed that an arrestee’s photograph should not be released to the public. As a police chief, I refused numerous media requests for arrestee photographs, protecting arrestees’ rights and honoring the presumption of innocence. Even though they have been arrested, arrestees have not yet received due process in court.

New York’s Open Public Records

In New York, under the Freedom of Information Law (“FOIL”), Public Officers Law, Article 6, §89(2)(b)(viii) (general provisions relating to access to records; certain cases), the disclosure of LE arrest photographs constitutes an unwarranted invasion of an individual’s personal privacy unless the public release would serve a specific LE purpose and the disclosure is not prohibited by law.

California’s Open Public Records

Under the California Public Records Act (“CPRA”), a person has a statutory right to inspect or obtain copies of public records unless a record is exempt from disclosure. Arrest photographs are included in arrest records, along with other personal information, including the suspect’s full name, date of birth, sex, physical characteristics, occupation, time of arrest, charges, bail information, any outstanding warrants, and parole or probation holds.

Therefore, under New York and California law, the blanket posting of arrest photographs is already prohibited.

Safety and Public Information

Recently, in Americans for Prosperity Foundation v. Bonta, the compelled donor disclosure case, the Court invalidated California’s disclosure requirement on First Amendment grounds. Justice Alito’s concurring opinion, however, briefly addressed the parties’ personal safety concerns: supporters had been subjected to bomb threats, protests, stalking, and physical violence. He cited Doe v. Reed, which upheld disclosures containing home addresses under Washington’s Public Records Act despite the growing risks posed by anyone accessing the information with a computer.

Satisfied Warrant

I am not condoning the Manhattan Beach Police Department’s error of posting information on a satisfied warrant, along with a photograph, in its “Wanted Wednesday” series in 2020. However, the disclosed information may have been public information under the CPRA then and even now. On July 23, 2021, Governor Newsom signed a law adding Section 13665 to the California Penal Code, prohibiting LE agencies from posting photographs of an arrestee accused of a non-violent crime on social media unless:

  • The suspect is a fugitive or an imminent threat, and disseminating the arrestee’s image will assist in the apprehension.
  • There is an exigent circumstance and an urgent LE interest.
  • A judge orders the release or dissemination of the suspect’s image based on a finding that the release or dissemination is in furtherance of a legitimate LE interest.

The critical error was that the posting stated the warrant was active when it was not. A civil remedy exists and was used by the injured party to reach a settlement for damages. Additionally, it could be argued that the agency’s actions were not the proximate cause of the harm, which was inflicted by vigilantes.

Scope of Influence

LE’s reliance on the public’s help did not start with social media or internet websites. The article pointed out that “Wanted Wednesday” had a mostly local following of 13,600. This raises the question of whether there is much difference between the famous “Wanted” posters of the Wild West, or the “Top 10 Most Wanted” posters the Federal Bureau of Investigation (“FBI”) used to distribute to post offices, police stations, and businesses to locate fugitives. It can be argued that this exposure was strictly localized. However, the weekly TV show America’s Most Wanted, made famous by John Walsh, aired from 1988 to 2013, highlighting fugitive cases nationally. The show claims it helped capture over 1,000 criminals through its tip line. However, national media publicity can be counterproductive, generating so many false leads that credible ones are obscured.

The FBI website contains pages for Wanted People, Missing People, and Seeking Information on crimes. “CAPTURED” labels are added to photographs showing the results of the agency’s efforts. Local LE agencies should follow FBI practices. I would agree with the article that social media and websites should be updated; however, I don’t agree that the information must be removed because it is available elsewhere on the internet.

Time

Vernon Geberth, the leading police homicide investigation instructor, believes time is an investigator’s worst enemy. Eighty-five percent of abducted children are killed within the first five hours. Almost all are killed within the first twenty-four hours. Time is also critical because, for each hour that passes, the distance a suspect’s vehicle can travel expands by seventy-five miles in either direction. In five hours, the search area can grow to more than 17,000 square miles. Like Amber Alerts, social media can be used to quickly transmit information to people across the country in time-sensitive cases.
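That search-area figure is easy to sanity-check. The 75-miles-per-hour growth rate is the article’s own number; treating the reachable region as a circle centered on the crime scene is an assumption of this back-of-the-envelope sketch:

```python
import math

MILES_PER_HOUR = 75  # the article's figure for how fast the search radius grows


def search_area(hours: float) -> float:
    """Area, in square miles, of a circle whose radius grows 75 miles per hour."""
    radius = MILES_PER_HOUR * hours
    return math.pi * radius ** 2


# Even a one-hour head start yields a circle of more than 17,000 square miles;
# a five-hour head start multiplies that area twenty-five-fold.
print(round(search_area(1)))  # 17671
print(round(search_area(5)))  # 441786
```

The area grows with the square of the elapsed time, which is why investigators treat every passing hour as the enemy.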

Live-Streaming Drunk Driving Leads to an Arrest

Whitney Beall, a Florida woman, used a live-streaming app to show herself drinking at a bar and then getting into her vehicle. Viewers dialed 911, and a tech-savvy officer opened the app, determined her location, and pulled her over. She was arrested after failing a sobriety test. After pleading guilty to driving under the influence, she was sentenced to 10 days of weekend work release, 150 hours of community service, probation, and a license suspension. In 2019, 10,142 lives were lost to alcohol-impaired driving crashes.

Family Advocating

Social media is not limited to LE. It also provides a platform for victims’ families to keep attention on their cases. The father of a seventeen-year-old victim created a series of Facebook Live videos about a 2011 murder, resulting in the arrest of Charles Garron, who was sentenced to a fifty-year prison term.

Instagram Selfies with Drugs, Money and Stolen Guns 

Police in Palm Beach County charged a nineteen-year-old man with 142 felonies, including possession of a weapon by a convicted felon, while investigating burglaries and jewel thefts in senior citizen communities. An officer found his Instagram account with incriminating photographs. A search warrant was executed, seizing stolen firearms and $250,000 in stolen property from over forty burglaries.

Bank Robbery Selfies


Police received a tip and located a 2015 social media posting by John E. Mogan II showing himself with wads of cash. He was charged with robbing an Ashville, Ohio bank. He pled guilty and was sentenced to three years in prison. According to news reports, Mogan had previously served prison time for another bank robbery.

Food Post Becomes the Smoking Gun

LE used Instagram to identify an ID thief who posted photographs of his dinner at a high-end steakhouse with a confidential informant (“CI”). The man claimed he had 700,000 stolen identities and provided the CI a flash drive of stolen identities. The agents linked the flash drive to a “Troy Maye,” whom the CI identified from Maye’s profile photograph. Authorities executed a search warrant on his residence and located flash drives containing the personal identifying information of thousands of ID theft victims. Nathaniel Troy Maye, a 44-year-old New York resident, was sentenced to sixty-six months in federal prison after pleading guilty to aggravated identity theft.

 

Wanted Man Turns Himself in After Facebook Challenge With Donuts

A man began trolling the Redford Township Police during a Facebook Live community update. It was determined that he was a 21-year-old wanted on a probation violation for leaving the scene of a DWI collision. When asked to turn himself in, he challenged the PD: if the post got 1,000 shares, he would bring in donuts. The PD took the challenge. The post went viral, reaching that mark within an hour and ultimately acquiring over 4,000 shares. He kept his word and appeared with a dozen donuts. He faced 39 days in jail and had other outstanding warrants.

The examples in this article were readily available on the internet and on multiple news websites, along with photographs.

Under state Freedom of Information laws, the public has a statutory right to know what enforcement actions LE is taking. Likewise, the media exercise their First Amendment rights to information daily across the country when publishing news. Cyber journalists are entitled to the same information when publishing news on the internet and social media. Traditional news organizations have adapted to online news to keep a share of the news market. LE agencies now live-stream press conferences to communicate directly with the communities they serve.

Therefore, the positive uses of social media by LE should not be thrown out with the bathwater when legal remedies exist for cases in which damages are caused.

“And now you know…the rest of the story.”

Private or not private, that is the question.

Section 230 of the Communications Decency Act (CDA) protects private online companies from liability for content posted by others. This immunity also grants internet service providers the freedom to regulate what is posted on their sites. What has faced much criticism of late, however, is social media’s immense power to silence any voices the platform CEOs disagree with.

Section 230(c)(2), known as the Good Samaritan clause, states that no provider shall be held liable for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

When considered in the context of a 1996 understanding of internet influence (the year the CDA was enacted), this law might seem perfectly reasonable. Fast forward 25 years, though: given how massively influential social media has become on society and the spread of political information, there is now a strong demand for a repeal, or at the very least a review, of Section 230.

The Good Samaritan clause is what shields Big Tech from legal complaint. The law does not define obscene, lewd, lascivious, filthy, harassing, or excessively violent, and “otherwise objectionable” leaves the providers’ room for discretion all the more open-ended. The issue at the heart of much criticism of Big Tech is that the censorship companies such as Facebook, Twitter, and YouTube (owned by Google) impose on particular users is not fairly exercised; many conservatives feel the platforms do not apply their policies evenhandedly.

Ultimately, there is little argument around the fact that social media platforms like Facebook and Twitter are private companies, therefore curbing any claims of First Amendment violations under the law. The First Amendment of the US Constitution only prevents the government from interfering with an individual’s right to free speech. There is no constitutional provision that dictates any private business owes the same.

Former President Trump’s recent class action lawsuits against Facebook, Twitter, Google, and each of their CEOs, however, challenge the characterization of these entities as private.

In response to the January 6th Capitol takeover by Trump supporters, Facebook and Twitter suspended the accounts of President Trump, the then-sitting president of the United States.

The justification was that President Trump violated their rules by inciting violence and encouraging an insurrection following the disputed 2020 election results. In the midst of the unrest, Twitter, Facebook, and Google also removed a video posted by Trump in which he called for peace and urged protestors to go home. The explanation given was that “on balance we believe it contributes to, rather than diminishes the risk of ongoing violence,” because the video also doubled down on the belief that the election was stolen.

Following long-standing contentions with Big Tech throughout his presidency, the main argument in the lawsuit is that the tech giants Facebook, Twitter and Google, should no longer be considered private companies because their respective CEOs, Mark Zuckerberg, Jack Dorsey, and Sundar Pichai, actively coordinate with the government to censor politically oppositional posts.

Those who support Trump probably wish to believe this case has legal standing.

Anyone else who shares concerns about the almost omnipotent power of Silicon Valley may admit that Trump makes a valid point. But legally, deep down, it might feel like a stretch. Could it be? Should it be? Maybe. But will Trump see the outcome he is looking for? The initial honest answer was “probably not.”

However, on July 15, 2021, White House press secretary Jen Psaki informed the public that the Biden administration is in regular contact with Facebook to flag “problematic posts” regarding “disinformation” about Covid-19 vaccinations.

Wait….what?!? The White House is in communication with social media platforms to determine what the public is and isn’t allowed to hear regarding vaccine information? Or “disinformation” as Psaki called it.

Conservative legal heads went into a spin. Is this allowed? Or does this strengthen Trump’s claim that social media platforms are working as third-party state actors?

If it is determined that social media is in fact acting as a strong-arm agent for the government, regarding what information the public is allowed to access, then they too should be subject to the First Amendment. And if social media is subject to the First Amendment, then all information, including information that questions, or even completely disagrees with the left-lean policies of the current White House administration, is protected by the US Constitution.

Referring back to the language of the law, Section 230(c)(2) requires that actions to restrict access to information be taken in good faith. Taking an objective look at some of the posts that are removed from Facebook, Twitter, and YouTube, along with many of the posts that are not removed, raises the question of how much “good faith” is truly exercised. When a former president of the United States is still blocked from social media, but the Iranian leader Ali Khamenei is allowed to post what appears to be nothing short of a threat to that same president’s life, it can certainly make you wonder. Or when illogical insistence on unquestioned mass emergency vaccinations, now with continued mask wearing, is rammed down our throats, but a video showing one of the creators of the mRNA vaccine expressing his doubts about the vaccine’s safety for the young is removed from YouTube, it ought to make everyone question whose side Big Tech is really on. Are they really in the business of allowing populations to make informed decisions of their own, gaining information from a public forum of ideas? Or are they working on behalf of government actors to push an agenda?

One way or another, the courts will decide, but Trump’s class action lawsuit could be a pivotal moment in the future of Big Tech world power.

How Defamation and Minor Protection Laws Ultimately Shaped the Internet

The Communications Decency Act (CDA) was originally enacted with the intention of shielding minors from indecent and obscene online material. Despite those origins, Section 230 of the CDA is now commonly used as a broad legal safeguard for social media platforms against liability for content posted on their sites by third parties. Interestingly, the reasoning behind this safeguard arises both from defamation common law and from constitutional free speech law. As the internet has grown, this legal safeguard has attracted increasing criticism. But is the legislation actually undesirable? Many would disagree, as Section 230 contains “the 26 words that created the internet.”

 

Origin of the Communications Decency Act

The CDA was introduced and enacted as an attempt to shield minors from obscene or indecent content online. Although parts of the Act were later struck down for First Amendment free speech violations, the Court left Section 230 intact. The creation of Section 230 was influenced by two landmark defamation decisions.

The first case, decided in 1991, involved an internet site that hosted around 150 online forums. A claim was brought against the internet provider when a columnist on one of the forums posted a defamatory comment about his competitor. The competitor sued the online distributor for the published defamation. The court categorized the internet service provider as a distributor because it did not review any content of the forums before the content was posted to the site. As a distributor, there was no legal liability, and the case was dismissed.

 

Distributor Liability

Distributor liability refers to the limited legal consequences that a distributor is exposed to for defamation. Common examples of distributors are bookstores and libraries. The theory behind distributor liability is that it would be impossible for distributors to moderate and censor every piece of content they disperse, because of the sheer volume and the impossibility of knowing whether something is false.

The second case that influenced the creation of Section 230 was Stratton Oakmont, Inc. v. Prodigy Servs. Co., in which the court used publisher liability theory to find the internet provider liable for third-party defamatory postings published on its site. The court deemed the website a publisher because it moderated and deleted certain posts, even though there were far too many postings a day to regulate each one.

 

Publisher Liability

Under common law principles, a person who publishes a third party’s defamatory statement bears the same legal responsibility as the creator of that statement. This liability is often referred to as “publisher liability,” and it rests on the theory that a publisher has the knowledge, opportunity, and ability to exercise control over the publication. For example, a newspaper publisher can face legal consequences for the content printed within it. The Stratton Oakmont decision was significant because it meant that if a website attempted to moderate certain posts, it would be held liable for all posts.

 

Section 230’s Creation

In response to the Stratton Oakmont case and the ambiguous court decisions regarding internet service providers’ liability, members of Congress introduced an amendment to the CDA that later became Section 230. The amendment was specifically introduced and passed with the goal of encouraging the development of unregulated free speech online by relieving internet providers of liability for content on their sites.

 

Text of the Act: Subsection (c)(1)

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

 Section 230 further provides that…

“No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.”

The language above removes legal consequences for providers arising from third-party content posted on their forums. Courts have interpreted this subsection as providing broad immunity to online platforms from suits over the content of third parties. Because of this, Section 230 has become the principal legal safeguard against lawsuits over a site’s content.

 

The Good

  • Section 230 can be viewed as one of the most important pieces of legislation protecting free speech online. One of its unique aspects is that it essentially extends free speech protection to private, non-governmental companies.
  • Without CDA 230, the internet would be a very different place. This section influenced some of the internet’s most distinctive characteristics: it promotes free speech and offers the ability for worldwide connectivity.
  • CDA 230 does not fully eliminate liability or court remedies for victims of online defamation. Rather, it makes only the creator liable for their speech, instead of both the creator and the publisher.

 

 

The Bad

  • Because of the legal protections Section 230 provides, social media networks have less incentive to regulate false or deceptive posts. Deceptive online posts can have an enormous impact on society: false posts can alter election results or fuel dangerous misinformation campaigns, like the QAnon conspiracy theory and the anti-vaccination movement.
  • Section 230 is twenty-five years old and has not been updated to match the internet’s extensive growth.
  • Big Tech companies have been left largely unregulated with respect to their online marketplaces.

 

The Future of 230

While Section 230 is still successfully invoked by social media platforms, concerns over the aging legislation have mounted. Just recently, Justice Thomas, known for being a quiet Justice, wrote a concurring opinion articulating his view that the government should regulate content providers as common carriers, like utility companies. What implications could that have for the internet? With the growing criticism surrounding Section 230, will Congress finally attempt to fix this legislation? If not, will the Supreme Court be left to tackle the problem itself?

Off Campus Does Still Exist: The Supreme Court Decision That Shaped Students Free Speech

We currently live in a world centered around social media. I grew up in a generation where social media apps like Facebook, Snapchat, and Instagram had just become popular. I remember a time when Facebook was limited to college students, and we did not communicate back and forth with pictures that simply disappear. Today, many students across the country use social media sites as a way to express themselves, but when does that expression go too far? Is it legal to bash other students on social media? What about teachers, after receiving a bad test score? Does it matter who sees the post or where it was written? What if the post disappears after a few seconds? These are all questions we previously had no answer to. Thankfully, in the past few weeks the Supreme Court has guided us in answering them. In Mahanoy Area School District v. B.L., the Supreme Court decided how far a student’s right to free speech extends and how much control a school district has in restricting a student’s off-campus speech.

The question presented in Mahanoy Area School District v. B.L. was whether a public school has the authority to discipline a student over something they posted on social media while off campus. The student in this case was a girl named Levy, a sophomore in the Mahanoy Area School District. Levy was hoping to make the varsity cheerleading team that year, but unfortunately she did not. She was very upset when she found out a freshman got the position instead and decided to express her anger on social media. Levy was in town with a friend at a local convenience store when she sent “F- School, F- Softball, F- Cheerleading, F- Everything” to her list of friends on Snapchat, in addition to posting it to her Snapchat story. One of those friends screenshotted the post and sent it to the cheerleading coach. The school district investigated the post, and Levy was suspended from cheerleading for one year. Levy and her parents were extremely upset with this decision, and the result was a lawsuit that would shape students’ right to free speech for a long time.

In the lawsuit, Levy and her parents claimed that her cheerleading suspension violated her First Amendment right to free speech. They sued the Mahanoy Area School District under 42 U.S.C. § 1983, claiming that (1) her suspension from the team violated the First Amendment; (2) the school and team rules were overbroad and viewpoint discriminatory; and (3) those rules were unconstitutionally vague. The district court granted summary judgment in favor of Levy, stating that the school had violated her First Amendment rights. The U.S. Court of Appeals for the Third Circuit affirmed, and the Mahanoy School District petitioned for a writ of certiorari. Finally, the case was heard by the Supreme Court.

The Mahanoy School District argued that the Court’s previous ruling in Tinker v. Des Moines Independent Community School District acknowledges that public schools do not possess absolute authority over students, and that students retain First Amendment speech protections at school so long as their expression does not become substantially disruptive to the proper functioning of the school. Mahanoy emphasized that the Court intended Tinker to extend beyond the schoolhouse gates and cover not just on-campus speech, but any speech likely to result in on-campus harm. Levy countered that the Tinker ruling applies only to speech on school grounds.

In an 8-1 decision, the Court ruled against Mahanoy. The Supreme Court held that the Mahanoy School District violated Levy’s First Amendment rights by punishing her for posting a vulgar story on her Snapchat while off campus. The Court ruled that the speech did not amount to severe bullying, nor was it substantially disruptive to the school itself. The Court also noted that the post was visible only to her friends list on Snapchat and would disappear within 24 hours. It is not the school’s job to act as a parent, but it is its job to make sure actions off campus will not endanger the school. The Supreme Court also stated that although the student’s expression was unfavorable, failing to protect students’ opinions would limit their ability to think for themselves.

It is remarkably interesting to think about how the minor facts of this case determined the ruling. What if the message had been posted on Facebook? One factor that helped the Court make its decision was that the story was visible only to about 200 of Levy’s Snapchat friends and would disappear within a day. One can assume that if Levy had made this a Facebook status visible to all, with no posting time frame, the Court could have ruled very differently. Where the Snapchat post was uploaded was another major factor. Based on the Tinker ruling, if Levy had posted this on school grounds, the Mahanoy School District could have had the authority to discipline her.

Technology is advancing each day, and I am sure that in the future, as more social media platforms emerge, the Court will have to set new precedent. I believe that the Supreme Court made the right decision in this case. I feel that speech which is detrimental to another individual should be monitored, whether it is off-campus or on-campus speech, regardless of the platform on which it is posted. In Levy’s case no names were listed; she was expressing frustration at not making a team. I do believe that her speech was vulgar, but I do not believe that the school, or any other student, suffered severe detriment from the post.

If you were serving as a Justice on the Supreme Court, would you rule against the Mahanoy School District? Do you believe it matters which platform the speech is posted on? What about the location from which it was posted?

Is Cyberbullying the Newest Form of Police Brutality?

Police departments across the country are calling keyboard warriors into action to help them solve crimes…but at what cost?

In a survey of 539 police departments in the U.S., 76% of departments said that they used their social media accounts to solicit tips on crimes. Departments post “arrested” photos to celebrate arrests, surveillance footage for suspect identification, and some even post themed wanted posters, like the Harford County Sheriff’s Office.

The process for using social media as an investigative tool is dangerously simple and the consequences can be brutal. A detective thinks posting on social media might help an investigation, so the department posts a video or picture asking for information. The community, armed with full names, addresses, and other personal information, responds with some tips and a lot of judgmental, threatening, and bigoted comments. Most police departments have no policy for removing posts after information has been gathered or cases are closed, even if the highlighted person is found to be innocent. A majority of people who are arrested are not even convicted of a crime.

Law enforcement’s use of social media in this way threatens the presumption of innocence, creates a culture of public humiliation, and often results in a comment section of bigoted and threatening comments.

On February 26, 2020, the Manhattan Beach Police Department posted a mugshot of Matthew Jacques on their Facebook and Instagram pages for their “Wanted Wednesday” social media series. The pages have 4,500 and 13,600, mostly local, followers, respectively. The post equated Matthew to a fugitive and commenters responded publicly with information about where he worked. Matthew tried to call off work out of fear of a citizen’s arrest. The fear turned out to be warranted when two strangers came to find him at his workplace. Matthew eventually lost his job because he was too afraid to return to work.

You may be thinking this is not a big deal. This guy was probably wanted for something really bad and the police needed help. After all, the post said the police had a warrant. Think again.

There was no active warrant for Matthew at the time; his only (already resolved) warrant stemmed from taking too long to schedule remedial classes after a 2017 DUI. Matthew was publicly humiliated by the local police department, which even refused to remove the social media posts after being notified of the truth. The result?

Matthew filed a complaint against the department for defamation (as well as libel per se and false light invasion of privacy). Typically, defamation requires the plaintiff to show:

1) a false statement purporting to be fact;
2) publication or communication of that statement to a third person;
3) fault amounting to at least negligence; and
4) damages, or some harm caused to the person or entity who is the subject of the statement.

Here, the department made a false statement: that there was a warrant. It published that statement on its social media pages, satisfying the second element. It did not check readily available public records showing that Matthew had no warrant. Finally, Matthew lived in fear and lost his job. Clearly, he was harmed.

The police department claimed its postings were protected by the California Constitution, governmental immunity, and the First Amendment. Fortunately, the court denied the department’s anti-SLAPP motion. More than a year after the original posts went up, the department took them down and settled the lawsuit with Matthew.

Some may think that Matthew’s case is an anomaly and that the negative attention is usually warranted, perhaps even socially beneficial, because it further disincentivizes criminal activity through humiliation and social stigma. However, because most arrests don’t result in convictions, many of the police’s cyberbullying victims are likely innocent. Even for those who are guilty, leaving these posts up raises the barrier to societal re-entry, which can increase recidivism rates: a negative digital record makes finding jobs and housing more difficult. Many commenters simply assume the highlighted individual’s guilt and take to their keyboards to shame them.

Here’s one example of a post and comment section from the Toledo Police Department Facebook page:

Unless departments change their social media use policies, they will continue to face defamation lawsuits and continue to further the degradation of the presumption of innocence.

Police departments should discontinue the use of social media in the humiliating ways described above. At the very least, they should consider using this tactic only for violent, felonious crimes. Some departments have already changed their policies.

The San Francisco Police Department has stopped posting mugshots for criminal suspects on social media. According to Criminal Defense Attorney Mark Reichel, “The decision was made in consultation with the San Francisco Public Defender’s Office who argued that the practice of posting mugshots online had the potential to taint criminal trials and follow accused individuals long after any debt to society is paid.” For a discussion of some of the issues social media presents to maintaining a fair trial, see Social Media, Venue and the Right to a Fair Trial.

Do you think police departments should reconsider their social media policies?

How One Teenager’s Snapchat Shaped Students’ Off-Campus Free Speech Rights

Did you ever fail to make your high school sports team or get a bad grade on an exam? What did you do to blow off steam? Did you talk to your friends or parents about it, or write in your journal? When I was in high school, some of my classmates would use Twitter or Snapchat to express themselves. But smartphone and social media use was much lower then than it is today: according to Pew Research Center, 95% of teenagers now have access to smartphones and 69% of teenagers use Snapchat. This is exactly why the recent Supreme Court decision in Mahanoy Area School District v. B.L. is more important than ever, as it concerns students’ free speech rights and how much power schools have over their students’ off-campus speech. The decision is all the more necessary because the last time the Supreme Court ruled on students’ free speech was over fifty years ago, in Tinker v. Des Moines, long before anyone had smartphones or social media. The latest decision will therefore shape the scope of school districts’ power and students’ First Amendment rights for perhaps the next fifty years.

The main issue in Mahanoy Area School District v. B.L. was whether public schools can discipline students for something they said off campus. Brandi Levy was a sophomore in the Mahanoy Area School District when she failed to make the varsity cheerleading team; naturally, she was upset and frustrated. That weekend, while at a convenience store in town with a friend, Levy posted a Snapchat of the two of them with their middle fingers raised and the caption “F- School, F-Softball, F-Cheerleading, F-Everything,” sent to her Snapchat friends. The picture was screenshotted and shown to the cheerleading coach, which led to Levy being suspended from the cheerleading team for one year.

Levy and her parents disagreed with the suspension and with the school’s involvement in Levy’s off-campus speech, so they filed a lawsuit under 42 U.S.C. § 1983 alleging (1) that her suspension from the team violated the First Amendment; (2) that the school and team rules were overbroad and viewpoint discriminatory; and (3) that those rules were unconstitutionally vague. The district court granted summary judgment in favor of Levy, holding that the school had violated her First Amendment rights. The U.S. Court of Appeals for the Third Circuit affirmed, and the Mahanoy Area School District petitioned for a writ of certiorari.

In an 8-1 decision, the Supreme Court ruled in favor of Levy, holding that the Mahanoy Area School District violated her First Amendment rights by punishing her for using vulgar language that criticized the school on social media. At the same time, the Court noted the importance of schools being able to monitor and punish some off-campus speech, such as “serious or severe bullying or harassment targeting particular individuals; threats aimed at teachers or other students.” That interest matters more than ever given the rise in online bullying and harassment, which can affect the day-to-day activities of the school and the development of minors.

While it is important in some circumstances for schools to monitor and address off-campus speech, the Supreme Court noted three reasons that limit schools from interfering with it. First, with respect to off-campus speech, a school will rarely stand in loco parentis; schools do not have more authority than parents, especially off campus. The parent is the authority figure who decides whether and how to discipline most activities in a child’s life, particularly what happens outside of school. This is important because parents have the authority to raise and discipline their children according to their own beliefs, not the school district’s.

Second, “from the student perspective, regulations of off-campus speech, when coupled with regulations of on-campus speech, include all the speech a student utters during the full 24-hour day.” Without this limit, there would be no boundary on what a school district could discipline its students for. For instance, if a group of students made a TikTok on a Saturday night and cursed or used vulgar language in it, could they be punished? If there were no limits on off-campus speech, they could be. It is therefore important that the Supreme Court drew this line to protect students’ First Amendment rights.

Finally, the third reason is that “the school itself has an interest in protecting a student’s unpopular expression, especially when the expression takes place off campus.” The Court reasoned that if schools did not protect their students’ unpopular opinions, they would stifle students’ ability to express themselves; schools are places for students to learn and form their own opinions, even opinions that differ from the school’s. Failing to protect such expression would severely impair students’ ability to think for themselves, develop their own views, and respect opinions that differ from their own.

Overall, I agree with the Supreme Court’s decision in this case. It is essential to separate in-school speech from off-campus speech; a school should monitor and address off-campus speech only when it involves bullying, harassing, or threatening language aimed at the school or at groups or individuals within it. The three reasons the Supreme Court gave are fair and justifiable grounds for protecting parents and students from being overly controlled by the school. Still, a lot of questions and uncertainty remain, especially as technology advances rapidly and new social media platforms emerge frequently. I am curious whether the Supreme Court will rule on a similar case within the next fifty years, and how this decision will affect schools in the next few.

Do you agree with the Supreme Court decision and how do you see this ruling impacting public schools over the next few years?
