Privacy Please: Privacy Law, Social Media Regulation and the Evolving Privacy Landscape in the US

Social media regulation is a touchy subject in the United States. Congress and the White House have proposed, advocated, and voted on various bills aimed at protecting people from data misuse and misappropriation, misinformation, harms suffered by children, and the implications of vast data collection. Some of the most potent concerns about social media stem from the platforms' use and misuse of information: the method of collection, the notice given of collection, and the use of the collected information. Efforts to pass a bill regulating social media have been frustrated, primarily by the First Amendment right to free speech, and Congress has thus far failed to enact meaningful regulation of social media platforms.

The way forward may well be through privacy law. Privacy laws give people some right to control their own personhood, including their data, their right to be left alone, and how and when others see and view them. Privacy law originated in its current form in the late 1800s, with the impetus being freedom from constant surveillance by paparazzi and reporters and the right to control one's own personal information. As technology evolved, our understanding of privacy rights grew to encompass rights in our likeness, our reputation, and our data. Current US privacy laws do not directly address social media, and a struggle is currently playing out among the vast data collection practices of the platforms, immunity for platforms under Section 230, and private rights of privacy for users.

There is very little federal privacy law, and that which does exist is narrowly tailored to specific purposes and circumstances in the form of specific bills. Some states have enacted their own privacy law schemes, with California at the forefront and Virginia, Colorado, Connecticut, and Utah following in its footsteps. In the absence of a comprehensive federal scheme, privacy law is often judge-made, and it offers several private rights of action for a person whose right to be left alone has been invaded in some way. These are tort actions available for one person to bring against another for a violation of their right to privacy.

Privacy Law Introduction

Privacy law policy in the United States is premised on three fundamental personal rights to privacy:

  1. Physical right to privacy – the right to control your own information.
  2. Privacy of decisions – such as decisions about sexuality, health, and child-rearing. These are the constitutional rights to privacy, and they are typically not about information but about an act that flows from the decision.
  3. Proprietary privacy – the ability to protect your information from being misused by others in a proprietary sense.

Privacy Torts

Privacy law, as it concerns the individual, gives rise to four separate tort causes of action for invasion of privacy:

  1. Intrusion upon Seclusion – Privacy law provides a tort cause of action for intrusion upon seclusion when someone intentionally intrudes upon the reasonable expectation of seclusion of another, physically or otherwise, and the intrusion is objectively highly offensive.
  2. Publication of Private Facts – One gives publicity to a matter concerning the private life of another that is not of legitimate concern to the public, and the matter publicized would be objectively highly offensive. The First Amendment provides a strong defense for publication of truthful matters when they are considered newsworthy.
  3. False Light – One who gives publicity to a matter concerning another is liable for placing the other before the public in a false light when the false light would be objectively highly offensive and the actor had knowledge of, or acted in reckless disregard as to, the falsity of the publicized matter and the false light in which the other would be placed.
  4. Appropriation of Name and Likeness – Appropriation of one's name or likeness to the defendant's own use or benefit. There is no appropriation when a person's picture is used to illustrate a non-commercial, newsworthy article. The use is usually commercial in nature but need not be, and the appropriation may be of "identity" more broadly: not just the name, but the reputation, prestige, social or commercial standing, public interest, or other value in the plaintiff's likeness.

These private rights of action are currently unavailable against social media platforms because of Section 230 of the Communications Decency Act, which provides broad immunity to online providers for posts on their platforms and thereby prevents any of the privacy torts from being raised against them.

The Federal Trade Commission (FTC) and Social Media

Privacy law can implicate social media platforms when their practices become unfair or deceptive to the public, through investigation by the Federal Trade Commission (FTC). The FTC is the only federal agency with both consumer protection and competition jurisdiction in broad sectors of the economy, and it investigates business practices that are unfair or deceptive. Section 5 of the FTC Act, 15 U.S.C. § 45, prohibits "unfair or deceptive acts or practices in or affecting commerce" and grants the FTC broad jurisdiction over the privacy practices of businesses. A trade practice is unfair if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers themselves and is not outweighed by countervailing benefits to consumers or competition. A deceptive act or practice is a material representation, omission, or practice that is likely to mislead a consumer acting reasonably in the circumstances, to the consumer's detriment.

Critically, there is no private right of action under the FTC Act; enforcement belongs to the agency alone. The FTC cannot impose fines for first-time Section 5 violations but can obtain injunctive relief. By design, the FTC has very limited rulemaking authority and looks to consent decrees and procedural, long-lasting relief as an ideal remedy. The FTC pursues several types of misleading or deceptive policies and practices that implicate social media platforms: notice and choice paradigms, broken promises, retroactive policy changes, inadequate notice, and inadequate security measures. Its primary objective is to negotiate a settlement in which the company submits to certain measures of control or oversight by the FTC for a certain period of time. Violations of these agreements can yield additional consequences, including steep fines and vulnerability to class action lawsuits.

Relating to social media platforms, the FTC has investigated misleading terms and conditions and violations of platforms' own policies. In In re Snapchat, the platform claimed that users' posted information disappeared completely after a certain period of time; however, through third-party apps and manipulation of users' posts off of the platform, posts could be retained. The FTC and Snapchat settled through a consent decree subjecting Snapchat to FTC oversight for 20 years.

The FTC has also investigated Facebook for violating its own privacy policy. To settle FTC charges that it violated a 2012 agreement with the agency, Facebook was ordered to pay a $5 billion penalty and to submit to new restrictions and a modified corporate structure that holds the company accountable for the decisions it makes about its users' privacy.

Unfortunately, none of these measures directly gives individuals more power over their own privacy. Nor do these policies and processes give individuals any right to hold platforms responsible when they are misled by algorithms using their data, or when platforms intrude on their privacy by collecting data without allowing an opt-out.

Some of the most harmful social media practices today relate to personal privacy. Examples include the collection of personal data; the selling and dissemination of data through algorithms designed to subtly manipulate our pocketbooks and tastes; the collection and use of data belonging to children; and the design of social media sites to be more addictive, all in service of the commercialization of data.

No comprehensive federal privacy scheme currently exists. Previous bills on privacy have been few and narrowly tailored to relatively specific circumstances and topics: healthcare and medical data protection under HIPAA, protection of video rental data under the Video Privacy Protection Act, and narrow protection for children's data under the Children's Online Privacy Protection Act. All of these schemes are outdated and fall short of meeting the immediate need for broad protection of the widely collected and broadly utilized data flowing from social media.

Current Bills on Privacy

Following requests from some of the biggest platforms, outcry from the public, and the White House's call for federal privacy regulation, Congress appears poised to act. The 118th Congress has made privacy law a priority this term by introducing several bills related to social media privacy. At least ten bills are currently pending between the House and the Senate, addressing issues ranging from children's data privacy to minimum ages for use and the designation of a new agency to monitor some aspects of privacy.

S. 744 – The Data Care Act of 2023 aims to protect social media users' data privacy by imposing fiduciary duties on the platforms. The original iteration of the bill was introduced in 2021 and failed to receive a vote; it was re-introduced in March of 2023 and is currently pending. Under the act, social media platforms would have duties to reasonably secure users' data from access, to refrain from using the data in a way that could foreseeably "benefit the online service provider to the detriment of the end user," and to prevent disclosure of users' data unless the receiving party is bound by the same duties. The bill authorizes the FTC and certain state officials to take enforcement actions upon breach of those duties: states would be permitted to take their own legal action against companies for privacy violations, and the FTC could intervene in enforcement efforts by imposing fines for violations.

H.R. 2701 – Perhaps the most comprehensive piece of legislation on the House floor is the Online Privacy Act. In 2023, the bill was reintroduced by Democrat Anna Eshoo after an earlier version of the bill failed to receive a vote and died in Congress. The Online Privacy Act aims to protect users by providing individuals rights relating to the privacy of their personal information, and it would impose privacy and security requirements for the treatment of personal information. To accomplish this, the bill would establish a new agency, the Digital Privacy Agency, responsible for enforcing these rights and requirements. The new individual privacy rights are broad and include rights of access, correction, and deletion, human review of automated decisions, individual autonomy, the right to be informed, and the right to impermanence, among others. This would be the most comprehensive plan to date, and the establishment of an agency tasked specifically with administering and enforcing privacy law would be incredibly powerful; the creation of such an agency would be valuable irrespective of whether this bill is passed.

H.R. 821 – The Social Media Child Protection Act is a sister bill to one by a similar name that originated in the Senate. This bill aims to protect children from the harms of social media by limiting children's access to it. Under the bill, social media platforms would be required to verify the age of every user before that user accesses the platform, by submission of a valid identity document or by another reasonable verification method. A social media platform would be prohibited from allowing users under the age of 16 to access the platform. The bill also requires platforms to establish and maintain reasonable procedures to protect personal data collected from users, and it provides for a private right of action as well as state and FTC enforcement.

S. 1291 – The Protecting Kids on Social Media Act is similar to its counterpart in the House, with slightly less tenacity. It likewise aims to protect children from social media's harms. Under the bill, platforms must verify users' ages, must not allow a user onto the service unless that user's age has been verified, and must limit access to the platform for children under 13. The bill also prohibits the retention and use of information collected during the age verification process. Platforms must take reasonable steps to require affirmative consent from the parent or guardian of a minor who is at least 13 years old before a minor's account is created, and must reasonably allow the parent to later revoke that consent. The bill also prohibits the use of data collected from minors for algorithmic recommendations, and it would require the Department of Commerce to establish a voluntary program for secure digital age verification for social media platforms. Enforcement would be through the FTC or state action.

S. 1409 – The Kids Online Safety Act, proposed by Senator Blumenthal of Connecticut, also aims to protect minors from online harms. Like the Data Care Act, this bill establishes fiduciary duties for social media platforms regarding the children using their sites. The bill requires that platforms act in the best interests of minors using their services, including by mitigating harms that may arise from use, sweeping in online bullying and sexual exploitation. Social media sites would be required to establish and provide access to safeguards, such as settings that restrict access to minors' personal data, and to grant parents tools to supervise and monitor minors' use of the platforms. Critically, the bill establishes a duty for social media platforms to create and maintain research portals for non-commercial purposes, to study the effects that corporations like the platforms have on society.

Overall, these bills indicate Congress's creative thinking and commitment to broad privacy protection for users against social media harms. I believe the establishment of a separate governing body, other than the FTC, which lacks the powers needed to compel compliance, is a necessary step. Recourse for violations on par with the EU's new regulatory scheme, mainly fines in the billions, could also help.

Many of the bills, toward their various aims, establish new fiduciary duties for the platforms in preventing unauthorized use and harms to children. There is real promise in this scheme: establishing duties of loyalty, diligence, and care owed to one party has a sound basis in many areas of law and would be readily understood in implementation.

The notion that platforms would need to be vigilant in knowing their content, studying its effects, and reporting those effects may do the most to create a stable future for social media.

Making platforms legally responsible for policing and enforcing their own policies and terms and conditions is another opportunity to further incentivize them. The FTC currently investigates policies that are misleading or unfair, which sweeps in the social media sites, but there is also an opportunity to make the platforms legally responsible for enforcing their own policies regarding age, hate, and inappropriate content, for example.

What would you like to see considered in Privacy law innovation for social media regulation?

Don’t Talk to Strangers! But if it’s Online, it’s Okay?

It is 2010. You are in middle school and your parents let your best friend come over on a Friday night. You gossip, talk about crushes, and go on all the social media sites. You decide to try the latest one, Omegle. You automatically get paired with a stranger to talk to and video chat with. You speak to a few random people, and then, with the next click, a stranger's genitalia are on your screen.

Stranger Danger

Omegle is a free video-chatting social media platform. Its primary function has become meeting new people and arranging "online sexual rendezvous." Registration is not required. Omegle randomly pairs users for one-on-one video sessions; these sessions are anonymous, and you can skip to a new person at any time. Although there is a large warning on the home screen saying "you must be 18 or older to use Omegle," no parental controls are available through the platform. Should you want to install any parental controls, you must use a separate commercial program.

While the platform's community guidelines illustrate the "dos and don'ts" of the site, it seems questionable that the platform can monitor millions of users, especially when users are not required to sign up or to agree to any of Omegle's terms and conditions. It therefore seems that this site could harbor online predators, raising quite a few issues.

One recent case surrounding Omegle involved a pre-teen who was sexually abused, harassed, and blackmailed into sending a sexual predator obscene content. In A.M. v. Omegle.com LLC, the open nature of Omegle matched an 11-year-old girl with a sexual predator in his late thirties. Because she was easily susceptible, he was able to force the 11-year-old to send pornographic images and videos of herself, to perform for him and other predators, and to recruit other minors. The predator continued this horrific crime for three years by threatening to release the videos, pictures, and additional content publicly. The plaintiff sued Omegle on two general claims of platform liability implicating Section 230, but only one claim was able to break through the law.

Unlimited Immunity Cards!

Under 47 U.S.C. § 230 (Section 230), social media platforms are immune from liability for content posted by third parties. As part of the Communications Decency Act of 1996, Section 230 provides almost full protection against lawsuits for social media companies, since no platform is treated as a publisher or speaker of user-generated content posted on the site. Section 230 has gone so far as to shield Google and Twitter from liability for claims that their platforms were used to aid terrorist activities. In May of 2023, these cases reached the Supreme Court. Although the Court declined to reach the Section 230 question in the Google case, it ruled on the Twitter case. Google was not held liable on the claim that it stimulated the growth of ISIS through targeted recommendations and inspired an attack that killed an American student. Twitter was not held liable on the claim that the platform aided and abetted a terrorist group in raising funds and recruiting members for a terrorist attack.

Wiping the Slate

In February of 2023, the District Court of Oregon, Portland Division, found that Section 230 immunity did not shield Omegle from a products liability claim arising from the predatory actions committed by a third party on the site. By side-stepping the third-party speech issue that comes with Section 230 immunity for an online publisher, the district court allowed the plaintiff's products liability claim to proceed, which targeted the platform's defective design, defective warning, negligent design, and failure to warn.

Three prongs must be satisfied for a platform to be immune from liability under Section 230:

  1. The defendant is a provider of an interactive computer service,
  2. Who is sought to be treated as a publisher or speaker, and
  3. For information provided by a third party.

It is clear that Omegle is an interactive computer service that fits the definition provided by Section 230. The issue then falls on the second and third prongs: whether the cause of action treats Omegle as the speaker of third-party content. The sole function of randomly pairing strangers creates the foreseeable danger of pairing a minor with an adult. As shown in the present case, "the function occurs before the content occurs." Because the platform was designed negligently and with knowing disregard for the possibility of harm, the court ultimately concluded that liability for the platform's function does not pertain to third-party published content and that the claim targeted specific functions rather than users' speech on the platform. Section 230 immunity did not apply to this first claim, and it was allowed to proceed against Omegle.

Not MY Speech

The plaintiff's other claim implicating Section 230 immunity was that Omegle negligently failed to take reasonable precautions to provide a safe platform, given the foreseeable risk of harm in marketing the service to children and adults and randomly pairing them. Unlike the products liability claim, the negligence claim was twofold: the function of matching people and the publication of their communications to each other, both of which fall directly into Section 230's immunity domain. The Oregon District Court drew a distinct line between the two claims: although the negligence claim was barred by Section 230, the products liability claim was not.

If You Cannot Get In Through the Front Door, Try the Back Door!

For almost 30 years, social media platforms have been nearly immune from liability under Section 230. In the last few years, as technology on these platforms has grown, judges have been trying to find loopholes in the law to hold companies liable. A.M. v. Omegle has just moved through the district court level. If appealed, it will be an interesting case to follow, to see whether the ruling stands or is overruled in light of the other cases that have been decided.

How do you think a higher court will rule on issues like these?

Jonesing For New Regulations of Internet Speech

From claims that the moon landing was faked to Area 51, the United States loves its conspiracy theories. In fact, a study sponsored by the University of Chicago found that more than half of Americans believe at least one conspiracy theory. While this is not a new phenomenon, the increasing use of and reliance on social media has allowed misinformation and harmful ideas to spread with a level of ease that wasn't possible even twenty years ago.

Individuals with a large platform can express an opinion that harms the people personally implicated in the "information" being spread. Presently, a plaintiff's best option to challenge harmful speech is through a claim for defamation. The inherent problem is that opinions are protected by the First Amendment and, thus, not actionable as defamation.

This leaves injured plaintiffs limited in their available remedies, because statements made on the internet are more likely to be seen as opinion. The internet has created a gap: we have injured plaintiffs and no available remedy. With this brave new world of communication, interaction, and the spread of information by anyone with a platform comes a need to ensure that injuries caused by this speech have legal recourse.

Recently, Alex Jones lost a defamation suit and was ordered to pay $965 million to the families of the Sandy Hook victims after claiming that the Sandy Hook shooting that occurred in 2012 was a "hoax." Despite the plaintiffs' success at trial, the statements that were the subject of the suit do not fit neatly into the well-established law of defamation, which makes reversal on appeal likely.

The elements of defamation require that the defendant publish a false statement purporting it to be true, which results in some harm to the plaintiff. However, just because a statement is false does not mean that the plaintiff can prove defamation because, as the Supreme Court has recognized, false statements still receive certain First Amendment protections. In Milkovich v. Lorain Journal Co., the Court held that "imaginative expression" and "loose, figurative, or hyperbolic language" are protected by the First Amendment.

The characterization of something as a "hoax" has been held by courts to fall into this category of protected speech. In Montgomery v. Risen, a software developer brought a defamation action against an author who made a statement claiming that the plaintiff's software was a "hoax." The D.C. Circuit held that characterizing something as an "elaborate and dangerous hoax" is hyperbolic speech, which creates no basis for liability. This holding has been mirrored by several courts, including the District Court of Kansas in Yeager v. National Public Radio, the District Court of Utah in Nunes v. Rushton, and the Superior Court of Delaware in Owens v. Lead Stories, LLC.

The other statements Alex Jones made regarding Sandy Hook are also hyperbolic language. These statements include: "[i]t's as phony as a $3 bill", "I watched the footage, it looks like a drill", and "my gut is… this is staged. And you know I've been saying the last few months, get ready for big mass shootings, and then magically, it happens." While these statements are offensive and cruel to the suffering families, it is genuinely difficult to characterize them as objective claims of fact. "Phony", "my gut is", "looks like", and "magically" qualify the statements as subjective opinion based on his interpretation of the events that took place.

It is indisputable that the statements Alex Jones made caused harm to these families. They have been subjected to harassment, online abuse, and death threats from his followers. However, no matter how harmful these statements are, that does not make them defamation. Despite this, a reasonable jury was so appalled by this conduct that it found for the plaintiffs. This is essentially reverse jury nullification: the jury decided that Jones was culpable and should be held legally responsible even if there is no adequate basis for liability.

The jury’s determination demonstrates that current legal remedies are inadequate to regulate potentially harmful speech that can spread like wildfire on the internet. The influence that a person like Alex Jones has over his followers establishes a need for new or updated laws that hold public figures to a higher standard even when they are expressing their opinion.

A possible starting point for regulating harmful internet speech at the federal level might be the Commerce Clause, which allows Congress to regulate the instrumentalities of commerce. The internet, by its design, is an instrumentality of interstate commerce because it enables the communication of ideas across state lines.

Further, the Federal Anti-Riot Act, which was passed in 1968 to suppress civil rights protestors, might be an existing law that can serve this purpose. This law makes it a felony to use a facility of interstate commerce to (1) incite a riot; or (2) organize, promote, encourage, participate in, or carry on a riot. The act defines a riot as:

 [A] public disturbance involving (1) an act or acts of violence by one or more persons part of an assemblage of three or more persons, which act or acts shall constitute a clear and present danger of, or shall result in, damage or injury to the property of any other person or to the person of any other individual or (2) a threat or threats of the commission of an act or acts of violence by one or more persons part of an assemblage of three or more persons having, individually or collectively, the ability of immediate execution of such threat or threats, where the performance of the threatened act or acts of violence would constitute a clear and present danger of, or would result in, damage or injury to the property of any other person or to the person of any other individual.

Under this definition, we might have a basis for holding Alex Jones accountable for organizing, promoting, or encouraging a riot through a facility of interstate commerce (the internet). The acts of his followers in harassing the families of the Sandy Hook victims might constitute a public disturbance within this definition because the harassment "result[ed] in, damage or injury… to the person." While this demonstrates one potential avenue for regulating harmful internet speech, new laws might also need to be drafted to meet the evolving function of social media.

In the era of the internet, public figures have an unprecedented ability to spread misinformation and incite lawlessness. This is true even if their statements would typically constitute an opinion because the internet makes it easier for groups to form that can act on these ideas. Thus, in this internet age, it is crucial that we develop a means to regulate the spread of misinformation that has the potential to harm individual people and the general public.

Can Social Media Be Regulated?

In 1996, Congress passed what is known as Section 230 of the Communications Decency Act (CDA), which provides immunity to website publishers for third-party content posted on their websites. The CDA holds that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." This Act, passed in 1996, was created in a different time and era, one that could hardly envision how fast the internet would grow in the coming years. In 1996, social media consisted of a little-known website called Bolt; the idea of a global World Wide Web was still very much in its infancy. The internet was still largely based on dial-up technology, and the government was looking to expand its reach. This Act laid the foundation for the explosion of social media, e-commerce, and a society that has grown tethered to the internet.

The advent of smartphones in the late 2000s, coupled with the CDA, set the stage for a society that is constantly tethered to the internet and has allowed companies like Facebook, Twitter, YouTube, and Amazon to carve out niches within our now globally integrated society. Facebook alone averaged over 1.9 billion daily users in the second quarter of 2021.

Recent studies conducted by the Pew Research Center show that "[m]ore than eight in ten Americans get news from digital services."

[Pew Research Center chart: Large majority of Americans get news on digital devices]

While older members of society still rely on news websites online, the younger generation, namely those 18 to 29 years of age, receives its news via social media.

[Pew Research Center chart: Online, most turn to news websites, except for the youngest, who are more likely to use social media]

The role social media plays in the lives of the younger generation needs to be recognized. Social media has grown at a far greater rate than anyone could have imagined. Currently, social media operates under its own modus operandi, completely free of government interference, due to its classification as a private entity and its protection under Section 230.

Throughout the 20th century, when television news media dominated the scene, laws were put into effect to ensure that television and radio broadcasters would be monitored by both the courts and government regulatory commissions. For example, "[t]o maintain a license, stations are required to meet a number of criteria. The equal-time rule, for instance, states that registered candidates running for office must be given equal opportunities for airtime and advertisements at non-cable television and radio stations beginning forty-five days before a primary election and sixty days before a general election."

These laws and regulations were put in place to ensure that the public interest in broadcasting was protected. To give substance to the public interest standard, Congress has from time to time enacted requirements for what constitutes the public interest in broadcasting. But Congress also gave the FCC broad discretion to formulate and revise the meaning of broadcasters' public interest obligations as circumstances changed.

The Federal Communications Commission's (FCC) authority is constrained by the First Amendment; the agency acts as an intermediary that can intervene to correct perceived inadequacies in overall industry performance, but it cannot trample on the broad editorial discretion of licensees. The Supreme Court has continuously upheld the public trustee model of broadcast regulation as constitutional. The criticisms of regulating social media center on the notion that platforms are purely private entities that do not fall under the purview of the government, and yet these same issues presented themselves in the precedent-setting case of Red Lion Broadcasting Co. v. Federal Communications Commission (1969). In that case, the Court held that the "rights of the listeners to information should prevail over those of the broadcasters." The Court's holding centered on the public's right to information over the rights of a broadcast company to choose what it will share. This is exactly what is at issue today when we look at companies such as Facebook, Twitter, and Snapchat censoring political figures who post views that the companies feel may incite anger or violence.

In essence, what these organizations are doing is keeping information and views from the attention of the present-day viewer. The vessel for the information has changed; it is no longer found in television or radio but primarily in social media. Currently, television and broadcast media are restricted by Section 315(a) of the Communications Act and Section 73.1941 of the Commission's rules, which "require that if a station allows a legally qualified candidate for any public office to use its facilities (i.e., make a positive identifiable appearance on the air for at least four seconds), it must give equal opportunities to all other candidates for that office to also use the station." This is a restriction that is nowhere to be found for social media organizations.

This is not meant to argue for one side or the other, but merely to point out that there is a political discourse being stifled by these social media entities, which have shrouded themselves in the veil of the private entity. What these companies fail to mention is just how political they truly are. For instance, Facebook proclaims itself an unbiased source for all parties, yet it fails to mention that it currently employs one of the largest lobbying groups in Washington, D.C. Four Facebook lobbyists have worked directly in the office of House Speaker Pelosi, and Pelosi herself has a very direct connection to Facebook: she and her husband own between $550,000 and over $1,000,000 in Facebook stock. None of this is illegal; however, it raises the question of just how unbiased Facebook really is.

If the largest source of news for the coming generation is not television, radio, or news publications themselves, but rather social media such as Facebook, then how much power should these companies be allowed to wield without some form of regulation? The question being presented here is not a new one, but rather the same question asked in 1969, simply phrased differently: how much information is a citizen entitled to, and at what point does access to that information outweigh the rights of the organization to exercise its editorial discretion? I believe that the answer to that question is the same now as it was in 1969, and that the government ought to take steps similar to those taken with radio and television. What this looks like is ensuring that, through social media, the public has access to a significant amount of information on public issues so that its members can make rational political decisions. At the end of the day, what is at stake is the public's ability to make rational political decisions.

Large social media conglomerates such as Facebook and Twitter have long outgrown their place as private entities; they have grown into a public medium tethered to the realities of billions of people. Certain aspects of them need to be regulated, mainly those that interfere with the public interest, and there are ways to do so without interfering with the overall First Amendment right of free speech for all Americans. Where social media blends being a private forum for all people to express their ideas under firmly stated "terms and conditions" with being an entity that strays into the political field, whether by censoring heads of state or by hiring over $50,000,000 worth of lobbyists in Washington, D.C., there need to be regulations that draw the line and ensure the public still maintains the ability to make rational political decisions; rational decisions that are not influenced by any one organization. The time to address this issue is now, while there is still a middle ground in how people receive their news and formulate opinions.

Don’t Throw Out the Digital Baby with the Cyber Bathwater: The Rest of the Story

This article is in response to "Is Cyberbullying the Newest Form of Police Brutality?", which discussed law enforcement's use of social media to apprehend people. The article provided a provocative topic, as seen by the number of comments.

I believe that discussion is healthy for society; people are entitled to their feelings and to express their beliefs. Each person has their own unique life experiences that provide a basis for their beliefs and perspectives on issues. I enjoy discussing a topic with someone because I learn about their experiences and new facts that broaden my knowledge. Developing new relationships and connections is so important. Relationships and new knowledge may change perspectives or at least add to understanding each other better. So, I ask readers to join the discussion.

My perspectives were shaped in many ways. I grew up hearing Paul Harvey’s radio broadcast “The Rest of the Story.” His radio segment provided more information on a topic than the brief news headline may have provided. He did not imply that the original story was inaccurate, just that other aspects were not covered. In his memory, I will attempt to do the same by providing you with more information on law enforcement’s use of social media. 

"Is Cyberbullying the Newest Form of Police Brutality?"

The article title served its purpose by grabbing our attention. Neither cyberbullying nor police brutality is acceptable. Cyberbullying is typically envisioned as teenage bullying taking place over the internet. The U.S. Department of Health and Human Services states that "Cyberbullying includes sending, posting, or sharing negative, harmful, false, or mean content about someone else. It can include sharing personal or private information about someone else causing embarrassment or humiliation." Similarly, police brutality occurs when law enforcement ("LE") officers use illegal and excessive force in a situation that is unreasonable, potentially resulting in a civil rights violation or a criminal prosecution.

While the article is accurate that 76% of the surveyed police departments use social media for crime-solving tips, the rest of the story is that even more departments use social media for other purposes: 91% use it to notify the public regarding safety concerns, 89% use it for community outreach and citizen engagement, and 86% use it for public relations and reputation management. Broad restrictions should not be implemented, as they would negate all of these positive community interactions that increase transparency.

Transparency 

In an era where the public is demanding more transparency from LE agencies across the country, how is the disclosure of the public’s information held by the government considered “Cyberbullying” or “Police Brutality”? Local, state, and federal governments are subject to Freedom of Information Act laws requiring agencies to provide information to the public on their websites or release documents within days of requests or face civil liability.

New Jersey Open Public Records

While the New Jersey Supreme Court has not decided whether arrest photographs are public, the New Jersey Government Records Council ("GRC") decided in Melton v. City of Camden, GRC 2011-233 (2013), that arrest photographs are not public records under the NJ Open Public Records Act ("OPRA") because of Governor Whitman's Executive Order 69, which exempts fingerprint cards, plates, photographs, and similar criminal investigation records from public disclosure. It should be noted that GRC decisions are not precedential and therefore not binding on any court.

However, under OPRA, specifically N.J.S.A. 47:1A-3 (Access to Records of Investigation in Progress), certain arrest information is public and must be disclosed within 24 hours of a request, including the:

  • Date, time, location, type of crime, and type of weapon;
  • Defendant's name, age, residence, occupation, marital status, and similar background information;
  • Identity of the complaining party;
  • Text of any charges or indictment, unless sealed;
  • Identity of the investigating and arresting officer and agency, and the length of the investigation;
  • Time, location, and circumstances of the arrest (resistance, pursuit, use of weapons); and
  • Bail information.

For years, even before Melton, I believed that an arrestee's photograph should not be released to the public. As a police chief, I refused numerous media requests for arrestee photographs, protecting arrestees' rights and believing in innocence until proven guilty. Even though they have been arrested, arrestees have not yet received due process in court.

New York’s Open Public Records

In New York, under the Freedom of Information Law ("FOIL"), Public Officers Law, Article 6, § 89(2)(b)(viii) (general provisions relating to access to records; certain cases), the disclosure of LE arrest photographs would constitute an unwarranted invasion of an individual's personal privacy unless the public release would serve a specific LE purpose and the disclosure is not prohibited by law.

California’s Open Public Records

Under the California Public Records Act ("CPRA"), a person has the statutory right to be provided or to inspect public records, unless a record is exempt from disclosure. Arrest photographs are included in arrest records, along with other personal information, including the suspect's full name, date of birth, sex, physical characteristics, occupation, time of arrest, charges, bail information, any outstanding warrants, and parole or probation holds.

Therefore, under New York and California law, the blanket posting of arrest photographs is already prohibited.

Safety and Public Information

Recently, in Americans for Prosperity Foundation v. Bonta, the compelled donor disclosure case, the Court invalidated the disclosure requirement on First Amendment grounds, and Justice Alito's concurring opinion briefly addressed the parties' personal safety concerns: supporters had been subjected to bomb threats, protests, stalking, and physical violence. He cited Doe v. Reed, which upheld disclosures containing home addresses under Washington's Public Records Act despite the growing risks posed by anyone accessing the information with a computer.

Satisfied Warrant

I am not condoning the Manhattan Beach Police Department's error of posting information on a satisfied warrant, along with a photograph, on its "Wanted Wednesday" in 2020. However, the disclosed information may have been public information under the CPRA then and even now. On July 23, 2021, Governor Newsom signed a law adding Section 13665 to the California Penal Code, prohibiting LE agencies from posting photographs of an arrestee accused of a non-violent crime on social media unless:

  • The suspect is a fugitive or an imminent threat, and disseminating the arrestee’s image will assist in the apprehension.
  • There is an exigent circumstance and an urgent LE interest.
  • A judge orders the release or dissemination of the suspect’s image based on a finding that the release or dissemination is in furtherance of a legitimate LE interest.

The critical error was that the posting stated the warrant was active when it was not. A civil remedy exists and was used by the party to reach a settlement for damages. Additionally, it could be argued that the agency’s actions were not the proximate cause when vigilantes caused harm.

Scope of Influence

LE's reliance on the public's help did not start with social media or internet websites. The article pointed out that "Wanted Wednesday" had a mostly local following of 13,600. This raises the question whether there is much of a difference between the famous "Wanted Posters" of the Wild West, or the "Top 10 Most Wanted" posters the Federal Bureau of Investigation ("FBI") used to distribute to post offices, police stations, and businesses to locate fugitives. It can be argued that this exposure was strictly localized. However, the weekly TV show America's Most Wanted, made famous by John Walsh, aired from 1988 to 2013, highlighting fugitive cases nationally; the show claims it helped capture over 1,000 criminals through its tip line. That said, national media publicity can be counter-productive by generating so many false leads that credible leads are obscured.

The FBI website contains pages for Wanted People, Missing People, and Seeking Information on crimes. "CAPTURED" labels are added to photographs, showing the results of the agency's efforts. Local LE agencies should follow the FBI's practices. I agree with the article that social media and websites should be kept updated; however, I do not agree that the information must be removed simply because it is available elsewhere on the internet.

Time

Vernon Geberth, the leading police homicide investigation instructor, believes time is an investigator's worst enemy. Eighty-five percent of abducted children are killed within the first five hours; almost all are killed within the first twenty-four hours. Time is also critical because, for each hour that passes, the distance a suspect's vehicle can travel expands by seventy-five miles in either direction, and the potential search area quickly grows to more than 17,000 square miles. Like Amber Alerts, social media can be used to quickly transmit information to people across the country in time-sensitive cases.
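To make the scale concrete, here is a quick sketch of the arithmetic behind that figure (treating the reachable area as a circle whose radius grows by 75 miles for every hour that passes; the circular model is an illustrative assumption):

\[
r(t) = 75t \text{ miles}, \qquad A(t) = \pi \, r(t)^2
\]
\[
A(1) = \pi (75)^2 \approx 17{,}671 \text{ square miles}, \qquad A(5) = \pi (375)^2 \approx 441{,}786 \text{ square miles}
\]

On this rough model, the search area passes 17,000 square miles within the first hour and then grows with the square of the elapsed time, which is why rapid, wide-broadcast tools like Amber Alerts and social media matter so much in these cases.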

Live-Streaming Drunk Driving Leads to an Arrest

Whitney Beall, a Florida woman, used a live-streaming app to show herself drinking at a bar and then getting into her vehicle. The public dialed 911, and a tech-savvy officer opened the app, determined her location, and pulled her over. She was arrested after failing a DWI sobriety test. After pleading guilty to driving under the influence, she was sentenced to 10 days of weekend work release, 150 hours of community service, probation, and a license suspension. In 2019, 10,142 lives were lost to alcohol-impaired driving crashes.

Family Advocating

Social media is not limited to LE; it also provides a platform for victims' families to keep attention on their cases. The father of a seventeen-year-old created a series of Facebook Live videos about a 2011 murder, resulting in the arrest of Charles Garron, who was sentenced to a fifty-year prison term.

Instagram Selfies with Drugs, Money and Stolen Guns 

Police in Palm Beach County charged a nineteen-year-old man with 142 felony counts, including possession of a weapon by a convicted felon, while investigating burglaries and jewel thefts in senior citizen communities. An officer found the man's Instagram account, which contained incriminating photographs. A search warrant was executed, and stolen firearms and $250,000 in stolen property from over forty burglaries were seized.

Bank Robbery Selfies


Police received a tip and located a 2015 social media posting by John E. Mogan II showing himself with wads of cash. He was charged with robbing an Ashville, Ohio bank. He pled guilty and was sentenced to three years in prison. According to news reports, Mogan had previously served prison time for another bank robbery.

Food Post Becomes the Smoking Gun

LE used Instagram to identify an ID thief who posted photographs of his dinner with a confidential informant ("CI") at a high-end steakhouse. The man claimed he had 700,000 stolen identities and provided the CI a flash drive of stolen identities. Agents linked the flash drive to a "Troy Maye," whom the CI identified from Maye's profile photograph. Authorities executed a search warrant on his residence and located flash drives containing the personal identifying information of thousands of ID theft victims. Nathaniel Troy Maye, a 44-year-old New York resident, was sentenced to sixty-six months in federal prison after pleading guilty to aggravated identity theft.


Wanted Man Turns Himself in After Facebook Challenge With Donuts

A person started trolling the Redford Township Police during a Facebook Live community update. It was determined that he was a 21-year-old wanted on a probation violation for leaving the scene of a DWI collision. When asked to turn himself in, he challenged the PD: if the post got 1,000 shares, he would bring in donuts. The PD took the challenge. The post went viral, reaching that mark within an hour and acquiring over 4,000 shares. He kept his word and appeared with a dozen donuts. He faced 39 days in jail and had other outstanding warrants.

The examples in this article were readily available on the internet and on multiple news websites, along with photographs.

Under state freedom of information laws, the public has a statutory right to know what enforcement actions LE is taking. Likewise, the media exercises its First Amendment right to information daily across the country when publishing news, and cyber journalists are entitled to the same information when publishing news on the internet and social media. Traditional news organizations have adapted to online news to keep their share of the news market, and LE agencies now live-stream agency press conferences to communicate directly with the communities they serve.

Therefore, the positive use of social media by LE should not be thrown out like the bathwater when legal remedies exist for the cases in which damage is caused.

“And now you know…the rest of the story.”

Free speech, should it be so free?

In the United States, everybody is entitled to free speech; however, we must not forget that the First Amendment of the Constitution only protects individuals from federal and state action. That being said, free speech is not protected from censorship by private entities, like social media platforms. In addition, Section 230 of the Communications Decency Act (CDA) provides technology companies like Twitter, YouTube, Facebook, Snapchat, and Instagram, as well as other social media giants, immunity from liabilities arising from the content posted on their websites. The question becomes whether it is fair for an individual who desires to express himself or herself freely to be banned from certain social media websites for doing so. What is the public policy behind this? What standards do these social media companies employ when determining who should or should not be banned? On the other hand, are social media platforms being used as tools or weapons when it comes to politics? Do they play a role in how the public votes? Are users truly seeing what they think they have chosen to see, or is the content being displayed targeted to them in ways that may ultimately create biases?

As we saw earlier this year, former President Trump was banned from several social media platforms as a result of the January 6, 2021 assault on the U.S. Capitol by Trump supporters. It is no secret that our former president is not shy about his comments on a variety of topics; some audiences view him as outspoken, direct, or perhaps provocative. When Twitter announced its permanent suspension of former President Trump's account, its rationale was to prevent further incitement of violence. After he falsely claimed that the 2020 election had been stolen from him, thousands of Trump supporters gathered in Washington, D.C. on January 5 and 6, which ultimately led to violence and chaos. As a public figure and a politician, our former president should have known that his actions and viewpoints on social media were likely to have a significant impact on the public. Public figures and politicians should be held to a higher standard, as they represent the citizens who voted for them; as such, they are influential. Technology companies like Twitter saw the former president's tweets as potential threats to the public as well as a violation of their company policies; hence, banning his account was justified. The ban was an instance of private action as opposed to government action. In other words, former President Trump's First Amendment rights were not violated.


First, let us discuss the fairness aspect of censorship. Yes, individuals possess the right to free speech; however, if the public's safety is at stake, action is required to avoid chaos. For example, you cannot scream "fire" out of nowhere in a dark movie theater, as it would cause panic and unnecessary disorder. There are rules you must comply with in order to use the facility, and these rules are in place to protect the general welfare. As a user, if you don't like the rules set forth by that facility, you can simply avoid using it. It does not necessarily mean that your idea or speech is strictly prohibited, just not welcome on that particular facility. Similarly, with social media platforms, if users fail to follow company policies, the companies reserve the right to ban them. Public policy probably outweighs individual freedom. As for the standards employed by these technology companies, there is no bright line. As I previously mentioned, Section 230 grants them immunity from liability. That being said, the content is unregulated, and these social media giants are free to implement and execute policies as they deem appropriate.


In terms of politics, I believe social media platforms do play a role in shaping their users' perspectives in some way. This is because the content being displayed is targeted, if not tailored, based on data collected about each user's preferences and past habits. The activities each user engages in are monitored, measured, and analyzed. In a sense, these platforms can be used as a weapon, as they may manipulate users without the users even knowing. A lot of the time we are not even aware that the videos or pictures we see online are being presented to us because of past content we have seen or selected. In other words, these social media companies may be censoring what they don't want you to see, or what they think you don't want to see. For example, some technology companies are pro-vaccination; they are more likely to post facts about COVID-19 vaccines or publish posts that encourage their users to get vaccinated. We think we have control over what we see or watch, but do we really?


There are advantages and disadvantages to censorship. Censorship can reduce the negative impact of hate speech, especially on the internet; by limiting certain speech, we create more opportunities for equality. In addition, censorship can prevent the spread of racism: posts and videos containing racist comments can be blocked by social media companies if deemed necessary. Censorship can also protect minors from seeing harmful content; because children can be manipulated easily, it helps promote safety. Moreover, censorship can be a vehicle to stop false information, and during unprecedented times like this pandemic, misinformation can be fatal. On the other hand, censorship may not be good for the public, as it creates a specific narrative in society and can potentially cause biases. For example, many blamed Facebook for the outcome of an election, arguing that such influence is detrimental to our democracy.

Overall, I believe that some sort of social media censorship is necessary. The cyber-world is interrelated with the real world, and we can't let people do or say whatever they want when it may have dramatically detrimental effects. The issue is: how do you keep the best of both worlds?


How Defamation and Minor Protection Laws Ultimately Shaped the Internet


The Communications Decency Act (CDA) was originally enacted with the intention of shielding minors from indecent and obscene online material. Despite its origins, Section 230 of the CDA is now commonly used as a broad legal safeguard that allows social media platforms to shield themselves from legal liability for content posted on their sites by third parties. Interestingly, the reasoning behind this safeguard arises from both defamation common law and constitutional free speech law. As the internet has grown, this legal safeguard has drawn increasing criticism. But is this legislation actually undesirable? Many would disagree, as Section 230 contains "the 26 words that created the internet."


Origin of the Communications Decency Act

The CDA was introduced and enacted as an attempt to shield minors from obscene or indecent content online. Although parts of the Act were later struck down as First Amendment free speech violations, the Court left Section 230 intact. The creation of Section 230 was influenced by two landmark defamation decisions.

The first case, Cubby, Inc. v. CompuServe Inc. (1991), involved an internet service that hosted around 150 online forums. A claim was brought against the provider when a columnist on one of the forums posted a defamatory comment about a competitor, and the competitor sued the online distributor for the published defamation. The court categorized the internet service provider as a distributor because it did not review the content of the forums before it was posted to the site. As a distributor, the provider faced no legal liability, and the case was dismissed.


Distributor Liability

Distributor liability refers to the limited legal consequences that a distributor is exposed to for defamation. Common examples of distributors are bookstores and libraries. The theory behind distributor liability is that it would be impossible for distributors to moderate and censor every piece of content they disperse, both because of the sheer volume and because of the impossibility of knowing whether something is false.

The second case that influenced the creation of Section 230 was Stratton Oakmont, Inc. v. Prodigy Servs. Co., in which the court used publisher liability theory to hold the internet provider liable for third-party defamatory postings published on its site. The court deemed the website a publisher because it moderated and deleted certain posts, even though there were far too many postings each day to review every one.


Publisher Liability

Under common law principles, a person who publishes a third party’s defamatory statement bears the same legal responsibility as the creator of that statement. This liability is often referred to as “publisher liability,” and it rests on the theory that a publisher has the knowledge, opportunity, and ability to exercise control over the publication. For example, a newspaper publisher can face legal consequences for the content printed in its pages. The Stratton Oakmont decision was significant because it meant that if a website attempted to moderate some posts, it would be held liable for all of them.


Section 230’s Creation

In response to the Stratton Oakmont case and the conflicting court decisions regarding internet service providers’ liability, members of Congress introduced an amendment to the CDA that later became Section 230. The amendment was specifically introduced and passed with the goal of encouraging the development of unregulated, free speech online by relieving internet providers of liability for content posted by their users.


Text of the Act – Subsection (c)(1)

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

 Section 230 further provides that…

“No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.”

The language above removes legal consequences for platforms arising from content that third parties post on their forums. Courts have interpreted this subsection as providing online platforms broad immunity from suits over third-party content. Because of this, Section 230 has become the principal legal safeguard against lawsuits over a site’s content.


The Good

  • Section 230 can be viewed as one of the most important pieces of legislation protecting free speech online. One of its unique aspects is that it essentially extends free speech protection to private, non-governmental companies.
  • Without CDA 230, the internet would be a very different place. The section shaped some of the internet’s most distinctive characteristics: it promotes free speech and enables worldwide connectivity.
  • CDA 230 does not fully eliminate liability or court remedies for victims of online defamation. Rather, it leaves the creator of the content liable for their own speech, instead of holding both the speaker and the publisher responsible.

The Bad

  • Because of the legal protections Section 230 provides, social media networks have less incentive to regulate false or deceptive posts. Deceptive posts can have an enormous impact on society: they can alter election results or fuel dangerous misinformation campaigns, like the QAnon conspiracy theory and the anti-vaccination movement.
  • Section 230 is twenty-five years old and has not been updated to match the internet’s extensive growth.
  • Big Tech companies have been left largely unregulated with respect to their online marketplaces.


The Future of 230

While Section 230 is still successfully invoked by social media platforms, concerns over the archaic legislation have mounted. Just recently, Justice Thomas, known for being a quiet Justice, wrote a concurring opinion articulating his view that the government should regulate content providers as common carriers, like utility companies. What implications could that have for the internet? With the growing criticism surrounding Section 230, will Congress finally attempt to fix this legislation? If not, will the Supreme Court be left to tackle the problem itself?

Is Cyberbullying the Newest Form of Police Brutality?

Police departments across the country are calling keyboard warriors into action to help them solve crimes…but at what cost?

In a survey of 539 police departments in the U.S., 76% said they used their social media accounts to solicit tips on crimes. Departments post “arrested” photos to celebrate arrests and surveillance footage to identify suspects, and some, like the Harford County Sheriff’s Office, even post themed wanted posters.

The process for using social media as an investigative tool is dangerously simple, and the consequences can be brutal. A detective thinks posting on social media might help an investigation, so the department posts a video or picture asking for information. The community, armed with full names, addresses, and other personal information, responds with some tips and a lot of judgmental, threatening, and bigoted comments. Most police departments have no policy for removing posts once the information has been gathered or the case is closed, even if the highlighted person turns out to be innocent. And a majority of people who are arrested are never even convicted of a crime.

Law enforcement’s use of social media in this way threatens the presumption of innocence, creates a culture of public humiliation, and routinely produces comment sections full of bigotry and threats.

On February 26, 2020, the Manhattan Beach Police Department posted a mugshot of Matthew Jacques on its Facebook and Instagram pages for its “Wanted Wednesday” social media series. The pages have roughly 4,500 and 13,600 followers, respectively, most of them local. The post equated Matthew to a fugitive, and commenters responded publicly with information about where he worked. Matthew tried to call off work out of fear of a citizen’s arrest. The fear turned out to be warranted when two strangers came looking for him at his workplace. Matthew eventually lost his job because he was too afraid to return to work.

You may be thinking this is not a big deal. This guy was probably wanted for something really bad and the police needed help. After all, the post said the police had a warrant. Think again.

There was no active warrant for Matthew at the time; his only (already resolved) warrant stemmed from taking too long to schedule remedial classes for a 2017 DUI. Matthew was publicly humiliated by the local police department, and the department even refused to remove the social media posts after being notified of the truth. The result?

Matthew filed a complaint against the department for defamation (as well as libel per se and false light invasion of privacy). Typically, defamation requires the plaintiff to show:

  1. a false statement purporting to be fact;
  2. publication or communication of that statement to a third person;
  3. fault amounting to at least negligence; and
  4. damages, or some harm caused to the person or entity who is the subject of the statement.

Here, the department made a false statement – that there was a warrant. It published that statement on its social media pages, satisfying the second element. It did not check readily available public records that would have shown Matthew had no warrant – fault amounting to at least negligence. Finally, Matthew lived in fear and lost his job. Clearly, he was harmed.

The police department claimed its postings were protected by the California Constitution, governmental immunity, and the First Amendment. Fortunately, the court denied the department’s anti-SLAPP motion. More than a year after the postings went up, the department took them down and settled the lawsuit with Matthew.

Some may think that Matthew’s case is an anomaly and that, usually, the negative attention is warranted, perhaps even socially beneficial, because it further de-incentivizes criminal activity through humiliation and social stigma. However, because most arrests don’t result in convictions, many of the police’s cyberbullying victims are likely innocent. Even for the guilty, leaving these posts up can raise the barrier to societal re-entry, which can increase recidivism rates: a negative digital record makes finding jobs and housing more difficult. Yet many commenters assume the highlighted individual’s guilt and take to their keyboards to shame them.

One example is a post and its comment section from the Toledo Police Department Facebook page.

Unless departments change their social media use policies, they will continue to face defamation lawsuits and continue to further the degradation of the presumption of innocence.

Police departments should discontinue the use of social media in the humiliating ways described above. At the very least, they should consider using this tactic only for violent, felonious crimes. Some departments have already changed their policies.

The San Francisco Police Department has stopped posting mugshots for criminal suspects on social media. According to Criminal Defense Attorney Mark Reichel, “The decision was made in consultation with the San Francisco Public Defender’s Office who argued that the practice of posting mugshots online had the potential to taint criminal trials and follow accused individuals long after any debt to society is paid.” For a discussion of some of the issues social media presents to maintaining a fair trial, see Social Media, Venue and the Right to a Fair Trial.

Do you think police departments should reconsider their social media policies?

The First Amendment Is Still Great For The United States…Or Is It?

In the traditional sense, of course it is. The idea of free speech should always be upheld, without question. When it comes to the 21st century, however, this 230-year-old amendment poses extreme roadblocks. Here, I will discuss how the First Amendment inhibits our ability to tackle extremism and hatred on social media platforms.

One of the things I will be highlighting is how other countries are able to enact legislation to deal with the ever-growing hate that festers on social media. They are able to do so because they do not have a “First Amendment.” The idea of free speech is simply ingrained in those democracies; they do not need an archaic document, to which they are forever bound, to tell them so. Here in the U.S., as we all know, Congress can be woefully slow and inefficient, with a particular emphasis on refusing to update outdated laws.

The First Amendment successfully blocks any government attempt to regulate social media platforms. Any attempt is met by critics, mostly conservatives, yelling about the government wanting to take away free speech, and the courts will not allow the legislation to stand. This in turn means Facebook, Snapchat, Instagram, Reddit, and all the other platforms never have to worry about the white supremacist and other extremist rhetoric that is prevalent on their sites. Even worse, most, if not all, of their algorithms push those vile posts to hundreds of thousands of people. We are “not allowed” to introduce laws that set a baseline for regulating platforms in order to crack down on the terrorism that flourishes there.

Just as you are not allowed to scream fire in a movie theater, it should not be allowed to post and form groups to spread misinformation, white supremacy, racism, and the like. Those topics do not serve the interests of greater society. Yes, regulation would make it harder for people to easily share their thoughts, however appalling they may be. But preventing those ideas from spreading online, where thirty seconds puts them in front of millions of people, is not taking away anyone’s free speech rights. Platforms would not even necessarily have to delete the posts; they could simply change their algorithms to stop promoting misinformation and hate and to promote truth instead, even when the truth is boring. They won’t do that, though, because promoting lies is what makes them money, and it’s always money over the good of the people. Nor does this limit people’s free speech, because they can still form in-person groups, talk in private, start an email chain, and so on. The idea behind regulating what can be posted on social media is to make the world a better place for all: to make it harder for racist ideas and terrorism to spread, especially to young, impressionable children and young adults. This shouldn’t be a political issue; shouldn’t we all want to limit the spread of hate?

It is hard for me to imagine the January 6th insurrection at our Capitol occurring had we had regulations on social media in place. Many of the groups that planned the insurrection ran “Stop the Steal” groups and other election-fraud conspiracy pages on Facebook. Imagine if we had a law requiring social media platforms to take down posts and pages spreading false information that could incite violence or threaten the security of the United States. I realize that is broad discretion; the legislation would have to be worded very narrowly, and decisions to remove posts should be made with the highest level of scrutiny. But had a regulation like that been in place, these groups could not have reached as wide an audience. I think Ashli Babbitt and Officer Sicknick would still be alive had Facebook been obligated to take those pages and posts down.

Alas, we are unable even to consider legislation addressing this problem, because the courts and much of Congress refuse to acknowledge that we must update our laws and rethink how we read the First Amendment. The founders could never have imagined the world we live in today. Congress and the courts need to stop pretending that a piece of paper written more than two centuries ago is some untouchable work from God. The founders wrote the First Amendment to ensure no one would be thrown in jail for speaking their mind, so that people who hold different political views could not be persecuted, and to give people the ability to express themselves. Enacting legislation to keep blatant lies, terrorism, racism, and white supremacy from spreading so easily online does not go against the First Amendment. It does not tell people they cannot hold those views; it does not throw anyone in prison or hand out fines for those views; and white supremacist or other racist ideas are not “political discourse.” Part of the role of government is to protect the people and to do what is right for society as a whole, and I fail to see how that duty is outweighed by the idea that “nearly everything is free speech, even if it poisons the minds of our youth and perpetuates violence, because that’s what the First Amendment says.”

Let’s now look at the United Kingdom and what it is able to do because it has no law comparable to the First Amendment. In May of 2021, the British Parliament introduced the Online Safety Bill. If passed into law, the bill would place a duty of care on social media firms and websites to take swift action to remove illegal content, such as hate crimes, harassment, and threats directed at individuals, including abuse that falls below the criminal threshold. As currently written, the bill would also require social media companies to limit the spread of, and remove, terrorist material, content encouraging suicide, and child sexual abuse material, and it would mandate that companies report such postings to the authorities. Lastly, the Online Safety Bill would require companies to safeguard freedom of expression and reinstate material unfairly removed, including forbidding tech firms from discriminating against particular political viewpoints. The bill reserves the right for Ofcom (the UK’s communications regulator) to hold companies accountable for the arbitrary removal of journalistic content.

The penalties for not complying with the proposed law would be significant. Social media companies that do not comply could be fined up to £18 million (roughly $25 million) or 10% of their annual global turnover, whichever is higher. Further, the bill would allow Ofcom to bring criminal actions against named senior managers whose companies do not comply with Ofcom’s requests for information.
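As a back-of-the-envelope illustration of that “whichever is higher” penalty structure, here is a tiny Python sketch. The figures mirror the ones discussed above from the draft bill; the final law’s numbers and definitions may differ, so treat this as an assumption-laden toy rather than legal arithmetic.

```python
# Toy illustration of the Online Safety Bill's penalty cap as described
# above: the greater of a flat amount or 10% of annual global turnover.
# Both figures are assumptions taken from the draft bill discussed here.
FLAT_CAP_GBP = 18_000_000  # £18 million (roughly $25 million)
TURNOVER_RATE = 0.10       # 10% of annual global turnover

def max_fine_gbp(annual_global_turnover_gbp: float) -> float:
    return max(FLAT_CAP_GBP, TURNOVER_RATE * annual_global_turnover_gbp)

# A small firm with £50m turnover hits the flat floor; a giant with
# £2bn turnover faces a far larger cap.
print(f"£{max_fine_gbp(50_000_000):,.0f}")     # £18,000,000
print(f"£{max_fine_gbp(2_000_000_000):,.0f}")  # £200,000,000
```

The design point is that a flat fine alone would be a rounding error for the largest platforms, so the turnover-based prong is what gives the bill teeth against Big Tech.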

It will be interesting to see how implementation goes if the bill is passed. I believe it is a good stepping stone toward reining in the willful ignorance displayed by these companies. Again, it is important that these bills be carefully scrutinized; otherwise you may end up with a bill like the one proposed in India. While I will not discuss that bill at length in this post, you can read more about it here. In short, India’s bill is widely seen as autocratic in nature, giving the government the ability to fine or criminally prosecute social media companies and their employees if they fail to remove content that the government does not like (for instance, posts criticizing its new agriculture regulations).

Bringing this ship back home: can you imagine a bill like Britain’s ever passing in the U.S., let alone being introduced? I certainly can’t, because we still insist on worshiping an amendment that is 230 years old. The founders wrote the amendment for the circumstances of their time; they could never have imagined what today would look like. Ultimately, the decision to let us move forward and adopt our own laws regulating social media companies rests with the Supreme Court. Until the Supreme Court wakes up and allows a modern reading of the First Amendment, any law to hold companies accountable is doomed to fail. It is illogical to put a piece of paper over the safety and well-being of Americans, yet we consistently do just that. We will keep seeing reports of how red flags were missed and people were murdered as a result, or of how Facebook pages helped spread another “Big Lie” that ends with another Capitol besieged. All because we cannot move away from our past to brighten our future.


What would you do to help curtail this social dilemma?
