Privacy Please: Privacy Law, Social Media Regulation and the Evolving Privacy Landscape in the US

Social media regulation is a touchy subject in the United States. Congress and the White House have proposed, advocated for, and voted on various bills aimed at protecting people from data misuse and misappropriation, misinformation, harms suffered by children, and the implications of vast data collection. Some of the most potent concerns about social media stem from the platforms’ use and misuse of information: the method of collection, notice of collection, and use of the collected information. Efforts to pass a bill regulating social media have been frustrated, primarily by the First Amendment right to free speech, and Congress has thus far failed to enact meaningful regulation of social media platforms.

The way forward may well be through privacy law. Privacy laws give people some right to control their own personhood, including their data, their right to be left alone, and how and when others see and view them. Privacy law originated in its current form in the late 1800s, with the impetus being freedom from constant surveillance by paparazzi and reporters and the right to control one’s own personal information. As technology evolved, our understanding of privacy rights grew to encompass rights in our likeness, our reputation, and our data. Current US privacy laws do not directly address social media, and a struggle is currently playing out among the platforms’ vast data collection practices, the platforms’ immunity under Section 230, and users’ private rights of privacy.

There is very little Federal privacy law, and that which does exist is narrowly tailored to specific purposes and circumstances in the form of individual statutes. Some states have enacted their own privacy law schemes, with California at the forefront and Virginia, Colorado, Connecticut, and Utah following in its footsteps. In the absence of a comprehensive Federal scheme, privacy law is often judge-made, and it offers several private rights of action for a person whose right to be left alone has been invaded in some way. These are tort actions available for one person to bring against another for a violation of their right to privacy.

Privacy Law Introduction

Privacy law policy in the United States is premised on three fundamental personal rights to privacy:

  1. Physical right to privacy – the right to control your own information.
  2. Privacy of decisions – such as decisions about sexuality, health, and child-rearing. These are the constitutional rights to privacy, typically concerning not information but an act that flows from the decision.
  3. Proprietary privacy – the ability to protect your information from being misused by others in a proprietary sense.

Privacy Torts

Privacy law, as it concerns the individual, gives rise to four separate tort causes of action for invasion of privacy:

  1. Intrusion upon Seclusion – one intentionally intrudes, physically or otherwise, upon the reasonable expectation of seclusion of another, and the intrusion is objectively highly offensive.
  2. Publication of Private Facts – one gives publicity to a matter concerning the private life of another that is not of legitimate concern to the public, and the matter publicized would be objectively highly offensive. The First Amendment provides a strong defense for publication of truthful matters when they are considered newsworthy.
  3. False Light – one gives publicity to a matter concerning another that places the other before the public in a false light, where the false light would be objectively highly offensive and the actor had knowledge of, or acted in reckless disregard as to, the falsity of the publicized matter and the false light in which the other would be placed.
  4. Appropriation of Name or Likeness – one appropriates another’s name or likeness for the defendant’s own use or benefit. There is no appropriation when a person’s picture is used to illustrate a non-commercial, newsworthy article. The appropriation is usually commercial in nature but need not be; it can reach one’s broader “identity,” including the reputation, prestige, social or commercial standing, public interest, or other value attached to the plaintiff’s likeness.

These private rights of action are currently unavailable against social media platforms because of Section 230 of the Communications Decency Act, which provides broad immunity to online providers for content posted on their platforms by third parties. Section 230 prevents any of the privacy torts from being raised against the platforms themselves.

The Federal Trade Commission (FTC) and Social Media

Privacy law can implicate social media platforms when their practices become unfair or deceptive to the public, through investigation by the Federal Trade Commission (FTC). The FTC is the only federal agency with both consumer protection and competition jurisdiction across broad sectors of the economy, and it investigates business practices that are unfair or deceptive. Section 5 of the FTC Act, 15 U.S.C. § 45, prohibits “unfair or deceptive acts or practices in or affecting commerce” and grants the FTC broad jurisdiction over the privacy practices of businesses. A trade practice is unfair if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers themselves and is not outweighed by countervailing benefits to consumers or competition. A deceptive act or practice is a material representation, omission, or practice that is likely to mislead a consumer acting reasonably in the circumstances, to the consumer’s detriment.

Critically, there is no private right of action under the FTC Act. The FTC cannot impose fines for initial Section 5 violations but can seek injunctive relief. By design, the FTC has very limited rulemaking authority and looks to consent decrees and procedural, long-lasting relief as the ideal remedy. The FTC pursues several types of misleading or deceptive policies and practices that implicate social media platforms: notice and choice paradigms, broken promises, retroactive policy changes, inadequate notice, and inadequate security measures. Its primary objective is to negotiate a settlement in which the company submits to certain measures of control and oversight by the FTC for a set period of time. Violations of those agreements can yield additional consequences, including steep fines and vulnerability to class action lawsuits.

With respect to social media platforms, the FTC has investigated misleading terms and conditions and violations of platforms’ own policies. In In re Snapchat, the platform claimed that users’ posted information disappeared completely after a certain period of time; in reality, through third-party apps and manipulation of users’ posts off the platform, posts could be retained. The FTC and Snapchat settled through a consent decree subjecting Snapchat to FTC oversight for 20 years.

The FTC has also investigated Facebook for violating its own privacy policy. To settle FTC charges that it violated a 2012 agreement with the agency, Facebook was ordered to pay a $5 billion penalty and to submit to new restrictions and a modified corporate structure that holds the company accountable for the decisions it makes about its users’ privacy.

Unfortunately, none of these measures directly gives individuals more power over their own privacy. Nor do these policies and processes give individuals any right to hold platforms responsible when algorithms use their data to mislead them, or when platforms intrude on their privacy by collecting data without offering an opt-out.

Some of the most harmful social media practices today relate to personal privacy. Examples include the collection of personal data; the selling and dissemination of that data through algorithms designed to subtly manipulate our pocketbooks and tastes; the collection and use of data belonging to children; and the design of social media sites to be ever more addictive, all in service of the commercialization of data.

No comprehensive Federal privacy scheme exists. Previous privacy bills have been few and narrowly tailored to relatively specific circumstances and topics: healthcare and medical data protection under HIPAA, protection of video rental data under the Video Privacy Protection Act, and narrow protection for children’s data under the Children’s Online Privacy Protection Act (COPPA). All of these schemes are outdated and fall short of meeting the immediate need for broad protection of the data that social media platforms so widely collect and utilize.

Current Bills on Privacy

Prompted by requests from some of the biggest platforms, outcry from the public, and the White House’s call for Federal privacy regulation, Congress appears poised to act. The 118th Congress has made privacy law a priority this term by introducing several bills related to social media privacy. At least ten bills are currently pending between the House and the Senate, addressing issues ranging from children’s data privacy to a minimum age for use and the designation of a new agency to monitor certain aspects of privacy.

S. 744 – The Data Care Act of 2023 aims to protect social media users’ data privacy by imposing fiduciary duties on the platforms. The original iteration of the bill was introduced in 2021 and failed to receive a vote; it was re-introduced in March of 2023 and is currently pending. Under the act, social media platforms would have duties to reasonably secure users’ data from unauthorized access, to refrain from using the data in a way that could foreseeably “benefit the online service provider to the detriment of the end user,” and to prevent disclosure of users’ data unless the receiving party is bound by the same duties. The bill authorizes the FTC and certain state officials to take enforcement action upon breach of those duties: states would be permitted to take their own legal action against companies for privacy violations, and the FTC could intervene in those enforcement efforts by imposing fines for violations.

H.R. 2701 – Perhaps the most comprehensive piece of legislation on the House floor is the Online Privacy Act. In 2023, the bill was reintroduced by Democrat Anna Eshoo after an earlier version of the bill failed to receive a vote and died in Congress. The Online Privacy Act aims to protect users by granting individuals rights relating to the privacy of their personal information, and it would impose privacy and security requirements for the treatment of personal information. To accomplish this, the bill would establish a new agency, the Digital Privacy Agency, responsible for enforcing those rights and requirements. The new individual privacy rights are broad and include the rights of access, correction, deletion, human review of automated decisions, individual autonomy, the right to be informed, and the right to impermanence, among others. This would be the most comprehensive plan to date, and the establishment of a new agency tasked specifically with administering and enforcing privacy law would be incredibly powerful. The creation of such an agency would be valuable irrespective of whether this bill is passed.

H.R. 821 – The Social Media Child Protection Act is a sister bill to one by a similar name that originated in the Senate. This bill aims to protect children from the harms of social media by limiting children’s access to it. Under the bill, social media platforms would be required to verify the age of every user before granting access to the platform, whether through submission of a valid identity document or another reasonable verification method, and would be prohibited from allowing users under the age of 16 to access the platform. The bill also requires platforms to establish and maintain reasonable procedures to protect the personal data collected from users. It provides for a private right of action as well as state and FTC enforcement.

S. 1291 – The Protecting Kids on Social Media Act is similar to its counterpart in the House, with slightly less tenacity. It likewise aims to protect children from social media’s harms. Under the bill, platforms must verify their users’ ages, may not allow a user onto the service until that verification is complete, and must limit access to the platform for children under 13. The bill also prohibits the retention and use of information collected during the age verification process. Platforms must take reasonable steps to require affirmative consent from the parent or guardian of a minor who is at least 13 years old before a minor account is created, and must reasonably allow the parent to later revoke that consent. The bill further prohibits the use of data collected from minors for algorithmic recommendations. Finally, it would require the Department of Commerce to establish a voluntary program for secure digital age verification for social media platforms. Enforcement would be through the FTC or state action.

S. 1409 – The Kids Online Safety Act, proposed by Senator Blumenthal of Connecticut, also aims to protect minors from online harms. Like the bills above, it establishes fiduciary duties for social media platforms regarding the children using their sites. The bill requires that platforms act in the best interests of minors using their services, including by mitigating harms that may arise from use, sweeping in online bullying and sexual exploitation. Social media sites would be required to establish, and provide access to, safeguards such as settings that restrict access to minors’ personal data, and to grant parents the tools to supervise and monitor minors’ use of the platforms. Critically, the bill establishes a duty for social media platforms to create and maintain research portals for non-commercial purposes, so that the effects that corporations like the platforms have on society can be studied.

Overall, these bills reflect Congress’s creative thinking and commitment to broad privacy protection for users against social media harms. I believe the establishment of a separate governing body, apart from the FTC, which lacks the powers needed to compel compliance, is a necessary step. Recourse for violations on par with the EU’s new regulatory scheme, mainly fines in the billions, could also help.

Many of the bills, toward myriad aims, would establish new fiduciary duties for the platforms in preventing unauthorized data use and harms to children. There is real promise in this scheme: establishing duties of loyalty, diligence, and care owed by one party to another has a sound basis in many areas of law and would be readily understood in implementation.

The notion that platforms would need to be vigilant in knowing their content, studying its effects, and reporting those effects may do the most to create a stable future for social media.

Making platforms legally responsible for policing and enforcing their own policies and terms and conditions is another opportunity to incentivize them further. The FTC currently investigates policies that are misleading or unfair, sweeping in the social media sites, but there is an opportunity to make the platforms legally responsible for enforcing their own policies regarding, for example, age, hate speech, and inappropriate content.

What would you like to see considered in Privacy law innovation for social media regulation?

Can Social Media Be Regulated?

In 1996, Congress passed what is known as Section 230 of the Communications Decency Act (CDA), which provides immunity to website publishers for third-party content posted on their websites. The CDA holds that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The Act was created in a different time and era, one that could hardly envision how fast the internet would grow in the coming years. In 1996, social media consisted of a little-known website called Bolt; the idea of a global World Wide Web was still very much in its infancy; the internet was still largely based on dial-up technology; and the government was looking to expand the internet’s reach. This Act laid the foundation for the explosion of social media, e-commerce, and a society that has grown tethered to the internet.

The advent of smartphones in the late 2000s, coupled with the CDA, set the stage for a society that is constantly tethered to the internet and has allowed companies like Facebook, Twitter, YouTube, and Amazon to carve out niches within our now globally integrated society. Facebook alone averaged over 1.9 billion daily users in the second quarter of 2021.

Recent studies conducted by the Pew Research Center show that “[m]ore than eight in ten Americans get news from digital devices.”

(Chart: Large majority of Americans get news on digital devices)

While older members of society still get their news from online news websites, the younger generation, namely those 18-29 years of age, receive theirs via social media.

(Chart: Online, most turn to news websites except for the youngest, who are more likely to use social media)

The role social media plays in the lives of the younger generation needs to be recognized. Social media has grown at a far greater rate than anyone could have imagined, and it currently operates however it pleases, completely free of government interference, owing to its classification as a private entity and its protection under Section 230.

Throughout the 20th century when Television News Media dominated the scenes, laws were put into effect to ensure that television and radio broadcasters would be monitored by both the courts and government regulatory commissions. For example, “[t]o maintain a license, stations are required to meet a number of criteria. The equal-time rule, for instance, states that registered candidates running for office must be given equal opportunities for airtime and advertisements at non-cable television and radio stations beginning forty-five days before a primary election and sixty days before a general election.”

These laws and regulations were put in place to ensure that the public interest in broadcasting was protected. To give substance to the public interest standard, Congress has from time to time enacted requirements for what constitutes the public interest in broadcasting. But Congress also gave the FCC broad discretion to formulate and revise the meaning of broadcasters’ public interest obligations as circumstances changed.

The Federal Communications Commission’s (FCC) authority is constrained by the First Amendment; the agency acts as an intermediary that can intervene to correct perceived inadequacies in overall industry performance, but it cannot trample on the broad editorial discretion of licensees. The Supreme Court has continuously upheld the public trustee model of broadcast regulation as constitutional. Criticisms of regulating social media center on the notion that platforms are purely private entities that do not fall under the purview of the government, and yet these same issues presented themselves in the precedent-setting case of Red Lion Broadcasting Co. v. Federal Communications Commission (1969). In that case, the Court held that the “rights of the listeners to information should prevail over those of the broadcasters.” The Court’s holding centered on the public’s right to information over a broadcast company’s right to choose what it will share, and this is exactly what is at issue today when we look at companies such as Facebook, Twitter, and Snapchat censoring political figures who post views that the platforms feel may incite anger or violence.

In essence, what these organizations are doing is keeping information and views from the attention of the present-day viewer. The vessel for the information has changed; it is no longer television or radio but primarily social media. Currently, television and broadcast media are restricted by Section 315(a) of the Communications Act and Section 73.1941 of the Commission’s rules, which “require that if a station allows a legally qualified candidate for any public office to use its facilities (i.e., make a positive identifiable appearance on the air for at least four seconds), it must give equal opportunities to all other candidates for that office to also use the station.” No such restriction exists for social media organizations.

This is not meant to argue for one side or the other, but merely to point out that political discourse is being stifled by these social media entities, which have shrouded themselves in the veil of the private entity. What these companies fail to mention, however, is just how political they truly are. For instance, Facebook proclaims itself to be an unbiased source for all parties, yet it employs one of the largest lobbying groups in Washington, D.C. Four Facebook lobbyists have worked directly in the office of House Speaker Pelosi, and Pelosi herself has a very direct connection to Facebook: she and her husband own between $550,000 and over $1,000,000 in Facebook stock. None of this is illegal, but it raises the question of just how unbiased Facebook really is.

If the largest source of news for the coming generation is not television, radio, or news publications themselves, but rather social media platforms such as Facebook, then how much power should they be allowed to wield without some form of regulation? The question being presented here is not a new one, but rather the same question asked in 1969, simply phrased differently: how much information is a citizen entitled to, and at what point does access to that information outweigh the rights of the organization to exercise its editorial discretion? I believe the answer to that question is the same now as it was in 1969, and that the government ought to take steps similar to those taken with radio and television. That means ensuring that, through social media, the public has access to a significant amount of information on public issues so that its members can make rational political decisions. At the end of the day, that is what is at stake: the public’s ability to make rational political decisions.

Large social media conglomerates such as Facebook and Twitter have long outgrown their place as private entities; they have grown into a public medium tethered to the realities of billions of people. Certain aspects of that medium, mainly those that interfere with the public interest, need to be regulated, and there are ways to do so without interfering with the First Amendment right of free speech for all Americans. Where social media blends being a private forum for all people to express their ideas under firmly stated “terms and conditions” with being an entity that strays into the political field, whether by censoring heads of state or by hiring over $50,000,000 worth of lobbyists in Washington, D.C., regulations need to be put in place that draw a line ensuring the public maintains the ability to make rational political decisions, decisions that are not unduly influenced by any one organization. The time to address this issue is now, while there is still a middle ground in how people receive their news and formulate opinions.

Is Judges’ Safety at Risk? The Increase in Personal Threats Prompts the Introduction of the Daniel Anderl Judicial Security and Privacy Act

When judges render legal decisions, they hardly anticipate that their commitment to serving the public could make them or their families targets for violence. Rather than pursue the appeals process when an unfavorable ruling is reached, disgruntled litigants are threatening and even attacking presiding judges and their families, placing them in fear for their lives.

Earlier this month, the federal judiciary threw its support behind legislation that aims to safeguard the personal information of judges and their immediate family members within federal databases and to restrict data aggregators from reselling that information. The Administrative Office of the U.S. Courts announced its support for the Daniel Anderl Judicial Security and Privacy Act of 2021, named for the late son of Judge Esther Salas of the U.S. District Court for the District of New Jersey.

The bill comes in response to the tragedy that occurred on July 19, 2020, when an angered attorney disguised as a FedEx delivery driver showed up at the Salas family’s home and opened fire. In attempting to assassinate Judge Salas, the gunman shot and killed her 20-year-old son, Daniel, and wounded her husband, attorney Mark A. Anderl. A day after the attack, the gunman, Roy Den Hollander, was found dead from a self-inflicted gunshot wound.

The Manhattan attorney and self-proclaimed “anti-feminist” had appeared in Salas’ courtroom months prior to the attack. According to the FBI, Hollander had detailed information on Salas and her family, in addition to several other targets on his radar. An autobiography published to Hollander’s personal website revealed anti-feminist ideology and his extreme displeasure with Salas, including the following passages:

  • “If she ruled draft registration unconstitutional, the Feminists who believed females deserved preferential treatment would criticize her. If she ruled that it did not violate the Constitution, then those Feminists who advocate for equal treatment would criticize her. Either way it was lose-lose for Salas unless someone took the risk of leading the way”
  • “Female judges didn’t bother me as long as they were middle age or older black ladies…Latinas, however, were usually a problem — driven by an inferiority complex.”
  • In another passage, he wrote that Salas was a “lazy and incompetent Latina judge appointed by Obama.”
  • He criticized Salas’ resume, writing that “affirmative action got her into and through college and law school,” and that her one accomplishment was “high school cheerleader.”

(https://www.goodmorningamerica.com/news/story/suspect-deadly-shooting-called-federal-judge-esther-salas-71901734)

In a news video two weeks after the incident, Salas shared that “unfortunately, for my family, the threat was real, and the free flow of information from the internet allowed this sick and depraved human being to find all our personal information and target us. In my case, the monster knew where I lived and what church we attended and had a complete dossier on me and my family.” Since her son’s killing, Judge Salas has been personally advocating for stronger protections to ensure that judges are able to render decisions without fear of reprisal or retribution, not only for safety purposes, but because our democracy depends on an independent judiciary.

***

Sadly, Judge Salas is not alone in the terrible misfortune that occurred last year. Judges are regularly threatened and harassed, especially after high-profile legal battles that attract increased media attention; such threats have increased 400% over the past five years. Four federal judges have been murdered since 1979. District Judge John Wood was assassinated outside his home in 1979 by hitman Charles Harrelson. In 1988, U.S. District Judge Richard Daronco was shot and killed in the front yard of his Pelham, New York, home. In 1989, Circuit Judge Robert Vance was killed when he opened a mail bomb sent to his home. District Judge John Roll was shot in the back and killed in 2011 at an event for Congresswoman Gabrielle Giffords, who was also shot and injured. (https://www.abajournal.com/news/article/federal-judiciary-supports-legislation-to-prevent-access-to-judges-information)

Thankfully, not all threats result in successful or fatal attacks, but intimidation tactics and inappropriate communications directed at federal judges and other court personnel have quadrupled since 2015.

U.S. District Judge Julie Kocurek was shot in front of her family in 2015. She miraculously survived but sustained severe injuries and underwent dozens of surgeries. The attempted assassin was a plaintiff before her court and had been tracking the judge’s whereabouts. Former Texas Federal Judge Liz Lang Miers attributes such attacks to individuals misperceiving a ruling and acting irrationally, “as opposed to understanding the justice system.”

In 2017, Seattle federal Judge James Robart received more than 42,000 letters, emails and calls, including more than 100 death threats, after he temporarily blocked President Donald Trump’s travel ban that barred people from Iran, Iraq, Libya, Somalia, Sudan, Syria and Yemen from entering the U.S. for 90 days. (https://www.nbcnews.com/news/us-news/attack-judge-salas-family-highlights-concerns-over-judicial-safety-n1234476)

The Internet, notably social media, has amplified citizens’ criticisms of the judicial system. Rather than listening to and comprehending the entirety of a court ruling, an individual can fire off a tweet or post at the click of a button, spreading inaccurate information worldwide. Before long, hundreds of thousands of people have seen that communication and are quick to draw conclusions despite not understanding the merits of the legal opinion. Misinformation, meaning misleading information or arguments often aimed at influencing a subset of the public, spreads rapidly. Data indicates that articles containing misinformation are among the most viral content, with “falsehoods diffusing significantly farther, faster, deeper, and more broadly than the truth in all categories of information.” (https://voxeu.org/article/misinformation-social-media)

***

Since 1789, federal judges have been entitled to home and court security systems and protection by the U.S. Marshals Service; nevertheless, the threats and attacks persist.

Because judges are public servants, their information is made publicly available and is easily accessible through a simple Google search. The Daniel Anderl Judicial Security and Privacy Act would shield the information of federal judges and their families, including home addresses, Social Security numbers, contact information, tax records, marital and birth records, vehicle information, photos of their vehicles and homes, and the names of the schools and employers of immediate family members.

Many officials are on board with the proposed legislation. Senator Menendez, who recommended Judge Salas to President Barack Obama for appointment to the federal bench, said that “the threats against our federal judiciary are real and they are on the rise. We must give the U.S. Marshals and other agencies charged with guarding our courts the resources and tools they need to protect our judges and their families. I made a personal commitment to Judge Salas that I would put forth legislation to better protect the men and women who sit on our federal judiciary, to ensure their independence in the face of increased personal threats on judges and help prevent this unthinkable tragedy from ever happening again to anyone else.” Moreover, Rep. Fitzpatrick noted that, “in order to bolster our ability to protect our federal judges and their families, we need to safeguard the personally identifiable information of our judges and optimize our nation’s personal data sharing and privacy practices.”

Additionally, the bill is supported by the New Jersey State Bar Association, National Association of Attorneys General, Judicial Conference of the United States, Federal Magistrate Judges Association, American Bar Association (ABA), Dominican Bar Association, New York Intellectual Property Law Association, Federal Bar Council, Hispanic National Bar Association (HNBA), and Federal Judges Association.

***

In memory of Daniel Anderl, taken too soon at 20-years-young. As the only child of U.S. District Court Judge Esther Salas and defense attorney Mark Anderl, Daniel gave his life to save his parents. He was a student at Catholic University in Washington, DC. There is a plaque honoring Daniel at the entrance of the Columbus School of Law at Catholic University, as he planned to pursue a career in law. The plaque is also to serve as a reminder to young people that

How Defamation and Minor Protection Laws Ultimately Shaped the Internet


The Communications Decency Act (CDA) was originally enacted to shield minors from indecent and obscene online material. Despite those origins, Section 230 of the CDA is now commonly invoked as a broad legal safeguard that shields social media platforms from liability for content posted on their sites by third parties. The reasoning behind this safeguard draws on both defamation common law and constitutional free speech principles. As the internet has grown, however, the provision has drawn increasing criticism. But is the legislation actually undesirable? Many would say no: Section 230 contains “the 26 words that created the internet.”

 

Origin of the Communications Decency Act

The CDA was introduced and enacted as an attempt to shield minors from obscene or indecent content online. Although parts of the Act were later struck down as First Amendment free speech violations, the Court left Section 230 intact. The creation of Section 230 was influenced by two landmark defamation decisions.

The first case, decided in 1991 (Cubby, Inc. v. CompuServe, Inc.), involved an online service that hosted around 150 forums. A claim was brought against the service provider when a columnist on one of the forums posted a defamatory comment about a competitor, and the competitor sued over the published defamation. The court categorized the internet service provider as a distributor because it did not review the content of the forums before it was posted to the site. As a distributor, the provider faced no legal liability, and the case was dismissed.

 

Distributor Liability

Distributor liability refers to the limited legal consequences a distributor faces for defamation. A common example of a distributor is a bookstore or library. The theory behind distributor liability is that it would be impossible for distributors to moderate and censor every piece of content they disperse, given the sheer volume and the impossibility of knowing whether a given statement is false.

The second case that influenced the creation of Section 230 was Stratton Oakmont, Inc. v. Prodigy Servs. Co., in which the court used publisher liability theory to find the internet provider liable for third-party defamatory postings published on its site. The court deemed the website a publisher because it moderated and deleted certain posts, despite the fact that there were far too many postings a day to regulate each one.

 

Publisher Liability

Under common law principles, a person who publishes a third party’s defamatory statement bears the same legal responsibility as the creator of that statement. This liability, often referred to as “publisher liability,” rests on the theory that a publisher has the knowledge, opportunity, and ability to exercise control over the publication. For example, a newspaper publisher can face legal consequences for the content printed within its pages. The Stratton Oakmont decision was significant because it meant that if a website attempted to moderate any posts, it could be held liable for all posts.

 

Section 230’s Creation

In response to the Stratton Oakmont case, and the inconsistent court decisions regarding internet service providers’ liability, members of Congress introduced an amendment to the CDA that later became Section 230. The amendment was introduced and passed with the goal of encouraging the development of unregulated free speech online by relieving internet providers of liability for third-party content.

 

Text of the Act: Subsection (c)(1)

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

 Section 230 further provides that…

“No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.”

The language above removes legal consequences for platforms arising from content posted on their forums. Courts have interpreted this subsection as providing broad immunity to online platforms from suits over third-party content. Because of this, Section 230 has become the principal legal safeguard against lawsuits over a site’s content.

 

The Good

  •  Section 230 can be viewed as one of the most important pieces of legislation protecting free speech online. One of its unique aspects is that it effectively extends free speech protection to private, non-governmental companies.
  • Without CDA 230, the internet would be a very different place. This section influenced some of the internet’s most distinctive characteristics: the promotion of free speech and the ability to connect worldwide.
  • CDA 230 does not fully eliminate liability or court remedies for victims of online defamation. Rather, it makes only the creator liable for their speech, instead of both the speaker and the publisher.

 

 

The Bad

  •  Because of the legal protections Section 230 provides, social media networks have less incentive to regulate false or deceptive posts. Deceptive online posts can have an enormous impact on society: false posts have the ability to alter election results or fuel dangerous misinformation campaigns, like the QAnon conspiracy theory and the anti-vaccination movement.
  • Section 230 is twenty-five years old and has not been updated to match the internet’s extensive growth.
  • Big Tech companies have been left largely unregulated regarding their online marketplaces.

 

 The Future of 230

While Section 230 is still successfully invoked by social media platforms, concerns over the aging legislation have mounted. Just recently, Justice Thomas, known for being a quiet Justice, wrote a concurring opinion articulating his view that the government should regulate content providers as common carriers, like utility companies. What implications could that have for the internet? With the growing criticism surrounding Section 230, will Congress finally attempt to fix this legislation? If not, will the Supreme Court be left to tackle the problem itself?

The First Amendment Is Still Great For The United States…Or Is It?

In the traditional sense, of course it is. The idea of free speech should always be upheld, without question. When it comes to the 21st century, however, this amendment, now more than two centuries old, poses serious roadblocks. Here, I will discuss how the First Amendment inhibits our ability to tackle extremism and hatred on social media platforms.

One of the things I will highlight is how other countries are able to enact legislation to deal with the ever-growing hate that festers on social media. They are able to do so because they do not have a “First Amendment.” The idea of free speech is simply ingrained in those democracies; they do not need an archaic document binding them forever to tell them that. Here in the U.S., as we all know, Congress can be woefully slow and inefficient, with a particular reluctance to update outdated laws.

The First Amendment successfully blocks any government attempt to regulate social media platforms. Any such attempt is met, mostly by conservatives, with cries that the government wants to take away free speech, and the courts will not allow the legislation to stand. This in turn means Facebook, Snapchat, Instagram, Reddit, and all the other platforms never have to worry about the white supremacist and other extremist rhetoric that is prevalent on their platforms. Even worse, most if not all of their algorithms push those vile posts to hundreds of thousands of people.

We are “not allowed” to introduce laws that set a baseline for regulating platforms in order to crack down on the terrorism that flourishes there. Just as you are not allowed to scream fire in a movie theater, it should not be permissible to post and form groups to spread misinformation, white supremacy, racism, and the like. Those topics do not serve the interests of greater society. Yes, it would make it a lot harder for people to easily share their thoughts, no matter how appalling they may be. But preventing those ideas from spreading online, where millions of people can see them within 30 seconds, is not taking away anyone’s free speech rights. Platforms would not even necessarily have to delete the posts; they could simply change their algorithms to stop promoting misinformation and hate and promote truth instead, even if the truth is boring. They won’t do that, though, because promoting lies is what makes them money, and it is always money over the good of the people.

Another reason this does not limit free speech is that people can still form in-person groups, talk privately, start an email chain, and so on. The idea behind regulating what can be posted on social media is to make the world a better place for all: to make it harder for racist ideas and terrorism to spread, especially to young, impressionable children and young adults.
This shouldn’t be a political issue; shouldn’t we all want to limit the spread of hate?

It is hard for me to imagine the January 6th insurrection at our Capitol occurring had we had regulations on social media in place. Many of the groups that planned the insurrection had “Stop the Steal” groups and other election-fraud conspiracy pages on Facebook. Imagine if we had a law requiring social media platforms to take down posts and pages spreading false information that could incite violence or threaten the security of the United States. I realize that is broad discretion; the legislation would have to be worded very narrowly, and decisions to remove posts should be made with the highest level of scrutiny. Had such a regulation been in place, these groups would not have been able to reach as wide an audience. I think Ashli Babbitt and Officer Sicknick would still be alive had Facebook been obligated to take those pages and posts down.

Alas, we are unable even to consider legislation addressing this cause, because the courts and many members of Congress refuse to acknowledge that we must update our laws and rethink how we read the First Amendment. The founders could never have imagined the world we live in today. Congress and the courts need to stop pretending that a document written more than two centuries ago is some untouchable work from God. The founders wrote the First Amendment to ensure no one would be thrown in jail for speaking their mind, so that people who hold different political views could not be persecuted, and to give people the ability to express themselves. Enacting legislation to prevent blatant lies, terrorism, racism, and white supremacy from spreading so easily online does not go against the First Amendment. It is not telling people they can’t hold those views; it is not throwing anyone in prison or handing out fines for those views, and white supremacist or other racist ideas are not “political discourse.” Part of the role of government is to protect the people, to do what is right for society as a whole, and I fail to see how telling social media platforms they need to take down these appalling posts is outweighed by the idea that “nearly everything is free speech, even if it poisons the minds of our youth and perpetuates violence, because that’s what the First Amendment says.”

Let’s now look at the United Kingdom and what it is able to do because it has no law comparable to the First Amendment. In May of 2021, the British Parliament introduced the Online Safety Bill. If passed into law, the bill would place a duty of care on social media firms and websites to ensure they take swift action to remove illegal content, such as hate crimes, harassment, and threats directed at individuals, including abuse that falls below the criminal threshold. As currently written, the bill would also require social media companies to limit the spread of and remove terrorist material, content promoting suicide, and child sexual abuse material, and it would mandate that companies report such postings to the authorities. Lastly, the Online Safety Bill would require companies to safeguard freedom of expression and reinstate material unfairly removed, including forbidding tech firms from discriminating against particular political viewpoints. The bill reserves the right for Ofcom (the UK’s communications regulator) to hold companies accountable for the arbitrary removal of journalistic content.

The penalties for not complying with the proposed law would be significant. Social media companies that do not comply could be fined up to 10% of their net profits or $25 million. Further, the bill would allow Ofcom to bring criminal actions against named senior managers whose companies fail to comply with Ofcom’s requests for information.

It will be interesting to see how implementation goes if the bill is passed. I believe it is a good stepping stone toward reining in the willful ignorance displayed by these companies. Again, it is important that these bills be carefully scrutinized; otherwise you may end up with a bill like the one proposed in India. While I will not discuss that bill at length in this post, you can read more about it here. In short, India’s bill is widely seen as autocratic in nature, giving the government the ability to fine or criminally prosecute social media companies and their employees if they fail to remove content that the government does not like (for instance, posts criticizing its new agriculture regulations).

Bringing this ship back home: can you imagine a bill like Britain’s ever passing in the US, let alone being introduced? I certainly can’t, because we still insist on worshiping an amendment that is 230 years old. The founders wrote it based on the circumstances of their time; they could never have imagined what today would look like. Ultimately, the decision to let us move forward and adopt our own laws regulating social media companies rests with the Supreme Court. Until the Supreme Court wakes up and allows a modern reading of the First Amendment, any law to hold companies accountable is doomed to fail. It is illogical to put a piece of paper above the safety and well-being of Americans, yet we consistently do just that. We will keep seeing reports of how red flags were missed and people were murdered as a result, or of how Facebook pages helped spread another “Big Lie” that leads to another Capitol siege. All because we cannot move away from our past to brighten our future.

 

What would you do to help curtail this social dilemma?

Has Social Media Become the Most Addictive Drug We Have Ever Seen?

Before we get started, I want you to take a few minutes and answer the following questions to yourself:

  1. Do you spend a lot of time thinking about social media or planning to use social media?
  2. Do you feel urges to use social media more and more?
  3. Do you use social media to forget about personal problems?
  4. Do you often try to reduce the use of social media without success?
  5. Do you become restless or troubled if unable to use social media?
  6. Do you use social media so much that it has had a negative impact on your job or studies?

How did you answer these questions?  If you answered yes to more than three of them, then according to the Addiction Center you may have, or be developing, a social media addiction.  Research has shown an undeniable link between social media use, negative mental health, and low self-esteem.  Negative emotional reactions are produced not only by the social pressure of sharing things with others but also by the comparison of material things and lifestyles that these sites promote.
On Instagram and Facebook, users see curated content: advertisements and posts specifically designed to appeal to them based on their interests.  Unlike at any other time in history, individuals today can see how other people live and how those lifestyles differ significantly from their own.  That sense of self-worth is what is being exploited to curate information. Children are being taught at a young age that if you are not a millionaire you are not successful, and they are building barometers of success based on invisible benchmarks. This is contributing to an increase in suicide and depression among young adults.

Social media has become a stimulant whose effects mimic those of gambling and recreational drugs.  Retweets, likes, and shares from these sites have been shown to activate the dopamine-driven reward pathways of the brain. “[I]t’s estimated that people talk about themselves around 30 to 40% of the time; however, social media is all about showing off one’s life and accomplishments, so people talk about themselves a staggering 80% of the time. When a person posts a picture and gets positive social feedback, it stimulates the brain to release dopamine, which again rewards that behavior and perpetuates the social media habit.”

“Chasing the high” is a common theme among individuals with addictive personalities. When you see people on social media posting every aspect of their lives, from the meal they ate to their weekend getaway and everything in between, that is what they are chasing; the high is the satisfaction of other people liking the post.  We have all been there: you post a picture or a moment of great importance in your life, and the likes and reactions start pouring in. The reaction you get from that love differs significantly from how you feel when there is no reaction.  A recent Harvard study showed that “the act of disclosing information about oneself activates the same part of the brain that is associated with the sensation of pleasure, the same pleasure that we get from eating food, getting money or having even had sex.” Our brains have come to associate self-disclosure with a rewarding experience.  Ask yourself: when was the last time you posted something about a family member or friend who died, and why was that moment of sadness worth sharing with the world?
Researchers in the Harvard study found that “when people got to share their thoughts with a friend or family member, there was a larger amount of activity in the reward region of their brain, and less of a reward sensation when they were told their thoughts would be kept private.”

“The social nature of our brains is biologically based,” said lead researcher Matthew Lieberman, Ph.D., a UCLA professor of psychology and psychiatry and biobehavioral sciences. This in itself helps explain what social media has become: a system that takes advantage of our biological makeup. “Although Facebook might not have been designed with the dorsomedial prefrontal cortex in mind, the social network is very much in sync with how our brains are wired.” There is a reason that when your mind is idling, the first thing it wants to do is check social media. Lieberman, one of the founders of the field of social cognitive neuroscience, explains that “When I want to take a break from work, the brain network that comes on is the same network we use when we’re looking through our Facebook timeline and seeing what our friends are up to. . . That’s what our brain wants to do, especially when we take a break from work that requires other brain networks.”

This is a very real issue with very real consequences.  The suicide rate for children and teens is rising.  According to a September 2020 report by the U.S. Department of Health and Human Services, the suicide rate for pediatric patients rose 57.4% from 2007 to 2018. It is the second-leading cause of death in children, behind only accidents.  Teens in the U.S. who spend more than three hours a day on social media may be at heightened risk for mental health issues, according to a 2019 study in JAMA Psychiatry. The study, which adjusted for previous mental health diagnoses, concludes that while adolescents using social media more intensively have an increased risk of internalizing problems or reporting mental health concerns, more research is needed on “whether setting limits on daily social media use, increasing media literacy, and redesigning social media platforms are effective means of reducing the burden of mental health problems in this population.” Social media has become a coping mechanism for some to deal with stress, loneliness, or depression.  We have all encountered someone who posts their entire life on social media; more often than not we brush it off, or even make a crude joke, when in fact this is someone who is hurting and looking for help in a place that offers no solace.

I write about this to emphasize a very real and dangerous issue that grows worse every single day.  For far too long, social media companies have hidden behind a shield of immunity.

That shield is Section 230, a provision of the 1996 Communications Decency Act that protects social media companies from liability for content posted by their users and allows them to remove lawful but objectionable posts.  Section 230 states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230).

In 1996, when this law was introduced and passed, the internet was still in its infancy, and no one could have envisioned how big it would become.  Social media corporations now operate in an almost omnipotent capacity, creating their own governing boards and moderators to filter out negative information.  However, while the focus is often on the information put out by users, what gets ignored is how that same information is directed to consumers.  Facebook, Snapchat, Twitter, and even YouTube rely on “influencers” to steer posts, advertisements, and product placement toward users.  To accomplish their goal, which at the end of the day is the same as any corporation’s, namely to create a profit, information is directed at whatever will keep a person’s attention.  At this point, there are few, if any, regulations on how information is directed at an individual.  The FCC, for instance, “limits the amount of time broadcasters, cable operators, and satellite providers can devote to advertisements during children’s programs,” but no comparable rules govern content directed at children online. There is only one case in which the FTC has levied fines for content directed at children, and even that suit was based on the claim that Google, through its subsidiary YouTube, “illegally collected personal information from children without their parents’ consent.”  When it comes to advertising to children online, Google itself sets the parameters.

Social media has grown too large for itself and has far outgrown its place as a private entity that cannot be regulated.  The FCC was created in 1934 to replace the outdated Federal Radio Commission.  Just as it was recognized in 1934 that technology calls for change, today we need to call on Congress to regulate social media; it is not too farfetched to say that our children and our children’s futures depend on it.

In my next blog post, I will discuss what regulation of social media could look like and explain in more detail how social media has grown too big for itself.

 

 

“There Oughta be a Law”

In February 2015, two young men dared Parker Drake to jump into a frigid ocean for online entertainment. Parker, whom doctors diagnosed with autism spectrum disorder, first “met” the men through Twitter. After several exchanges, the young men took Parker to the ocean, dared him “for laughs” to jump in, and then videotaped Parker’s struggle to return to shore.  The men published the video on Facebook; you can hear them laugh as Parker battles the waves.

Upon discovering the tape, Manasquan, NJ municipal court officials charged the men with “endangering the welfare of an incompetent person.”  The problem, however, is that because 19-year-old Parker voluntarily jumped into the ocean, the men had not, in fact, committed a crime.

The case is another example of a moral wrong failing to translate into a legal wrong.  Sadly, laws punishing those who use social media for bullying largely do not exist; just consider the events that prompted Tyler Clementi to jump off the George Washington Bridge.  With this unfortunate event, Parker’s mother joins the ranks of parents who fail to see justice in the courts for reprehensible harms committed against their children.

The response to the Parker Drake incident, much like the response to many social media wrongs for which the criminal law offers no retribution, is both outrage and frustration.  Parker’s mother is seeking justice in the civil courts.  The politicians have weighed in too: just last week, several New Jersey lawmakers announced their intention to draft a law aimed at punishing individuals who victimize disabled persons.

The law is not well suited to punishing harms like the one done to Parker.  Our Constitution often stands as a roadblock between justice for social media wrongs and the right to voice opinions and ideas.  First Amendment concerns prevent punishing many types of speech, particularly outside the classroom.  And then there are issues of “void for vagueness”: a law that punishes those who exploit the developmentally disabled leaves open to interpretation what constitutes “exploitation” (and I suspect defendants charged with such a crime might try to escape punishment by challenging whether their “victim” was in fact developmentally disabled).

I am interested in seeing the legislation New Jersey lawmakers propose.  My hope is that they can walk the fine line between justice and free speech.  The lawyer in me, however, suspects the bill will never make it to the Governor’s desk; as we have seen too many times before, regulating social media bullying through the courts is a nearly impossible task.

 

 

 

Another day, another proposed piece of social media legislation

This one comes from the great state of Virginia.  Virginia lawmakers are considering a bill to permit parental access to a deceased child’s digital accounts. The bill defines digital accounts as “blogging, e-mail, multimedia, personal, social networking, and other online accounts.”  The bill mirrors legislation other jurisdictions are considering, designed to grant survivors the benefits of a decedent’s social media estate.  The Virginia law, however, differs in that it is limited to minor decedents, most of whose estates lack the financial value of those of adults who have cultivated a profitable empire through blogging, Twitter, or the like.  Though not expressly stated, one can assume that Virginia lawmakers, in adopting the law, hope to provide parents with valuable information concerning instances of “cyber-bullying” or other unintended consequences of social interaction.  Minors can circumvent the measure through language in a will or other trust instrument.

Of particular note is the drafting of the bill, which leaves room for future, anticipated, or perhaps even unforeseeable expansion of social media by including in its definition of digital accounts “other on-line accounts or comparable items as technology develops.”  The language provides lawmakers with a catchall and may guard against the all too common problem of laws playing catch-up with rapid technological advances.  One has to wonder, however, whether such broad language could survive a “void for vagueness” challenge.

More States Consider Social Media Privacy Bills

The concept of legislatively limiting employer access to employee social media activity is gaining traction.  Legislators in Georgia, Montana, and North Dakota are considering bills similar to the one already adopted by the Illinois legislature.  The bills would restrict employers from researching social media sites as a means of gaining additional insight about employees and/or candidates.  More information about the potential laws is available here.

Are these bills innovative, or are they just a natural extension of the HR workplace rules that prohibit, say, asking a candidate if she is pregnant?

France to prohibit the use of #hashtags

It is amazing to see just how far the French government is willing to go to prevent the Anglicization of its country. A French governmental commission, charged with ensuring that English words and traditions do not infiltrate its borders, has directed that all official French government legislation and correspondence use the word mot-dièse (meaning “sharp word”) in place of the familiar hashtag.  A few years back the French government successfully changed the word email to courriel, so there is reason to think the new word for hashtag might catch on beyond the governmental mandate.  It is interesting to see just how far a country can go in mandating language, without the cloak of a Constitution as a bar.

 
