Miracles Can Be Misleading

Want to lose 20 pounds in 4 days? Try this *insert any miracle weight-loss product* and you’ll be skinny in no time!

Miracle weight-loss products (MWLP) are dietary supplements that either suppress appetite or forcefully induce weight loss. These products are not approved or indicated by regulatory agencies as weight-loss treatments. Social media users are continuously bombarded with the newest weight-loss products via targeted advertisements and endorsements from their favorite influencers. Users are force-fed false promises of achieving the picture-perfect body while companies profit off their delusions. Influencer marketing has grown significantly as social media becomes more and more prevalent: 86 percent of women use social media for purchasing advice, and 70 percent of teens trust influencers more than traditional celebrities. If you’re on social media, you’ve seen your favorite influencer endorsing some form of MWLP, and you probably thought to yourself, “well, if Kylie Jenner is using it, it must be legit.”

Advertisements for MWLP promote an unrealistic and oversexualized body image. This trend of selling skinny has detrimental consequences, often leading to body-image issues such as body dysmorphia and various eating disorders. In 2011, the Florida House Experience conducted a study of 1,000 men and women. The study revealed that 87 percent of the women and 65 percent of the men compared their bodies to those they saw on social media. Of the 1,000 subjects, 50 percent of the women and 37 percent of the men viewed their bodies unfavorably in comparison. In 2019, Project Know, a nonprofit organization that studies addictive behaviors, conducted a study suggesting that social media can worsen genetic and psychological predispositions to eating disorders.

Who Is In Charge?

The collateral damage that MWLP advertisements inflict on social media users’ body image is a societal concern. As the world becomes more digital, even more creators of MWLP are going to rely on influencers to generate revenue for their products. But who is in charge of monitoring the truthfulness of these advertisements?

In the United States, the Federal Trade Commission (FTC) and the Food and Drug Administration (FDA) are the two federal regulators responsible for promulgating regulations relating to dietary supplements and other MWLP. While the FDA is responsible for the labeling of supplements, it lacks jurisdiction over advertising. Therefore, the FTC is primarily responsible for advertisements that promote supplements and over-the-counter drugs.

The FTC regulates MWLP advertising through the Federal Trade Commission Act of 1914 (the Act). Sections 5 and 12 of the Act collectively prohibit “false advertising” and “deceptive acts or practices” in the marketing and sale of consumer products, and grant the FTC authority to take action against offending companies. An advertisement violates the Act when it is false, misleading, or unsubstantiated. An advertisement is false or misleading when it contains an “objective, material representation that is likely to deceive consumers acting reasonably under the circumstances.” An advertisement is unsubstantiated when it lacks “a reasonable basis for its contained representation.” With the rise of influencer marketing, the Act also requires influencers to clearly disclose when they have a financial or other relationship with the product they are promoting.

Under the Act, the FTC has taken action against companies that falsely advertise MWLP. The FTC typically brings enforcement claims against companies by alleging that the advertiser’s claims lack substantiation. To determine the specific level and type of substantiation required, the FTC considers what are known as the “Pfizer factors,” established in In re Pfizer. These factors include:

    • The type and specificity of the claim made.
    • The type of product.
    • The possible consequences of a false claim.
    • The degree of reliance by consumers on the claims.
    • The type, and accessibility, of evidence adequate to form a reasonable basis for making the particular claims.

In 2014, the FTC applied the Pfizer factors when it brought an enforcement action seeking a permanent injunction against Sensa Products, LLC. Since 2008, Sensa had sold a powder weight-loss product that allegedly could make an individual lose 30 pounds in six months without dieting or exercise. The company advertised the product via print, radio, endorsements, and online ads. The FTC claimed that Sensa’s marketing was false and deceptive because the company lacked evidence to support its health claims, i.e., losing 30 pounds in six months. The FTC additionally claimed that Sensa violated the Act by failing to disclose that its endorsers were given financial incentives for their customer testimonials. Ultimately, Sensa settled, and the FTC was granted the permanent injunction.

What Else Can We Do?

Currently, the FTC, using its authority under the Act, is the main legal recourse for removing these deceitful advertisements from social media. Unfortunately, social media platforms such as Facebook, Twitter, and Instagram cannot be held liable for the posts of other users. Under Section 230 of the Communications Decency Act, “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” That means social media platforms cannot be held responsible for misleading MWLP advertisements, regardless of whether the advertisement comes through an influencer or the company’s own social media page, and regardless of the collateral consequences those advertisements create.

However, there are other courses of action that social media users and platforms have taken to prevent these advertisements from poisoning users’ body image. Many social media influencers and celebrities have risen to the occasion to have MWLP advertisements removed. In 2018, Jameela Jamil, an actress starring on The Good Place, launched an Instagram account called I Weigh, which “encourages women to feel and look beyond the flesh on their bones.” Influencer activism has led Instagram and Facebook to block users under the age of 18 from viewing posts advertising certain weight-loss products or other cosmetic procedures. While these are small steps in the right direction, more work certainly needs to be done.

Don’t Throw Out the Digital Baby with the Cyber Bathwater: The Rest of the Story

This article is in response to “Is Cyberbullying the Newest Form of Police Brutality?,” which discussed law enforcement’s use of social media to apprehend people. The article provided a provocative topic, as seen by the number of comments.

I believe that discussion is healthy for society; people are entitled to their feelings and to express their beliefs. Each person has their own unique life experiences that provide a basis for their beliefs and perspectives on issues. I enjoy discussing a topic with someone because I learn about their experiences and new facts that broaden my knowledge. Developing new relationships and connections is so important. Relationships and new knowledge may change perspectives or at least add to understanding each other better. So, I ask readers to join the discussion.

My perspectives were shaped in many ways. I grew up hearing Paul Harvey’s radio broadcast “The Rest of the Story.” His radio segment provided more information on a topic than the brief news headline may have provided. He did not imply that the original story was inaccurate, just that other aspects were not covered. In his memory, I will attempt to do the same by providing you with more information on law enforcement’s use of social media. 

“Is Cyberbullying the Newest Form of Police Brutality?”

The article title served its purpose by grabbing our attention. Neither cyberbullying nor police brutality is acceptable. Cyberbullying is typically envisioned as teenage bullying taking place over the internet. The U.S. Department of Health and Human Services states that “Cyberbullying includes sending, posting, or sharing negative, harmful, false, or mean content about someone else. It can include sharing personal or private information about someone else causing embarrassment or humiliation.” Similarly, police brutality occurs when law enforcement (“LE”) officers use illegal, excessive force in a situation that is unreasonable, potentially resulting in a civil rights violation or a criminal prosecution.

While the article is accurate that 76% of the surveyed police departments use social media for crime-solving tips, the rest of the story is that more departments use social media for other purposes: 91% use it to notify the public of safety concerns, 89% use it for community outreach and citizen engagement, and 86% use it for public relations and reputation management. Broad restrictions should not be implemented, as they would negate all the positive community interactions that increase transparency.

Transparency 

In an era where the public is demanding more transparency from LE agencies across the country, how is the disclosure of the public’s information held by the government considered “cyberbullying” or “police brutality”? Local, state, and federal governments are subject to freedom-of-information laws requiring agencies to provide information to the public on their websites, or to release documents within days of a request or face civil liability.

New Jersey Open Public Records

While the New Jersey Supreme Court has not decided whether arrest photographs are public, the New Jersey Government Records Council (“GRC”) decided in Melton v. City of Camden, GRC 2011-233 (2013), that arrest photographs are not public records under the NJ Open Public Records Act (“OPRA”) because of Governor Whitman’s Executive Order 69, which exempts fingerprint cards, plates, photographs, and similar criminal investigation records from public disclosure. It should be noted that GRC decisions are not precedential and therefore not binding on any court.

However, under OPRA, specifically N.J.S.A. 47:1A-3 (Access to Records of Investigation in Progress), specific arrest information is public information and must be disclosed to the public within 24 hours of a request, including the:

  • Date, time, location, type of crime, and type of weapon;
  • Defendant’s name, age, residence, occupation, marital status, and similar background information;
  • Identity of the complaining party;
  • Text of any charges or indictment, unless sealed;
  • Identity of the investigating and arresting officer and agency, and the length of the investigation;
  • Time, location, and circumstances of the arrest (resistance, pursuit, use of weapons); and
  • Bail information.

For years, even before Melton, I believed that an arrestee’s photograph should not be released to the public. As a police chief, I refused numerous media requests for arrestee photographs, protecting arrestees’ rights and believing in innocence until proven guilty. Even though they have been arrested, arrestees have not yet received due process in court.

New York’s Open Public Records

In New York, under the Freedom of Information Law (“FOIL”), Public Officers Law, Article 6, §89(2)(b)(viii) (general provisions relating to access to records; certain cases), the disclosure of LE arrest photographs constitutes an unwarranted invasion of an individual’s personal privacy unless the public release would serve a specific LE purpose and the disclosure is not prohibited by law.

California’s Open Public Records

Under the California Public Records Act (CPRA), a person has the statutory right to be provided with or to inspect public records, unless a record is exempt from disclosure. Arrest photographs are included in arrest records, along with other personal information, including the suspect’s full name, date of birth, sex, physical characteristics, occupation, time of arrest, charges, bail information, any outstanding warrants, and parole or probation holds.

Therefore, under New York and California law, the blanket posting of arrest photographs is already prohibited.

Safety and Public Information

Recently, in Americans for Prosperity Foundation v. Bonta, the compelled donor disclosure case, while the Court invalidated the law on First Amendment grounds, Justice Alito’s concurring opinion briefly addressed the parties’ personal safety concerns: supporters had been subjected to bomb threats, protests, stalking, and physical violence. He cited Doe v. Reed, which upheld disclosures containing home addresses under Washington’s Public Records Act despite the growing risk posed by anyone accessing the information with a computer.

Satisfied Warrant

I am not condoning the Manhattan Beach Police Department’s error of posting information about a satisfied warrant, along with a photograph, on its “Wanted Wednesday” in 2020. However, the disclosed information may have been public information under the CPRA then, and even now. On July 23, 2021, Governor Newsom signed a law adding Section 13665 to the California Penal Code, prohibiting LE agencies from posting photographs of an arrestee accused of a non-violent crime on social media unless:

  • The suspect is a fugitive or an imminent threat, and disseminating the arrestee’s image will assist in the apprehension.
  • There is an exigent circumstance and an urgent LE interest.
  • A judge orders the release or dissemination of the suspect’s image based on a finding that the release or dissemination is in furtherance of a legitimate LE interest.

The critical error was that the post stated the warrant was active when it was not. A civil remedy exists and was used by the party to reach a settlement for damages. Additionally, it could be argued that the agency’s actions were not the proximate cause of the harm inflicted by vigilantes.

Scope of Influence

LE’s reliance on the public’s help did not start with social media or internet websites. The article pointed out that “Wanted Wednesday” had a mostly local following of 13,600. This raises the question of whether there is much of a difference between the famous “Wanted” posters of the Wild West or the “Top 10 Most Wanted” posters the Federal Bureau of Investigation (“FBI”) used to distribute to post offices, police stations, and businesses to locate fugitives. It can be argued that this exposure was strictly localized. However, the weekly TV show America’s Most Wanted, made famous by John Walsh, aired from 1988 to 2013, highlighting fugitive cases nationally. The show claims it helped capture over 1,000 criminals through its tip line. Still, national media publicity can be counterproductive, generating so many false leads that credible ones are obscured.

The FBI website contains pages for Wanted People, Missing People, and Seeking Information on crimes. “CAPTURED” labels are added to photographs showing the results of the agency’s efforts. Local LE agencies should follow the FBI’s practices. I agree with the article that social media and websites should be kept up to date; however, I do not agree that the information must be removed, because it is available elsewhere on the internet.

Time

Vernon Geberth, the leading police homicide investigation instructor, believes time is an investigator’s worst enemy. Of abducted children who are killed, eighty-five percent are killed within the first five hours; almost all are killed within the first twenty-four hours. Time is also critical because, for each hour that passes, the distance a suspect’s vehicle can travel expands by seventy-five miles in every direction; after a single hour, the potential search area already exceeds 17,000 square miles, and it keeps multiplying with each hour that follows. Like Amber Alerts, social media can be used to quickly transmit information to people across the country in time-sensitive cases.
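The search-area figure can be sanity-checked with simple geometry. As a rough sketch (assuming, for illustration, that the reachable region is a circle whose radius grows by 75 miles each hour):

```latex
A(t) = \pi \, r(t)^2, \qquad r(t) = 75t \ \text{miles}
% After one hour:   A(1) = \pi (75)^2  \approx 17{,}671  \text{ square miles}
% After five hours: A(5) = \pi (375)^2 \approx 441{,}786 \text{ square miles}
```

Because area grows with the square of the radius, five hours yields a region twenty-five times larger than one hour; even this idealized circle understates the practical difficulty, since the suspect’s direction of travel is unknown.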

Live-Streaming Drunk Driving Leads to an Arrest

Whitney Beall, a Florida woman, used a live-streaming app to show herself drinking at a bar and then getting into her vehicle. Viewers dialed 911, and a tech-savvy officer opened the app, determined her location, and pulled her over. She was arrested after failing a sobriety test. After pleading guilty to driving under the influence, she was sentenced to 10 days of weekend work release, 150 hours of community service, probation, and a license suspension. In 2019, 10,142 lives were lost to alcohol-impaired driving crashes.

Family Advocating

Social media is not limited to LE. It also provides a platform for victims’ families to keep attention on their cases. The father of a seventeen-year-old victim created a series of Facebook Live videos about her 2011 murder, resulting in the arrest of Charles Garron, who was sentenced to a fifty-year prison term.

Instagram Selfies with Drugs, Money and Stolen Guns 

Police in Palm Beach County charged a nineteen-year-old man with 142 felony charges, including possession of a weapon by a convicted felon, while investigating burglaries and jewel thefts in senior citizen communities. An officer found his Instagram account containing incriminating photographs. A search warrant was executed, and police seized stolen firearms and $250,000 in stolen property from over forty burglaries.

Bank Robbery Selfies


Police received a tip and located a 2015 social media post by John E. Mogan II showing himself with wads of cash. He was charged with robbing an Ashville, Ohio bank. He pled guilty and was sentenced to three years in prison. According to news reports, Mogan had previously served prison time for another bank robbery.

Food Post Becomes the Smoking Gun

LE used Instagram to identify an identity thief who posted photographs of his dinner with a confidential informant (“CI”) at a high-end steakhouse. The man claimed he had 700,000 stolen identities and provided the CI with a flash drive containing some of them. Agents linked the flash drive to a “Troy Maye,” whom the CI identified from Maye’s profile photograph. Authorities executed a search warrant on his residence and located flash drives containing the personal identifying information of thousands of identity-theft victims. Nathaniel Troy Maye, a 44-year-old New York resident, was sentenced to sixty-six months in federal prison after pleading guilty to aggravated identity theft.

 

Wanted Man Turns Himself in After Facebook Challenge With Donuts

A man began trolling the Redford Township Police during a Facebook Live community update. It was determined that he was a 21-year-old wanted for a probation violation after leaving the scene of a DWI collision. When asked to turn himself in, he challenged the department: if the post got 1,000 shares, he would bring in donuts. The department took the challenge. The post went viral, reaching that mark within an hour and acquiring over 4,000 shares. He kept his word and appeared with a dozen donuts. He faced 39 days in jail and had other outstanding warrants.

The examples in this article were readily available on the internet and on multiple news websites, along with photographs.

Under state freedom-of-information laws, the public has a statutory right to know what enforcement actions LE is taking. Likewise, the media exercises its First Amendment right to information daily across the country when publishing news. Cyber journalists are entitled to the same information when publishing news on the internet and social media. Traditional news organizations have adapted to online news to keep their share of the news market. LE agencies now live-stream press conferences to communicate directly with the communities they serve.

Therefore, the positive use of social media by LE should not be thrown out with the bathwater; legal remedies exist when damages are caused.

“And now you know…the rest of the story.”

Free speech, should it be so free?

In the United States, everybody is entitled to free speech; however, we must not forget that the First Amendment of the Constitution only protects individuals from federal and state action. That being said, free speech is not protected from censorship by private entities, like social media platforms. In addition, Section 230 of the Communications Decency Act (CDA) provides technology companies like Twitter, YouTube, Facebook, Snapchat, and Instagram, as well as other social media giants, immunity from liability arising from the content posted on their websites. The question becomes whether it is fair for an individual who desires to express himself or herself freely to be banned from certain social media websites for doing so. What is the public policy behind this? What standards do these social media companies employ when determining who should or should not be banned? On the other hand, are social media platforms being used as tools or weapons when it comes to politics? Do they play a role in how the public votes? Are users truly seeing what they think they have chosen to see, or is the content they are shown targeted to them in ways that may ultimately create biases?

As we saw earlier this year, former President Trump was banned from several social media platforms as a result of the January 6, 2021 assault on the U.S. Capitol by Trump supporters. It is no secret that our former president is not shy about his comments on a variety of topics. Some audiences view him as outspoken, direct, or perhaps provocative. When Twitter announced its permanent suspension of former President Trump’s account, its rationale was to prevent further incitement of violence. After he falsely claimed that the 2020 election had been stolen from him, thousands of Trump supporters gathered in Washington, D.C. on January 5 and 6, which ultimately led to violence and chaos. As a public figure and politician, our former president should have known that his actions and viewpoints on social media were likely to have a significant impact on the public. Public figures and politicians should be held to a higher standard, as they represent the citizens who voted for them; as such, they are influential. Technology companies like Twitter saw the former president’s tweets as potential threats to the public, as well as a violation of their company policies; hence, banning his account was justified. The ban was an instance of private action as opposed to government action. In other words, former President Trump’s First Amendment rights were not violated.


First, let us discuss the fairness aspect of censorship. Yes, individuals possess the right to free speech; however, if the public’s safety is at stake, action is required to avoid chaos. For example, you cannot falsely scream “fire” in a dark movie theater, as it would cause panic and unnecessary disorder. There are rules you must comply with in order to use the facility, and those rules are in place to protect the general welfare. If you don’t like the rules set forth by a facility, you can simply avoid using it. It does not necessarily mean that your idea or speech is strictly prohibited, just not in that particular facility. Similarly, if social media users fail to follow company policies, the companies reserve the right to ban them. Public policy arguably outweighs individual freedom. As for the standards employed by these technology companies, there is no bright line. As I previously mentioned, Section 230 grants them immunity from liability. That being said, content is unregulated, and these social media giants are therefore free to implement and execute policies as they deem appropriate.


In terms of politics, I believe social media platforms do play a role in shaping their users’ perspectives. This is because the content displayed is targeted, if not tailored: platforms collect data based on each user’s preferences and past habits. The activities each user engages in are monitored, measured, and analyzed. In a sense, these platforms can be used as a weapon, manipulating users without their knowledge. Often we are not even aware that the videos or pictures we see online are presented to us because of content we had previously seen or selected. In other words, these social media companies may be censoring what they don’t want you to see, or what they think you don’t want to see. For example, some technology companies are pro-vaccination; they are more likely to post facts about COVID-19 vaccines or publish posts that encourage their users to get vaccinated. We think we have control over what we see or watch, but do we really?


There are advantages and disadvantages to censorship. Censorship can reduce the negative impact of hate speech, especially on the internet; by limiting certain speech, we create more opportunities for equality. In addition, censorship can prevent the spread of racism: posts and videos containing racist comments can be blocked by social media companies if deemed necessary. Censorship can also protect minors from harmful content; because children can be manipulated easily, it helps promote safety. Moreover, censorship can be a vehicle for stopping false information, and during unprecedented times like this pandemic, misinformation can be fatal. On the other hand, censorship may harm the public by creating a specific narrative in society, which can potentially cause biases. For example, many blamed Facebook for the outcome of an election, arguing that its influence is detrimental to our democracy.

Overall, I believe that some form of social media censorship is necessary. The cyber-world is interrelated with the real world, and we can’t let people do or say whatever they want, as it may have seriously detrimental effects. The issue is how to keep the best of both worlds.

 

How Defamation and Minor Protection Laws Ultimately Shaped the Internet


The Communications Decency Act (CDA) was originally enacted with the intention of shielding minors from indecent and obscene online material. Despite those origins, Section 230 of the CDA is now commonly used as a broad legal safeguard that shields social media platforms from liability for content posted on their sites by third parties. Interestingly, the reasoning behind this safeguard arises both from defamation common law and from constitutional free speech law. As the internet has grown, however, this legal safeguard has drawn increasing criticism. But is the legislation actually undesirable? Many would disagree, as Section 230 contains “the 26 words that created the internet.”

 

Origin of the Communications Decency Act

The CDA was introduced and enacted as an attempt to shield minors from obscene or indecent content online. Although parts of the Act were later struck down as First Amendment free speech violations, the Court left Section 230 intact. The creation of Section 230 was influenced by two landmark defamation decisions.

The first case, Cubby, Inc. v. CompuServe Inc., was decided in 1991 and involved an internet service that hosted around 150 online forums. A claim was brought against the service provider when a columnist on one of the forums posted a defamatory comment about his competitor. The competitor sued the online distributor for the published defamation. The court categorized the internet service provider as a distributor because it did not review any forum content before it was posted to the site. As a distributor, there was no legal liability, and the case was dismissed.

 

Distributor Liability

Distributor liability refers to the limited legal consequences that a distributor faces for defamation. Common examples of distributors are bookstores and libraries. The theory behind distributor liability is that it would be impossible for distributors to moderate and censor every piece of content they disperse, both because of the sheer volume and because of the impossibility of knowing whether something is false.

The second case that influenced the creation of Section 230 was Stratton Oakmont, Inc. v. Prodigy Servs. Co., in which the court used publisher liability theory to find the internet provider liable for third-party defamatory postings published on its site. The court deemed the website a publisher because it moderated and deleted certain posts, regardless of the fact that there were far too many postings a day to regulate each one.

 

Publisher Liability

Under common law principles, a person who publishes a third party’s defamatory statement bears the same legal responsibility as the creator of that statement. This liability is often referred to as “publisher liability” and is based on the theory that a publisher has the knowledge, opportunity, and ability to exercise control over the publication. For example, a newspaper publisher can face legal consequences for the content within its pages. The Stratton Oakmont decision was significant because it meant that if a website attempted to moderate certain posts, it would be held liable for all posts.

 

Section 230’s Creation

In response to the Stratton Oakmont case and the ambiguous court decisions regarding internet service providers’ liability, members of Congress introduced an amendment to the CDA that later became Section 230. The amendment was specifically introduced and passed with the goal of encouraging the development of unregulated free speech online by relieving internet providers of liability for third-party content.

 

Text of the Act: Subsection (c)(1)

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

 Section 230 further provides that…

“No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.”

The language above removes legal consequences for providers arising from third-party content posted on their platforms. Courts have interpreted this subsection as providing online platforms broad immunity from suits over the content of third parties. Because of this, Section 230 has become the principal legal safeguard against lawsuits over a site’s content.

 

The Good

  • Section 230 can be viewed as one of the most important pieces of legislation protecting free speech online. One of its unique aspects is that it essentially extends free speech protection to private, non-governmental companies.
  • Without CDA 230, the internet would be a very different place. This section influenced some of the internet’s most distinctive characteristics: the promotion of free speech and the ability to connect worldwide.
  • CDA 230 does not fully eliminate liability or court remedies for victims of online defamation. Rather, it makes only the creator liable for their speech, instead of both the creator and the publisher.

 

 

The Bad

  • Because of the legal protections Section 230 provides, social media networks have less incentive to regulate false or deceptive posts. Deceptive online posts can have an enormous impact on society: false posts can alter election results or fuel dangerous misinformation campaigns, like the QAnon conspiracy theory and the anti-vaccination movement.
  • Section 230 is twenty-five years old, and has not been updated to match the internet’s extensive growth.
  • Big Tech companies have been left largely unregulated regarding their online marketplaces.

 

The Future of 230

While Section 230 is still successfully invoked by social media platforms, concerns over the aging legislation have mounted. Just recently, Justice Thomas, known for being a quiet Justice, wrote a concurring opinion articulating his view that the government should regulate content providers as common carriers, like utility companies. What implications could that have for the internet? With the growing criticism surrounding Section 230, will Congress finally attempt to fix this legislation? If not, will the Supreme Court be left to tackle the problem itself?

Is Cyberbullying the Newest Form of Police Brutality?

Police departments across the country are calling keyboard warriors into action to help them solve crimes…but at what cost?

In a survey of 539 police departments in the U.S., 76% of departments said that they used their social media accounts to solicit tips on crimes. Departments post “arrested” photos to celebrate arrests, surveillance footage for suspect identification, and some even post themed wanted posters, like the Harford County Sheriff’s Office.

The process for using social media as an investigative tool is dangerously simple and the consequences can be brutal. A detective thinks posting on social media might help an investigation, so the department posts a video or picture asking for information. The community, armed with full names, addresses, and other personal information, responds with some tips and a lot of judgmental, threatening, and bigoted comments. Most police departments have no policy for removing posts after information has been gathered or cases are closed, even if the highlighted person is found to be innocent. A majority of people who are arrested are not even convicted of a crime.

Law enforcement’s use of social media in this way threatens the presumption of innocence, creates a culture of public humiliation, and often results in a comment section of bigoted and threatening comments.

On February 26, 2020, the Manhattan Beach Police Department posted a mugshot of Matthew Jacques on its Facebook and Instagram pages for its “Wanted Wednesday” social media series. The pages have 4,500 and 13,600 followers, respectively, most of them local. The post equated Matthew to a fugitive, and commenters responded publicly with information about where he worked. Matthew tried to call off work out of fear of a citizen’s arrest. The fear turned out to be warranted when two strangers came to find him at his workplace. Matthew eventually lost his job because he was too afraid to return to work.

You may be thinking this is not a big deal. This guy was probably wanted for something really bad and the police needed help. After all, the post said the police had a warrant. Think again.

There was no active warrant for Matthew at the time, his only (already resolved) warrant came from taking too long to schedule remedial classes for a 2017 DUI. Matthew was publicly humiliated by the local police department. The department even refused to remove the social media posts after being notified of the truth. The result?

Matthew filed a complaint against the department for defamation (as well as libel per se and false light invasion of privacy). Typically, defamation requires the plaintiff to show:

1) a false statement purporting to be fact; 2) publication or communication of that statement to a third person; 3) fault amounting to at least negligence; and 4) damages, or some harm caused to the person or entity who is the subject of the statement.

Here, the department made a false statement – that there was a warrant. They published it on their social media, satisfying the second element. They did not check readily available public records that showed Matthew did not have a warrant. Finally, Matthew lived in fear and lost his job. Clearly, he was harmed.
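As an illustrative sketch only (not legal analysis; the element names and the yes/no mapping are my own simplification of the four-part test quoted above), the element-by-element reasoning can be modeled as a simple checklist:

```python
# Illustrative checklist of the four defamation elements quoted above,
# applied to the facts of Matthew's case as described in the text.
# This is a simplification for exposition, not legal analysis.

DEFAMATION_ELEMENTS = (
    "false statement purporting to be fact",
    "publication to a third person",
    "fault amounting to at least negligence",
    "damages",
)

def meets_prima_facie_case(facts):
    """facts: dict mapping each element to whether it is satisfied."""
    return all(facts.get(element, False) for element in DEFAMATION_ELEMENTS)

matthew = {
    "false statement purporting to be fact": True,   # no active warrant existed
    "publication to a third person": True,           # posted on Facebook and Instagram
    "fault amounting to at least negligence": True,  # public records went unchecked
    "damages": True,                                 # lost his job, lived in fear
}
print(meets_prima_facie_case(matthew))  # True
```

The point of the conjunctive structure is that a plaintiff must satisfy every element; if any one fails, the claim fails.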

The police department claimed its postings were protected by the California Constitution, governmental immunity, and the First Amendment. Fortunately, the court denied the department’s anti-SLAPP motion. Over a year after the postings went up, the department took them down and settled the lawsuit with Matthew.

Some may think that Matthew’s case is an anomaly and that, usually, the negative attention is warranted, perhaps even socially beneficial, because it further disincentivizes criminal activity via humiliation and social stigma. However, most arrests don’t result in convictions, so many of the police’s cyberbullying victims are likely innocent. Even for those who are guilty, leaving these posts up can raise the barrier to societal re-entry, which can increase recidivism rates. A negative digital record can make finding jobs and housing more difficult. Many commenters assume the highlighted individual’s guilt and take to their keyboards to shame them.

Here’s one example of a post and comment section from the Toledo Police Department Facebook page:

Unless departments change their social media use policies, they will continue to face defamation lawsuits and continue to further the degradation of the presumption of innocence.

Police departments should discontinue the use of social media in the humiliating ways described above. At the very least, they should consider using this tactic only for violent, felonious crimes. Some departments have already changed their policies.

The San Francisco Police Department has stopped posting mugshots for criminal suspects on social media. According to Criminal Defense Attorney Mark Reichel, “The decision was made in consultation with the San Francisco Public Defender’s Office who argued that the practice of posting mugshots online had the potential to taint criminal trials and follow accused individuals long after any debt to society is paid.” For a discussion of some of the issues social media presents to maintaining a fair trial, see Social Media, Venue and the Right to a Fair Trial.

Do you think police departments should reconsider their social media policies?

Is Social Media Promoting or Curbing Asian Hate?

The COVID-19 pandemic has caused our lives to twist and turn in many unexpected ways. The Asian community took an especially hard hit because the virus was first identified in China, which fueled a significant increase in hate crimes against Asian people, in the real world as well as the cyber world. With billions of internet users, the impact created online, as well as offline, is massive. Social media can create bias, and social media has the power to remedy bias. The question becomes: which side of the scale is it currently tipping toward? Is the internet making social network users more vulnerable to manipulation? Are hatred and bias “contagious” through cyber means? Or, on the contrary, is social media remedying the bias that people have created through the internet?

Section 230 of the Communications Decency Act governs the cyber world. It essentially provides legal immunity to platforms such as TikTok, Facebook, Instagram, and Snapchat. The Act states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In other words, posts and comments that appear on these social media platforms carry no legal ramifications for the tech companies. Hence, do these companies have any incentive to regulate what is posted on their websites? With the current wave of Asian hate, will it snowball into a giant mass of problems if social media platforms fail to step in? On the other hand, if these tech companies elect to step in, to what extent can they regulate or supervise?

The hatred and bias sparked by the pandemic have not been limited to the real world. Asian Americans have reported the biggest increase in serious incidents of online hate and harassment throughout this crisis. Many were verbally attacked or insulted with racist and xenophobic slurs merely because they have Asian last names or look Asian. According to a new survey shared exclusively with USA TODAY, there was an 11% increase over the previous year in sexual harassment, stalking, physical threats, and other incidents reported by Asian Americans, many of them occurring on social media platforms. According to findings by the Center for the Study of Hate and Extremism at California State University, hate crimes against Asian Americans rose 149% from 2019 to 2020. That is 149% in one year. In addition, L1ght, an AI-based internet abuse detection organization, reported a 900% increase in hate speech on Twitter since the start of the pandemic. This may be just the tip of the iceberg, as many hate crime incidents may have gone unreported. As you may recall, former President Trump publicly referred to the COVID-19 coronavirus as the “Chinese Virus,” which led to a record-breaking level of brutal online harassment against Asian Americans. This also gave rise to similar remarks such as “Kung Flu” and “Wuhan Virus,” and social media users began using hashtags of the like. The hashtag “#ChineseVirus” alone has been used over 68,000 times on Instagram.

We must not forget that the real world and the cyber world are interconnected. Ideas consumed online can have a significant impact on our offline actions, which may lead to violence. Last week, I had the privilege of interviewing New York Police Department Lieutenant Mike Wang, who is in charge of the NYPD’s Asian Hate Crimes Task Force in Brooklyn, and he expressed his concerns about the Asian community being attacked, seniors in particular. Lieutenant Wang said during the interview: “It’s just emotionally difficult and heartbreaking. New York Police Department is definitely taking unprecedented measures to combat these crimes. These incidents cannot be overlooked.” Most of these incidents were unprovoked. Some examples include an elderly Thai immigrant who died after being shoved to the ground, a Filipino American slashed in the face with a box cutter and left with a large permanent scar, a Chinese woman slapped and then set on fire, and six Asian Americans brutally shot to death in a spa one night. Wang indicated that crimes against Asian Americans are nothing new; they have existed for quite some time. However, the rage and frustration of the COVID-19 pandemic fueled this fire to an uncontrollable level. Wang encourages citizens to report crimes in general, not just hate crimes, as we need to be more vocal. You can read more about hate crimes and bias on the city’s website.

From verbal harassment to physical assaults, there have been thousands of reported cases since the pandemic started. These are typically hate crimes, as offenders believe the Asian population should be blamed for the spread of the virus. Perhaps people’s daily interactions online play an important role here. Almost everyone in this country uses some sort of social network, and the more hatred and bias people see online, the more likely they are to exhibit violence in real life. Why? Because people come to think such behavior is acceptable when many others are doing it. Accountability is rarely a concern, especially through social channels: at most, the user’s post is removed or the account gets suspended. With that said, it is questionable whether the tech companies are doing enough to address these issues. When these hateful behaviors surface in the cyber world, what are the policies of the social media giants? For instance, Twitter has implemented a policy on hate speech that prohibits accounts whose primary purpose is to incite harm toward others. Twitter reserves the discretion to remove inappropriate content or suspend users who violate its policy; you can read more about its Hateful Conduct Policy on its website. Other social media platforms such as Facebook, TikTok, and YouTube all have similar policies in place to address hateful behavior, violent threats, and harassment. But are they sufficient? According to the CEO of the Anti-Defamation League, online users continue to experience strongly hateful comments despite the social network companies’ claims that they are taking things seriously. Facebook and YouTube still allow users to use the racially insensitive term “Kung Flu,” while TikTok has prohibited it. Comics artist Ethan Van Sciver joked about killing Chinese people in one of his videos but later claimed it was “facetious sarcasm”; YouTube merely removed the video, stating that it violated its hate speech policy. As previously mentioned, accountability on these social networks is minimal.

Social networks have definitely helped spread the news, keeping everyone in the country informed about the horrible incidents that happen on a regular basis. Besides spreading the virus of hatred and bias online, social networks also raise awareness and promote positivity. As Asian hate crimes spike, public figures and celebrities are taking part in this battle. Allure magazine’s editor-in-chief Michelle Lee and designer Phillip Lim are among them: they have posted videos on Instagram sharing their own experiences of racism in an effort to raise awareness, using the hashtag #StopAsianHate in their posts. On March 20, 2021, “Killing Eve” star Sandra Oh joined a “Stop Asian Hate” protest in Pittsburgh. She said she is “proud to be Asian” while giving a powerful speech urging people to fight racism and hatred toward the Asian community. The video of her speech went viral in just a day and has drawn more than ninety-three thousand views on YouTube since. I have to say that our generation is not afraid to speak up about the hate and injustice we face in our society today. This generation is taking it upon itself to confront racism instead of relying on authorities to recognize the threats and implement policy changes. This is how #StopAAPIHate came about. The hashtag stands for “Stop Asian American and Pacific Islander Hate.” Stop AAPI Hate is a nonprofit organization that tracks incidents of hate and discrimination against Asian Americans and Pacific Islanders in the United States. It was recently created to bring awareness, education, and resources to the Asian community and its allies through social media. Stop AAPI Hate has also used networks like Instagram to organize support groups, provide aid, and pressure those in power to act.
The following is a list of influential members of the AAPI community who are voicing their concerns and beliefs: Christine Chiu, “The Bling Empire” star, producer, and entrepreneur; Chriselle Lim, digital influencer, content creator, and entrepreneur; Tina Craig, founder and CEO of U Beauty; Daniel Martin, makeup artist and global director of Artistry & Education at Tatcha; Yu Tsai, celebrity and fashion photographer and host; Sarah Lee and Christine Chang, co-founders and co-CEOs of Glow Recipe; Aimee Song, entrepreneur and digital influencer; Samuel Hyun, chairman of the Massachusetts Asian American Commission; Daniel Nguyen, actor; Mai Quynh, celebrity makeup artist; Ann McFerran, founder and CEO of Glamnetic; Nadya Okamoto, founder of August; Sharon Pak, founder of INH; Sonja Rasula, founder of Unique Markets; and Candice Kumai, writer, journalist, director, and best-selling author. The list could go on, but the point of naming these influential speakers is that taking things to social media is not just about holding people or companies accountable; it is about creating meaningful change in our society.

The internet is more powerful than we think. It is dangerous to allow individuals to attack or harass others, even through a screen. I understand that social media platforms cannot blatantly censor whatever content they deem inappropriate without provoking free speech objections from their users; however, there has to be more they can do, perhaps by creating more rigorous policies to combat hate speech. If users’ identities were tied to their real-life credentials, it might curb the tendencies of potential or repeat offenders. The question is: how do you draw the line between freedom of speech and social order?

 

The First Amendment Is Still Great For The United States…Or Is It?

In the traditional sense, of course it is. The idea of free speech should always be upheld, without question. When it comes to the 21st century, however, this 230-year-old amendment poses extreme roadblocks. Here, I will discuss how the First Amendment inhibits our ability to tackle extremism and hatred on social media platforms.

One of the things I will highlight is how other countries are able to enact legislation to deal with the ever-growing hate that festers on social media. They can do so because they do not have a “First Amendment.” The idea of free speech is simply ingrained into their democracies; they do not need an archaic document, to which they are forever bound, to tell them that. Here in the U.S., as we all know, Congress can be woefully slow and inefficient, with a particular tendency to refuse to update outdated laws.

The First Amendment successfully blocks any government attempt to regulate social media platforms. Any attempt to do so is met by critics, mostly conservatives, yelling about the government wanting to take away free speech, and the courts would not allow the legislation to stand. This in turn means Facebook, Snapchat, Instagram, Reddit, and all the other platforms never have to worry about the white supremacist and other extremist rhetoric that is prevalent on their platforms. Even worse, most, if not all, of their algorithms push those vile posts to hundreds of thousands of people. We are “not allowed” to introduce laws that establish a baseline for regulating platforms, in order to crack down on the terrorism that flourishes there. Just as you are not allowed to scream fire in a movie theater, it should not be allowed to post and form groups to spread misinformation, white supremacy, racism, and the like. Those topics do not serve the interests of greater society.

Yes, regulation would make it harder for people to easily share their thoughts, no matter how appalling they may be. However, not allowing hate to spread online, where millions of people can see it within 30 seconds, is not taking away someone’s free speech rights. Platforms don’t even necessarily have to delete the posts; they could simply change their algorithms to stop promoting misinformation and hate and promote truth instead, even if the truth is boring. They won’t do that, though, because promoting lies is what makes them money, and it’s always money over the good of the people. Another reason this doesn’t limit people’s free speech is that they can still form in-person groups, talk in private, start an email chain, and so on. The idea behind regulating what can be posted on social media is to make the world a better place for all: to make it harder for racist ideas and terrorism to spread, especially to young, impressionable children and young adults.
This shouldn’t be a political issue; shouldn’t we all want to limit the spread of hate?

It is hard for me to imagine the January 6th insurrection at our Capitol occurring had we had regulations on social media in place. A lot of the groups that planned the insurrection had “Stop the Steal” groups and other election-fraud conspiracy pages on Facebook. Imagine if we had a law requiring social media platforms to take down posts and pages spreading false information that could incite violence or be detrimental to the security of the United States. I realize that is broad discretion; the legislation would have to be worded very narrowly, and decisions to remove posts should be made with the highest level of scrutiny. Had such a regulation been in place, these groups would not have been able to reach as wide an audience. I think Ashli Babbitt and Officer Sicknick would still be alive had Facebook been obligated to take those pages and posts down.

Alas, we are unable even to consider legislation to address this cause, because the courts and many members of Congress refuse to acknowledge that we must update our laws and redefine how we read the First Amendment. The founders could never have imagined the world we live in today. Congress and the courts need to stop pretending that a piece of paper written over two hundred years ago is some untouchable work from God. The founders wrote the First Amendment to ensure no one would be thrown in jail for speaking their mind, so that people who hold different political views could not be persecuted, and to give people the ability to express themselves. Enacting legislation to prevent blatant lies, terrorism, racism, and white supremacy from spreading so easily online does not go against the First Amendment. It is not telling people they can’t have those views; it is not throwing anyone in prison or handing out fines for those views; and white supremacist or other racist ideas are not “political discourse.” Part of the role of government is to protect the people and to do what is right for society as a whole, and I fail to see how telling social media platforms they need to take down these appalling posts is outweighed by the idea that “nearly everything is free speech, even if it poisons the minds of our youth and perpetuates violence, because that’s what the First Amendment says.”

Let’s now look at the United Kingdom and what it is able to do because it has no law comparable to the First Amendment. In May of 2021, the British Parliament introduced the Online Safety Bill. If passed into law, the bill will place a duty of care on social media firms and websites to ensure they take swift action to remove illegal content, such as hate crimes, harassment, and threats directed at individuals, including abuse that falls below the criminal threshold. As currently written, the bill would also require social media companies to limit the spread of, and remove, terrorist material, content promoting suicide, and child sexual abuse material, and companies would be mandated to report such postings to the authorities. Lastly, the Online Safety Bill would require companies to safeguard freedom of expression and reinstate material unfairly removed, including forbidding tech firms from discriminating against particular political viewpoints. The bill reserves the right for Ofcom (the UK’s communications regulator) to hold them accountable for the arbitrary removal of journalistic content.

The penalties for not complying with the proposed law would be significant. Social media companies that do not comply could be fined up to 10% of their net profits or $25 million. Further, the bill would allow Ofcom to bring criminal actions against named senior managers whose companies do not comply with Ofcom’s requests for information.
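As a rough, purely illustrative sketch of the penalty ceiling described above (assuming, as most reports of the draft bill suggest, that the applicable cap is the greater of the two figures; the profit numbers below are hypothetical):

```python
# Hypothetical illustration of the proposed penalty ceiling: up to 10% of
# net profits or $25 million, read here as "whichever is greater."
# The profit figures in the examples are made up for the illustration.

def max_fine(net_profit, fixed_cap=25_000_000, rate=0.10):
    return max(fixed_cap, rate * net_profit)

print(max_fine(1_000_000_000))  # 100000000.0 -> the 10% prong governs
print(max_fine(100_000_000))    # 25000000 -> the fixed $25M cap governs
```

The design means the fixed sum bites only for smaller firms; for a large platform, the percentage prong dominates.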

It will be interesting to see how the implementation of this bill goes if it is passed. I believe it is a good stepping stone toward reining in the willful ignorance displayed by these companies. Again, it is important that these bills be carefully scrutinized; otherwise you may end up with a bill like the one proposed in India. While I will not discuss that bill at length in this post, you can read more about it here. In short, India’s bill is widely seen as autocratic in nature, giving the government the ability to fine or criminally prosecute social media companies and their employees if they fail to remove content that the government does not like (for instance, criticism of its new agriculture regulations).

Bringing this ship back home: can you imagine a bill like Britain’s ever passing in the U.S., let alone being introduced? I certainly can’t, because we still insist on worshiping an amendment that is 230 years old. The founders wrote it based on the circumstances of their time; they could never have imagined what today would look like. Ultimately, the decision to let us move forward and adopt our own laws to regulate social media companies rests with the Supreme Court. Until the Supreme Court wakes up and allows a modern reading of the First Amendment, any law to hold companies accountable is doomed to fail. It is illogical to put a piece of paper over the safety and well-being of Americans, yet we consistently do just that. We will keep seeing reports of how red flags were missed and, as a result, people were murdered, or how Facebook pages helped spread another “Big Lie” that results in another Capitol siege. All because we cannot move away from our past to brighten our future.

 

What would you do to help curtail this social dilemma?

Has Social Media Become the Most Addictive Drug We Have Ever Seen?

Before we get started, I want you to take a few minutes and answer the following questions to yourself:

  1. Do you spend a lot of time thinking about social media or planning to use social media?
  2. Do you feel urges to use social media more and more?
  3. Do you use social media to forget about personal problems?
  4. Do you often try to reduce the use of social media without success?
  5. Do you become restless or troubled if unable to use social media?
  6. Do you use social media so much that it has had a negative impact on your job or studies?

How did you answer these questions? If you answered yes to more than three of them, then, according to the Addiction Center, you may have or be developing a social media addiction. Research has shown an undeniable link between social media use, negative mental health, and low self-esteem. Negative emotional reactions are produced not only by the social pressure of sharing things with others but also by the comparison of material things and lifestyles that these sites promote.

On Instagram and Facebook, users see curated content: advertisements and posts specifically designed to appeal to them based on their interests. Unlike at any other time in history, individuals today can see how other people live and how those lifestyles differ significantly from their own. This sense of self-worth is what is being exploited to curate information. Children are being taught at a young age that if you are not a millionaire you are not successful, and they are building barometers of success on invisible benchmarks. This is contributing to an increase in suicide and depression among young adults.
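As a minimal sketch (purely illustrative; the six questions and the “more than three” threshold come from the Addiction Center screening quoted above), the scoring rule could be expressed as:

```python
# Minimal sketch of the screening rule above: answering "yes" to more than
# three of the six questions suggests a possible social media addiction.
# The threshold is taken from the Addiction Center guidance quoted in the text.

def screen_social_media_addiction(answers):
    """answers: six booleans, one per screening question (True = yes)."""
    return sum(answers) > 3

print(screen_social_media_addiction([True, True, False, True, True, False]))  # True
print(screen_social_media_addiction([True, False, False, True, False, False]))  # False
```

A screening rule like this flags risk; it is not a diagnosis.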

Social media has become a stimulant whose effects mimic those of gambling and recreational drugs. Retweets, likes, and shares from these sites have been shown to affect the dopamine pathways of the brain associated with reward. “[I]t’s estimated that people talk about themselves around 30 to 40% of the time; however, social media is all about showing off one’s life and accomplishments, so people talk about themselves a staggering 80% of the time. When a person posts a picture and gets positive social feedback, it stimulates the brain to release dopamine, which again rewards that behavior and perpetuates the social media habit.” “Chasing the high” is a common theme among individuals with addictive personalities, and when you see people on social media posting every aspect of their lives, from the meal they ate to their weekend getaway and everything in between, that is what they are chasing; the high is the satisfaction of other people liking their posts. We have all been there: you post a picture or a moment of great importance in your life, and the likes and reactions start pouring in. The reaction you get from that love differs significantly from the reaction you get when there is no reaction at all. A recent Harvard study showed that “the act of disclosing information about oneself activates the same part of the brain that is associated with the sensation of pleasure, the same pleasure that we get from eating food, getting money or having even had sex.” Our brains have come to associate self-disclosure with a rewarding experience. Ask yourself: when was the last time you posted something about a family member or friend who died, and why was that moment of sadness worth sharing with the world? Researchers in this Harvard study found that “when people got to share their thoughts with a friend or family member, there was a larger amount of activity in the reward region of their brain, and less of a reward sensation when they were told their thoughts would be kept private.”

“The social nature of our brains is biologically based,” said lead researcher Matthew Lieberman, Ph.D., a UCLA professor of psychology and psychiatry and biobehavioral sciences. This helps explain what social media has become: a system that takes advantage of our biological makeup. “[A]lthough Facebook might not have been designed with the dorsomedial prefrontal cortex in mind, the social network is very much in sync with how our brains are wired.” There is a reason that when your mind is idling, the first thing it wants to do is check social media. Lieberman, one of the founders of the field of social cognitive neuroscience, explains: “When I want to take a break from work, the brain network that comes on is the same network we use when we’re looking through our Facebook timeline and seeing what our friends are up to. . . That’s what our brain wants to do, especially when we take a break from work that requires other brain networks.”

This is a very real issue with very real consequences. The suicide rate for children and teens is rising. According to a September 2020 report by the U.S. Department of Health and Human Services, the suicide rate for pediatric patients rose 57.4% from 2007 to 2018. Suicide is the second-largest cause of death in children, behind only accidents. Teens in the U.S. who spend more than three hours a day on social media may be at a heightened risk for mental health issues, according to a 2019 study in JAMA Psychiatry. The study, which was adjusted for previous mental health diagnoses, concludes that while adolescents using social media more intensively have an increased risk of internalizing problems or reporting mental health concerns, more research is needed on “whether setting limits on daily social media use, increasing media literacy, and redesigning social media platforms are effective means of reducing the burden of mental health problems in this population.” Social media has become a coping mechanism for some to deal with stress, loneliness, or depression. We have all come into contact with someone who posts their entire life on social media, and more often than not we might brush it off, even make a crude joke; but in fact, this is someone who is hurting and looking for help in a place that offers no solace.

I write about this to emphasize a very real and dangerous issue that grows worse every single day. For far too long, social media companies have hidden behind a shield of immunity.

That shield is Section 230, a provision of the 1996 Communications Decency Act that protects social media companies from liability for content posted by their users and allows them to remove lawful but objectionable posts. Section 230 states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230).

In 1996, when this law was introduced and passed, the internet was still in its infancy, and no one at that time could have envisioned how big it would become. Today, social media corporations operate in an almost omnipotent capacity, creating their own governing boards and moderation teams to filter out negative information. However, while the focus is often on the information posted by users, what gets ignored is how that same information gets directed to the consumer. Facebook, Snapchat, Twitter, and even YouTube rely on so-called “influencers” to steer posts, advertisements, and product placement toward the ordinary consumer, known as the “user.” To accomplish their goal, which at the end of the day is the same as any corporation’s, to turn a profit, information is aimed at whichever person it will best hold the attention of. At this point, there are little to no regulations on how information may be targeted at an individual. For instance, the FCC has rules that “limit the amount of time broadcasters, cable operators, and satellite providers can devote to advertisements during children’s programs,” yet no comparable rules exist for social media. There is only one case in which the FTC has levied fines for content directed at children, and even that suit rested on the notion that Google, through its subsidiary YouTube, “illegally collected personal information from children without their parents’ consent.” When it comes to advertising aimed at children, Google itself sets the parameters.

Social media has grown too large for itself and has far outgrown its place as a private entity that cannot be regulated. The FCC was created in 1934 to replace the outdated Federal Radio Commission. Just as it was recognized in 1934 that new technology calls for change, today we need to call on Congress to regulate social media; it is not too farfetched to say that our children and our children’s futures depend on it.

In my next blog, I will discuss what regulation of social media could look like and explain in more detail how social media has grown too big for itself.

A Slap in the Face(book)?

Social media law has become something of a contentious issue in recent years. While most people nowadays could not imagine life without social media, many also realize that its influence on our daily lives may not be entirely a good thing. As the technology has advanced to unimaginable levels and the platforms have boomed in popularity, it seems as though our smartphones and Big Tech know our every move. The leading social media platform, Facebook, has around 1.82 billion active users a day, with people volunteering all sorts of personal information to be stored in its databases. Individual profiles hold pictures of our children, our friends, our family, the meals we eat, the locations we visit. “What’s on your mind?” is the opening invite on any Facebook page, and one can only hazard a guess as to how many people actually answer that question on a daily basis. Social media sites know our likes, our dislikes, our preferences, our moods, even the shoes we want to buy for that dress we are thinking of wearing to the party we are looking forward to in three weeks!

With all that knowledge, comes enormous power, and through algorithmic design, social media can manipulate our thoughts and beliefs by controlling what we see and don’t see. With all that power, therefore, should come responsibility, but Section 230 of the Communications Decency Act (CDA) has created a stark disconnect between the two. What started out as a worthy protection for internet service providers for the content posted by others, has more recently drawn criticism for the lack of accountability held by social media oligarchs such as Jack Dorsey (Twitter) and Mark Zuckerberg (Facebook).

However, that could all be about to change.

On May 28, 2017, three friends lost their lives in a deadly car accident in which the 17-year-old driver, Jason Davis, crashed into a tree at an estimated speed of 113 mph. Landen Brown, 20, and Hunter Morby, 17, were passengers. Tragic accident? Or wrongful death?

Parents of the deceased lay blame on the Snapchat app, which offered a “Speed Filter” that clocked how fast you were moving and allowed users to snap and share videos of their movement in progress.

You see where this is going.

As quickly became the trend, the three youths used the app to see how fast they could record the speed of their car. Just moments before their deaths, Davis had posted a ‘snap’ clocking the car’s speed at 123 mph. In Lemmon v. Snap, the parents of two of the boys brought suit against the social media provider, Snap, Inc., claiming that the app feature encouraged reckless driving and ultimately served to “entice” the young users to their deaths.

Until now, social media platforms and other internet service providers have enjoyed near-absolute immunity from liability. Written in 1996, Section 230 was designed to protect tech companies from liability, for suits such as defamation, over third-party posts. In the early days, it was small tech companies or online businesses with a ‘comments’ feature that generally saw the benefits of the statute. Twenty-five years later, many people are questioning the role of Section 230 in the vastly developing era of social media and the powerful pass it grants Big Tech in many of its societal shortcomings.

Regarded more as open forums than as publishers or speakers, social media platforms such as Facebook, Twitter, TikTok, Instagram, and Snapchat have been shielded by Section 230 from legal claims of harm caused by the content posted on their sites.

Applied broadly, it is argued, Section 230 prevents Snap, Inc. from being held legally responsible for the deaths of the three boys, and that is the defense the tech company relied upon. The district court dismissed the case on those grounds, holding that the captured speeds fell into the category of content published by a third party, for which the service provider cannot be held liable. The Ninth Circuit, however, disagreed. The court’s interesting swerve around such immunity is that the Speed Filter led to the boys’ deaths regardless of whether their captured speeds were ever posted. In other words, it did not matter whether the vehicle’s speed was shared with others in the app; the fact that the app promotes, and rewards, high speed (although the reward system within the app is not entirely clear) is enough.

The implications of this could be tremendous. At a time when debate over reevaluating Section 230 is already heavy, this precedential interpretation could lead to some cleverly formulated legal arguments for holding internet service providers accountable for some of the highly damaging effects of internet, social media, and smartphone usage.

For the many benefits the internet has to offer, it can no longer be denied that there is another, very ugly side to internet usage, in particular with social media.

It is somewhat of an open secret that social media platforms such as Facebook and Instagram purposely design their apps to be addictive to users. It is also no secret that there is a growing association between social media usage and suicide, depression, and other mental health issues. Cyberbullying has long been a very real problem. In addition, studies have shown that smart-device screen time has shockingly detrimental impacts on very young children’s social and emotional development, not to mention the now commonly known damage it can do to a person’s eyesight.

An increased rate of divorce has been linked to smartphones, and distracted driving – whether it be texting or keeping tabs on your Twitter retweets or Facebook ‘likes’ – is on the rise. Even an increase in accidents while walking has been linked to distractions caused by these addictive devices.

With the idea of accountability being the underlying issue, it can of course be argued that almost all of these problems should be a matter of personal responsibility. Growing apart from your spouse? Ditch your cell phone and reinvent date night. Feeling depressed about your life as you ‘heart’ a picture of your colleague’s wine glass in front of a perfect sunset beach backdrop? Close your laptop and stop comparing yourself to everyone else’s highlights. Step in front of a cyclist while LOL’ing in a group text? Seriously… put your Apple Watch hand in your pocket and look where you are going! The list of personal blame is endless. But then we hear about three young friends, two still in their teens, who lose their lives engaged with social media, and suddenly it’s not so easy to blame them for their own devastating misfortune.

While social media sites cannot be held responsible for the content posted by others, no matter how hurtful it might be to some or what actions it leads others to take, should they be held responsible for negligently making their sites so addictive, so emotionally manipulative, and so targeted toward individual users that such extensive and compulsive use leads to dire consequences? According to the Ninth Circuit, negligent app design can in fact be a cause of action for wrongful death.

With a potential crack in the 230-armor, the questions many lawyers will be scrambling to ask are:

      • What duties do smart-device producers and/or internet service providers owe to their users?
      • Are these duties breached by continuing to design, produce, and provide products that are now known to create such disturbing problems?
      • What injuries have occurred, and were those injuries foreseeably caused by any such breaches of duty?

For the time being, it is unlikely that any substantial milestone will be reached with regard to Big Tech accountability, but the Ninth Circuit’s decision in this case has certainly delivered a powerful blow to Big Tech’s apparent untouchability in the courtroom.

As awareness of all these social media-related issues grows, could this court decision open the door to further suits for defective or negligent product design resulting in death or injury? Time will tell… stay tuned.
