Private or not private, that is the question.

Section 230 of the Communications Decency Act (CDA) protects private online companies from liability for content posted by others. This immunity also grants internet service providers the freedom to regulate what is posted on their sites. What has faced much criticism of late, however, is social media's immense power to silence any voices the platforms' CEOs disagree with.

Section 230(c)(2), known as the Good Samaritan clause, states that no provider shall be held liable for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

When considered in the context of a 1996 understanding of internet influence (the year the CDA was enacted), this law might seem perfectly reasonable. Fast forward 25 years, though: social media has become massively influential over society and the spread of political information, and a strong demand has developed for a repeal, or at the very least a review, of Section 230.

The Good Samaritan clause is what shields Big Tech from legal complaint. The law does not define obscene, lewd, lascivious, filthy, harassing, or excessively violent, and "otherwise objectionable" leaves the providers' room for discretion all the more open-ended. The issue at the heart of much criticism of Big Tech is that the censorship companies such as Facebook, Twitter, and YouTube (owned by Google) impose on particular users is not fairly exercised; many conservatives feel the platforms' policies are not applied to them evenhandedly.

Ultimately, there is little argument that social media platforms like Facebook and Twitter are private companies, which curbs any claims of First Amendment violations under the law. The First Amendment of the US Constitution only prevents the government from interfering with an individual's right to free speech. There is no constitutional provision that dictates any private business owes the same.

Former President Trump's recent class action lawsuits against Facebook, Twitter, Google, and each of their CEOs, however, challenge the characterization of these entities as private.

In response to the January 6th Capitol takeover by Trump supporters, Facebook and Twitter suspended the accounts of the then-sitting president of the United States.

The justification was that President Trump violated their rules by inciting violence and encouraging an insurrection following the disputed 2020 election results. In the midst of the unrest, Twitter, Facebook, and Google also removed a video posted by Trump in which he called for peace and urged protesters to go home. The explanation given was that "on balance we believe it contributes to, rather than diminishes the risk of ongoing violence," because the video also doubled down on the belief that the election was stolen.

Following long-standing contention with Big Tech throughout his presidency, Trump's main argument in the lawsuits is that the tech giants Facebook, Twitter, and Google should no longer be considered private companies because their respective CEOs, Mark Zuckerberg, Jack Dorsey, and Sundar Pichai, actively coordinate with the government to censor politically oppositional posts.

Those who support Trump probably all wish to believe this case has legal standing.

Many others who share concerns about the almost omnipotent power of Silicon Valley may admit that Trump makes a valid point. But legally, deep down, it might feel like a stretch. Could it be? Should it be? Maybe. But will Trump see the outcome he is looking for? The initial honest answer was "probably not."

However, on July 15, 2021, White House press secretary Jen Psaki informed the public that the Biden administration is in regular contact with Facebook to flag "problematic posts" spreading "disinformation" about Covid-19 vaccinations.

Wait… what?! The White House is in communication with social media platforms to determine what the public is and isn't allowed to hear regarding vaccine information? Or "disinformation," as Psaki called it.

Conservative legal heads went into a spin. Is this allowed? Or does this strengthen Trump’s claim that social media platforms are working as third-party state actors?

If it is determined that social media companies are in fact acting as strong-arm agents for the government, controlling what information the public is allowed to access, then they too should be subject to the First Amendment. And if social media is subject to the First Amendment, then all information, including information that questions, or even completely disagrees with, the left-leaning policies of the current White House administration, is protected by the US Constitution.

Referring back to the language of the law, Section 230(c)(2) requires that actions to restrict access to information be taken in good faith. An objective look at some of the posts removed from Facebook, Twitter, and YouTube, along with many of the posts that are not removed, raises the question of how much "good faith" is truly exercised. When a former president of the United States is still blocked from social media, but the Iranian leader Ali Khamenei is allowed to post what appears to be nothing short of a threat to that same president's life, it can certainly make you wonder. Or when illogical insistence on unquestioned mass emergency vaccinations, now paired with continued mask wearing, is rammed down our throats, but a video of one of the creators of the mRNA vaccine expressing his doubts about the vaccine's safety for the young is removed from YouTube, it ought to make everyone ask whose side Big Tech is really on. Are they really in the business of allowing populations to make informed decisions of their own, gaining information from a public forum of ideas? Or are they working on behalf of government actors to push an agenda?

One way or another, the courts will decide, but Trump's class action lawsuit could prove a pivotal moment in the future of Big Tech's power.

How Defamation and Minor Protection Laws Ultimately Shaped the Internet


The Communications Decency Act (CDA) was originally enacted with the intention of shielding minors from indecent and obscene online material. Despite those origins, Section 230 of the CDA is now commonly used as a broad legal safeguard, shielding social media platforms from liability for content posted on their sites by third parties. Interestingly, the reasoning behind this safeguard arises both from defamation common law and from constitutional free speech law. As the internet has grown, however, this legal safeguard has attracted increasing criticism. But is the legislation actually undesirable? Many would say no, for Section 230 contains "the twenty-six words that created the internet."

 

Origin of the Communications Decency Act

The CDA was introduced and enacted as an attempt to shield minors from obscene or indecent content online. Although parts of the Act were later struck down as First Amendment free speech violations, the Court left Section 230 intact. The creation of Section 230 was influenced by two landmark defamation decisions.

The first case, Cubby, Inc. v. CompuServe Inc. (S.D.N.Y. 1991), involved an internet service that hosted around 150 online forums. A claim was brought against the provider when a columnist on one of the forums posted a defamatory comment about his competitor, and the competitor sued the online distributor for the published defamation. The court categorized the internet service provider as a distributor because it did not review any forum content before it was posted to the site. As a distributor, there was no legal liability, and the case was dismissed.

 

Distributor Liability

Distributor liability refers to the limited legal consequences that a distributor is exposed to for defamation. Common examples of distributors are bookstores and libraries. The theory behind distributor liability is that it would be impossible for distributors to moderate and censor every piece of content they disperse, given the sheer volume and the impossibility of knowing whether something is false.

The second case that influenced the creation of Section 230 was Stratton Oakmont, Inc. v. Prodigy Servs. Co., in which the court used publisher liability theory to find the internet provider liable for third-party defamatory postings published on its site. The court deemed the website a publisher because it moderated and deleted certain posts, despite the fact that there were far too many postings a day to regulate each one.

 

Publisher Liability

Under common law principles, a person who publishes a third party's defamatory statement bears the same legal responsibility as the creator of that statement. This liability is often referred to as "publisher liability," and is based on the theory that a publisher has the knowledge, opportunity, and ability to exercise control over the publication. For example, a newspaper publisher can face legal consequences for the content printed within its pages. The Stratton Oakmont decision was significant because it meant that if a website attempted to moderate certain posts, it would be held liable for all of them.

 

Section 230’s Creation

In response to the Stratton Oakmont case and the ambiguous court decisions regarding internet service providers' liability, members of Congress introduced an amendment to the CDA that later became Section 230. The amendment was specifically introduced and passed with the goal of encouraging the development of unregulated free speech online by relieving internet providers of liability for content posted by others.

 

Text of the Act: Subsection (c)(1)

"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

Section 230 further provides that:

“No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.”

The language above removes legal consequences for providers arising from content posted on their forums. Courts have interpreted this subsection as providing broad immunity to online platforms from suits over third-party content. Because of this, Section 230 has become the principal legal safeguard against lawsuits over site content.

 

The Good

  • Section 230 can be viewed as one of the most important pieces of legislation protecting free speech online. One of its unique aspects is that it essentially extends free speech protection, applying it to private, non-governmental companies.
  • Without CDA 230, the internet would be a very different place. This section shaped some of the internet's most distinctive characteristics: it promotes free speech and offers worldwide connectivity.
  • CDA 230 does not fully eliminate liability or court remedies for victims of online defamation. Rather, it makes only the creator liable for their speech, instead of both the speaker and the publisher.

 

 

The Bad

  • Because of the legal protections Section 230 provides, social media networks have less incentive to regulate false or deceptive posts. Deceptive online posts can have an enormous impact on society: false posts can alter election results or fuel dangerous misinformation campaigns, like the QAnon conspiracy theory and the anti-vaccination movement.
  • Section 230 is twenty-five years old and has not been updated to match the internet's extensive growth.
  • Big Tech companies have been left largely unregulated with respect to their online marketplaces.

 

 The Future of 230

While Section 230 is still successfully invoked by social media platforms, concerns over the archaic legislation have mounted. Just recently, Justice Thomas, known for being a quiet Justice, wrote a concurring opinion articulating his view that the government should regulate content providers as common carriers, like utility companies. What implications could that have for the internet? With the growing criticism surrounding Section 230, will Congress finally attempt to fix this legislation? If not, will the Supreme Court be left to tackle the problem itself?

Is social media promoting or curbing Asian hate?

The COVID-19 pandemic has caused our lives to twist and turn in many unexpected ways. Because the virus originated in China, the Asian population took an especially hard hit, and hate crimes, particularly towards the Asian community, increased significantly in the real world as well as the cyber world. With billions of internet users, the impact created online, as well as offline, is massive. Social media can create bias, and social media has the power to remedy bias. The question becomes: which side of the scale is it currently tipping towards? Is the internet making social network users more vulnerable to manipulation? Are hatred and bias "contagious" through cyber means? Or, on the contrary, is social media remedying the bias that people have created through the internet?

Section 230 of the Communications Decency Act governs the cyber world. It essentially provides legal immunity to internet platforms such as TikTok, Facebook, Instagram, and Snapchat. The Act states: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." That being said, posts and comments that appear on these platforms carry no legal ramifications for the tech companies. Hence, do these companies have any incentive to regulate what is posted on their websites? With the current wave of Asian hate, will it snowball into a giant problem if social media platforms fail to step in? On the other hand, if these tech companies elect to step in, to what extent can they regulate or supervise?

The hatred and bias sparked by the pandemic have not been limited to the real world. Asian Americans have reported the biggest increase in serious incidents of online hate and harassment throughout this crisis. Many were verbally attacked or insulted with racist and xenophobic slurs merely because they have Asian last names or look Asian. According to a new survey shared exclusively with USA TODAY, there was an 11% increase over the previous year in sexual harassment, stalking, physical threats, and other incidents reported by Asian Americans, many of them occurring on online social media platforms. According to findings by the Center for the Study of Hate and Extremism at California State University, hate crimes against Asian Americans rose 149% from 2019 to 2020. That is 149% in one year. In addition, L1ght, an AI-based internet abuse detection organization, reported a 900% increase in hate speech on Twitter since the start of the pandemic. This may be just the tip of the iceberg, as many hate crime incidents go unreported. As you may recall, former President Trump publicly referred to the COVID-19 coronavirus as the "Chinese Virus," which led to a record-breaking level of brutal online harassment against Asian Americans. This also gave rise to similar remarks such as "Kung Flu" and "Wuhan Virus," and social media users began using hashtags of the like. The hashtag "#ChineseVirus" alone has been used over 68,000 times on Instagram.

We must not forget that the real world and the cyber world are interconnected. Ideas consumed online can have a significant impact on our offline actions, and that impact may lead to violence. Last week, I had the privilege of interviewing New York Police Department Lieutenant Mike Wang, who is in charge of the NYPD's Asian Hate Crimes Task Force in Brooklyn. He expressed his concerns about the Asian community being attacked, seniors in particular. Lieutenant Wang said during the interview: "It's just emotionally difficult and heartbreaking. New York Police Department is definitely taking unprecedented measures to combat these crimes. These incidents cannot be overlooked." Most of these incidents were unprovoked. Examples include an elderly Thai immigrant who died after being shoved to the ground, a Filipino-American slashed in the face with a box cutter and left with a large permanent scar, a Chinese woman slapped and then set on fire, and six Asian-Americans brutally shot to death at spas in one night. Wang indicated that crimes against Asian-Americans are nothing new; they have existed for quite some time. However, the rage and frustration of the COVID-19 pandemic fueled this fire to an uncontrollable level. Wang encourages citizens to report crimes in general, not just hate crimes, as we need to be more vocal. You can read more about hate crimes and bias on the city's website.

From verbal harassment to physical assaults, there have been thousands of reported cases since the pandemic started. These are typically hate crimes, as offenders believe that the Asian population should be blamed for the spread of the virus. People's daily interactions online may play an important role here. Almost everyone in our country uses some sort of social network, and the more hatred and bias people see online, the more likely they are to exhibit violence in real life. Why? Because people come to think such behaviors are acceptable when so many others are doing it. Accountability does not seem to be much of a deterrent through social channels: at most, the user's post is removed or the account is suspended. That being said, it is questionable whether the tech companies are doing enough to address these issues. When hateful behavior surfaces in the cyber world, what are the policies of the social media giants? Twitter, for instance, has implemented a hate speech policy that prohibits accounts whose primary purpose is to incite harm towards others, and it reserves the discretion to remove inappropriate content or suspend users who violate the policy. You can read more about its Hateful Conduct Policy on its website. Other platforms such as Facebook, TikTok, and YouTube all have similar policies in place to address hateful behavior, violent threats, and harassment; however, are they sufficient? According to the CEO of the Anti-Defamation League, online users continue to experience strong hateful comments despite the companies' claims that they are taking things seriously. Facebook and YouTube still allow users to use the racially insensitive term "Kung Flu," while TikTok has prohibited it. The comics artist Ethan Van Sciver joked about killing Chinese people in one of his videos but later claimed it was "facetious sarcasm"; YouTube merely removed the video, stating that it violated its hate speech policy. As I previously mentioned, accountability on these social networks is minimal.

Social networks have definitely helped spread the news, keeping everyone in the country informed about the horrible incidents happening on a regular basis. Besides spreading the virus of hatred and bias, social networks also raise awareness and promote positivity. As Asian hate crimes spike, public figures and celebrities are taking a stand in this battle. Allure magazine's editor-in-chief Michelle Lee and designer Phillip Lim are among them; they have posted videos on Instagram sharing their own experiences of racism in an effort to raise awareness, using the hashtag #StopAsianHate in their posts. On March 20, 2021, "Killing Eve" star Sandra Oh joined a "Stop Asian Hate" protest in Pittsburgh. She said she is "proud to be Asian" while giving a powerful speech urging people to fight against racism and hatred towards the Asian community. The video of her speech went viral online in just a day and has been viewed more than ninety-three thousand times on YouTube since.

I have to say that our generation is not afraid to speak up about the hate and injustice we face in our society today. This generation is taking it upon itself to fight racism instead of relying on authorities to recognize the threats and implement policy changes. This is how #StopAAPIHate came about. The hashtag stands for "Stop Asian American and Pacific Islander Hate." Stop AAPI Hate is a nonprofit organization that tracks incidents of hate and discrimination against Asian Americans and Pacific Islanders in the United States. It was recently created as a social media platform to bring awareness, education, and resources to the Asian community and its allies, and it has used networks like Instagram to organize support groups, provide aid, and pressure those in power to act. Influential members of the AAPI community vocalizing their concerns and beliefs include: Christine Chiu, "The Bling Empire" star, producer, and entrepreneur; Chriselle Lim, digital influencer, content creator, and entrepreneur; Tina Craig, founder and CEO of U Beauty; Daniel Martin, makeup artist and global director of Artistry & Education at Tatcha; Yu Tsai, celebrity and fashion photographer and host; Sarah Lee and Christine Chang, co-founders and co-CEOs of Glow Recipe; Aimee Song, entrepreneur and digital influencer; Samuel Hyun, chairman of the Massachusetts Asian American Commission; Daniel Nguyen, actor; Mai Quynh, celebrity makeup artist; Ann McFerran, founder and CEO of Glamnetic; Nadya Okamoto, founder of August; Sharon Pak, founder of INH; Sonja Rasula, founder of Unique Markets; and Candice Kumai, writer, journalist, director, and best-selling author. The list could go on, but the point these influential speakers make is that taking things to social media is not just about holding people or companies accountable; it is about creating meaningful change in our society.

The internet is more powerful than we think it is, and it is dangerous to allow individuals to attack or harass others, even through a screen. I understand that social media platforms cannot blatantly censor whatever content they deem inappropriate without drawing free speech objections from their users; however, there has to be more that they can do, perhaps by creating more rigorous policies to combat hate speech. If users' identities could be tied to their real-life credentials, it might curb potential and repeat offenders. The question is: how do you draw the line between freedom of speech and social order?

 

The First Amendment Is Still Great For The United States…Or Is It?

In the traditional sense, of course it is. The idea of free speech should always be upheld, without question. When it comes to the 21st century, however, this nearly two-and-a-half-century-old amendment poses extreme roadblocks. Here, I will be discussing how the First Amendment inhibits the ability to tackle extremism and hatred on social media platforms.

One of the things I will be highlighting is how other countries are able to enact legislation to deal with the ever-growing hate that festers on social media. They are able to do so because they do not have a "First Amendment." The idea of free speech is simply ingrained into those democracies; they do not need an archaic document, to which they are forever bound, to tell them that. Here in the U.S., as we all know, Congress can be woefully slow and inefficient, with a particular emphasis on refusing to update outdated laws.

The First Amendment successfully blocks any government attempt to regulate social media platforms. Any attempt is met mostly by conservatives yelling about the government wanting to take away free speech, and the courts will not allow the legislation to stand. This in turn means Facebook, Snapchat, Instagram, Reddit, and all the other platforms never have to worry about the white supremacist and other extremist rhetoric that is prevalent on their platforms. Worse still, most, if not all, of their algorithms push those vile posts to hundreds of thousands of people. We are "not allowed" to introduce laws that set a baseline for regulating platforms in order to crack down on the terrorism that flourishes there. Just as you are not allowed to scream fire in a movie theater, it should not be allowed to post and form groups to spread misinformation, white supremacy, racism, and the like. Those topics do not serve the interests of greater society.

Yes, regulation would make it harder for people to easily share their thoughts, no matter how appalling they may be. However, not allowing hate to spread online, where millions of people can see it within 30 seconds, is not taking away anyone's free speech right. Platforms would not even necessarily have to delete the posts; they could simply change their algorithms to stop promoting misinformation and hate, and promote truth instead, even if the truth is boring. They won't do that, though, because promoting lies is what makes them money, and it is always money over the good of the people. Another reason this does not limit free speech is that people can still form in-person groups, talk in private, start an email chain, and so on. The idea behind regulating what can be posted on social media websites is to make the world a better place for all: to make it harder for racist ideas and terrorism to spread, especially to young, impressionable children and young adults. This shouldn't be a political issue; shouldn't we all want to limit the spread of hate?

It is hard for me to imagine the January 6th insurrection at our Capitol occurring had we had regulations on social media in place. Many of the groups that planned the insurrection had "stop the steal" groups and other election-fraud conspiracy pages on Facebook. Imagine if we had a law requiring social media platforms to take down posts and pages spreading false information that could incite violence or threaten the security of the United States. I realize that is broad discretion; the legislation would have to be worded very narrowly, and decisions to remove posts should be made with the highest level of scrutiny. Had a regulation like that been in place, these groups would not have been able to reach as wide an audience. I think Ashli Babbitt and Officer Sicknick would still be alive had Facebook been obligated to take those pages and posts down.

Alas, we are unable even to consider legislation addressing this cause, because the courts and many members of Congress refuse to acknowledge that we must update our laws and redefine how we read the First Amendment. The founders could never have imagined the world we live in today. Congress and the courts need to stop pretending that a piece of paper written over two hundred years ago is some untouchable work from God. The founders wrote the First Amendment to ensure no one would be thrown in jail for speaking their mind, so that people who hold different political views could not be persecuted, and to give people the ability to express themselves. Enacting legislation to prevent blatant lies, terrorism, racism, and white supremacy from spreading so easily online does not go against the First Amendment. It does not tell people they can't hold those views; it does not throw anyone in prison or hand out fines for those views; and white supremacist or other racist ideas are not "political discourse." Part of the role of government is to protect the people and to do what is right for society as a whole, and I fail to see how that duty is outweighed by the idea that "nearly everything is free speech, even if it poisons the minds of our youth and perpetuates violence, because that's what the First Amendment says."

Let's now look at the United Kingdom and what it is able to do because it has no law comparable to the First Amendment. In May 2021, the UK government published the draft Online Safety Bill. If passed into law, the bill will place a duty of care on social media firms and websites to ensure they take swift action to remove illegal content, such as hate crimes, harassment, and threats directed at individuals, including abuse that falls below the criminal threshold. As currently written, the bill would also require social media companies to limit the spread of, and remove, terrorist material, suicide-related content, and child sexual abuse material, and would mandate that they report such postings to the authorities. Lastly, the Online Safety Bill would require companies to safeguard freedom of expression and reinstate material unfairly removed. This includes forbidding tech firms from discriminating against particular political viewpoints, and the bill reserves the right for Ofcom (the UK's communications regulator) to hold them accountable for the arbitrary removal of journalistic content.

The penalties for not complying with the proposed law would be significant. Social media companies that do not comply could be fined up to 10% of their annual global turnover or £18 million (roughly $25 million), whichever is greater. Further, the bill would allow Ofcom to bring criminal actions against named senior managers whose company does not comply with Ofcom's requests for information.

It will be interesting to see how the implementation of this bill goes if it is passed. I believe it is a good steppingstone toward reining in the willful ignorance displayed by these companies. Again, it is important that these bills be carefully scrutinized; otherwise you may end up with a bill like the one proposed in India. While I will not be discussing that bill at length in this post, you can read more about it here. In short, India's bill is widely seen as autocratic in nature, giving the government the ability to fine and/or criminally prosecute social media companies and their employees if they fail to remove content that the government does not like (for instance, posts criticizing its new agriculture regulations).

Bringing this ship back home: can you imagine a bill like Britain's ever passing in the US, let alone being introduced? I certainly can't, because we still insist on worshiping an amendment that is 230 years old. The founders wrote the Bill of Rights based on the circumstances of their time; they could never have imagined what today would look like. Ultimately, the decision whether we can move forward and adopt laws regulating social media companies rests with the Supreme Court. Until the Supreme Court wakes up and allows a modern reading of the First Amendment, any law holding these companies accountable is doomed to fail. It is illogical to put a piece of paper above the safety and well-being of Americans, yet we consistently do just that. We will keep seeing reports of red flags that were missed, and people murdered as a result, or of Facebook pages that helped spread another "Big Lie" resulting in another Capitol siege. All because we cannot move away from our past to brighten our future.

 

What would you do to help curtail this social dilemma?

A Slap in the Face(book)?

Social media law has become somewhat of a contentious issue in recent years. While most people nowadays could not imagine life without it, many realize, too, that its influence on our daily lives may not be a great thing. As the technology has advanced to unimaginable levels and the platforms have boomed in popularity, it seems as though our smartphones and Big Tech know our every move. The leading social media platform, Facebook, has around 1.82 billion active users a day, with people volunteering all sorts of personal information to be stored in its databases. Individual profiles hold pictures of our children, our friends, our family, the meals we eat, the locations we visit. "What's on your mind?" is the opening invite on any Facebook page, and one can only hazard a guess as to how many people actually answer that question on a daily basis. Social media sites know our likes, our dislikes, our preferences, our moods, the shoes we want to buy for that dress we are thinking of wearing to the party we are looking forward to in three weeks!

With all that knowledge comes enormous power, and through algorithmic design, social media can manipulate our thoughts and beliefs by controlling what we see and don't see. With all that power, therefore, should come responsibility, but Section 230 of the Communications Decency Act (CDA) has created a stark disconnect between the two. What started out as a worthy protection for internet service providers against liability for content posted by others has more recently drawn criticism for the lack of accountability of social media oligarchs such as Jack Dorsey (Twitter) and Mark Zuckerberg (Facebook).

However, that could all be about to change.

On May 28, 2017, three friends lost their lives in a deadly car accident in which the 17-year-old driver, Jason Davis, crashed into a tree at an estimated speed of 113 mph. Landen Brown, 20, and Hunter Morby, 17, were passengers. Tragic accident? Or wrongful death?

Parents of the deceased laid the blame on the Snapchat app, which offered a "Speed Filter" that would clock how fast users were moving and allowed them to snap and share videos of their movements in progress.

You see where this is going.

As quickly became the trend, the three youths used the app to record how fast they could drive. Just moments before their deaths, Davis had posted a "snap" clocking the car's speed at 123 mph. In Lemmon v. Snap, the parents of two of the boys brought suit against the social media provider, Snap, Inc., claiming that the app feature encouraged reckless driving and ultimately served to "entice" the young users to their deaths.

Until now, social media platforms and other internet service providers have enjoyed the protection of near-absolute immunity from liability. Written in 1996, Section 230 was designed to protect tech companies from liability, such as defamation suits, over third-party posts. In the early days, it was small tech companies, or online businesses with a "comments" feature, that generally saw the benefits of the statute. Twenty-five years later, many people are questioning the role of Section 230 in the vastly developed era of social media, and the powerful pass it grants Big Tech for many of its societal shortcomings.

Regarded more as open forums than as publishers or speakers, social media platforms such as Facebook, Twitter, TikTok, Instagram, and Snapchat have been shielded by Section 230 from legal claims of harm caused by the content posted on their sites.

Applied broadly, Section 230 arguably prevents Snap, Inc. from being held legally responsible for the deaths of the three boys, and that is the defense the tech company relied upon. The district court dismissed the case on those grounds, holding that the captured speeds fell into the category of content published by a third party, for which the service provider cannot be held liable. The Ninth Circuit, however, disagreed. The court's interesting swerve around such immunity is that the speed filter contributed to the deaths of the boys regardless of whether their captured speeds were ever posted. In other words, it did not matter whether the vehicle's speed was shared with others in the app; the fact that the app promoted, and rewarded, high speed (although the reward system within the app is not entirely clear) was enough.

The implications of this could be tremendous. At a time when debate over reevaluating Section 230 is already heavy, this precedential interpretation could lead to some cleverly formulated legal arguments for holding internet service providers accountable for some of the highly damaging effects of internet, social media, and smartphone usage.

For all the benefits the internet has to offer, it can no longer be denied that there is another, very ugly side to internet usage, particularly with social media.

It is somewhat of an open secret that social media platforms such as Facebook and Instagram purposely design their apps to be addictive to their users. It is also no secret that there is a growing association between social media usage and suicide, depression, and other mental health issues. Cyberbullying has long been a very real problem. In addition, studies have shown that smart-device screen time has shockingly detrimental impacts on very young children's social and emotional development, not to mention the now commonly known damage it can do to a person's eyesight.

An increased rate of divorce has been linked to smartphones, and distracted driving, whether it be texting or keeping tabs on your Twitter retweets or Facebook "likes," is on the rise. Even accidents while walking have been linked to distractions caused by these addictive smart devices.

With the idea of accountability being the underlying issue, it can of course be argued that almost all of these problems should be a matter of personal responsibility. Growing apart from your spouse? Ditch your cell phone and reinvent date night. Feeling depressed about your life as you "heart" a picture of your colleague's wine glass in front of a perfect sunset beach backdrop? Close your laptop and stop comparing yourself to everyone else's highlights. Step in front of a cyclist while LOL'ing in a group text? Seriously, put your Apple Watch hand in your pocket and look where you are going! The list of personal blame is endless. But then we hear about three young friends, two still in their teens, who lose their lives engaged with social media, and suddenly it's not so easy to blame them for their own devastating misfortune.

While social media sites cannot be held responsible for the content posted by others, no matter how hurtful it might be to some or what actions it leads others to take, should they be held responsible for negligently making their sites so addictive, so emotionally manipulative, and so targeted towards individual users that such extensive and compulsive use leads to dire consequences? According to the Ninth Circuit, negligent app design can in fact be a cause of action for wrongful death.

With a potential crack in the 230-armor, the questions many lawyers will be scrambling to ask are:

      • What duties do smart device producers and/or internet service providers owe to their users?
      • Are these duties breached by continuing to design, produce, and provide products now known to create such disturbing problems?
      • What injuries have occurred, and were those injuries foreseeably caused by any such breaches of duty?

For the time being, it is unlikely that any substantial milestone will be reached with regard to Big Tech accountability, but the Ninth Circuit's decision in this case has certainly delivered a powerful blow to Big Tech's apparent untouchability in the courtroom.

As awareness of all these social media related issues grows, could this court decision open the door to further suits for defective or negligent product design resulting in death or injury? Time will tell… stay tuned.

Facebook Posts Can Land You In Jail!

Did you know that a single Facebook post can land you in jail? It's true: an acting judge in Westchester, NY recently ruled that a "tag" notification on Facebook violated a protective order. The result of the violation: second-degree contempt, which can carry punishment of up to a year in jail. In January, a judge issued a restraining order against Maria Gonzalez, prohibiting her from communicating with her former sister-in-law, Maribel Calderon. Restraining orders are issued to prevent a person from making contact with protected individuals. Traditionally, courts interpreted contact to mean direct communication in person or by mail, email, phone, voicemail, or even text. Facebook tags, however, present a slightly different form of contact.

Unlike Facebook messages, tagging someone identifies the tagged person on the poster's Facebook page. The tag, however, has the concurrent effect of linking to the identified person's profile, thereby notifying them of the post. Ms. Gonzalez tagged Calderon in a post on her (Gonzalez's) timeline calling Calderon stupid and writing "you have a sad family." Gonzalez argued the post did not violate the protective order since there was no contact aimed directly at Calderon. Acting Westchester County (NY) Supreme Court Justice Susan Capeci felt otherwise, writing that a restraining order includes "contacting the protected party by electronic or other means." Other means, it seems, include personal posts put out on social media.

And social media posts aren't just evidence of violations of orders of protection; they are also grounds for supporting the issuance of restraining orders. In 2013, a court granted an order of protection for actress Ashley Tisdale against an alleged stalker. Tisdale's lawyers presented evidence of over 19,000 tweets that the alleged stalker had posted about the actress (an average of 100 tweets per day).

The bottom line: naming another person in a social media post, even one directed to the Twittersphere or Facebook community rather than toward a particular individual, is sufficient contact for purposes of supporting a restraining order or a violation thereof. We should all keep our posts positive, even more so if we have been told to stay away!

“There Oughta be a Law”

In February 2015, two young men dared Parker Drake to jump into a frigid ocean for their online entertainment. Parker, whom doctors diagnosed as having autism spectrum disorder, first "met" the men through Twitter. After several exchanges, the young men took Parker to the ocean, dared him "for laughs" to jump in, and then videotaped Parker's struggle to return to shore. The men published the video on Facebook; you could hear them laughing as Parker battled the waves.

Upon discovering the video, Manasquan, NJ municipal court officials charged the men with "endangering the welfare of an incompetent person." The problem, however, is that because 19-year-old Parker voluntarily jumped into the ocean, the men had not, in fact, committed a crime.

The case is another example of a moral wrong failing to translate into a legal wrong. Sadly, laws do not exist to punish those who use social media for bullying; just consider the events that prompted Tyler Clementi to jump off the George Washington Bridge. With this unfortunate event, Parker's mother joins the ranks of parents who fail to see justice in the courts for reprehensible harms committed against their children.

The response to the Parker Drake incident, much like the response to many social media wrongs for which the criminal law offers no retribution, is both outrage and frustration. Parker's mother is seeking justice in the civil courts. The politicians have weighed in, too: just last week, several New Jersey lawmakers announced their intention to draft a law aimed at punishing individuals who victimize disabled persons.

The law is not well suited for punishing harms like the one done to Parker. Our Constitution often stands as a roadblock between justice for social media wrongs and the right to voice opinions and ideas. First Amendment concerns prevent punishing many types of speech, particularly outside of the classroom. And then there are issues of "void for vagueness": a law that punishes those who exploit the developmentally disabled leaves open to interpretation what constitutes "exploitation." (And I suspect defendants charged with such a crime might try to escape punishment by challenging whether their "victim" was in fact developmentally disabled.)

I am interested in seeing the legislation New Jersey lawmakers propose. My hope is that they can walk the fine line between justice and free speech. The lawyer in me, however, suspects that the bill will never make it to the Governor's desk; as we have seen too many times before, regulating social media bullying through the courts is a nearly impossible task.

 

 

 

Does Social Media Replace the Need to Think? Has it Caused Our Critical Thinking Skills to Shrink?

As we all know, information disseminates through social media with lightning speed. Instantly, millions are up to date and provided with conclusions to a variety of stories and issues. Users simply acquire, retain, and click (i.e., re-tweet, like, or dislike), easy-peasy, free of thought. Is this troubling? Robert Frost once said, "Thinking isn't agreeing or disagreeing. That's voting."

Accordingly, if a re-tweet is nothing more than a vote for the product of someone else's analysis, and if clicking Facebook's "like" button simply allows over 1 billion users to avoid intellectual expression altogether, are we setting a trend that abandons 2,500 years of trans-disciplinary critical thinking? Is this dangerous to future generations? Or is it a good trend, beneficial perhaps? Is it worrisome that social media allows so many to routinely supplant active argumentation?

In 1987, the National Council for Excellence in Critical Thinking defined critical thinking as “the intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action.  In its exemplary form, it is based on universal intellectual values that transcend subject matter divisions: clarity, accuracy, precision, consistency, relevance, sound evidence, good reasons, depth, breadth, and fairness.   Critical thinking can be seen as having two components: 1) a set of information and belief generating and processing skills, and 2) the habit, based on intellectual commitment, of using those skills to guide behavior.”

Thinking is thus to be contrasted with the mere acquisition and retention of information, because it involves a particular way in which information is sought and treated. If this is true, it means that the net intellectual engagement of millions of social media users amounts to nothing more than a preferential re-tweet or a click of "like" or "dislike," with a smile.

For now, with only so many users, social media remains a form of entertainment. One may argue: Relax! It's fun. There are plenty of people left who still read and think! Okay, but what happens when 5 or 6 billion people become devoted users? What would that much fun look like? Perhaps it is just evolution?

Could it be that "thinking" is simply a natural process that will adapt to social media and evolve accordingly, in a beneficial way? Perhaps an active mode of thinking, where the thinker consciously separates facts from opinions and challenges assumptions, is becoming outdated?

Social Media Companies and Subpoenas

Given the digital goldmine of potential evidence available from social media websites, it is not surprising that they are increasingly targeted by search warrants and government subpoenas in criminal matters.

I recently had a conversation with an Assistant District Attorney who stated that when they subpoena digital records from social media websites like Facebook and Twitter, those companies disclose to the user that a subpoena has been issued seeking specific information from the account. As the ADA put it, "this makes it extremely difficult to investigate a person's social media activity during an on-going investigation." Further, by the time a subpoena is issued, the ADA already has credible evidence justifying it; the ADA is not issuing subpoenas to invade the privacy of an individual's innocent conduct.

This new policy from social media companies comes in the wake of the NSA surveillance scandal. Just last month, Edward Snowden appeared via videoconference at the South by Southwest technology conference, urging companies to increase their security and protect their users from government intrusion. Snowden wants the technology industry to get serious about protecting the privacy of its users and customers. Since the NSA scandal, social media companies have implemented new privacy policies that have made it more difficult for investigators to subpoena records, changing the way these companies cooperate with government officials.

Federal law provides that, in some circumstances, the government may compel social media companies to produce social media evidence without a warrant. The Stored Communications Act (“SCA”) governs the ability of governmental entities to compel service providers, such as Twitter and Facebook, to produce content (e.g., posts and Tweets) and non-content customer records (e.g., name and address) in certain circumstances. The SCA, which was passed in 1986, has not been amended to reflect society’s heavy use of new technologies and electronic services, such as social media, which have evolved since the SCA’s original enactment. As a result, courts have been left to determine how and whether the SCA applies to the varying features of different social media services.

Facebook has posted a Help page article titled "May I obtain contents of a user's account from Facebook using a civil subpoena?" The article cites the Stored Communications Act as the reason that federal law prohibits Facebook from disclosing user content in response to a civil subpoena, stating unequivocally:

“Federal law prohibits Facebook from disclosing user content (such as messages, timeline posts, photos, etc.) in response to a civil subpoena. Specifically, the Stored Communications Act, 18 U.S.C. § 2701 et seq., prohibits Facebook from disclosing the contents of an account to any non-governmental entity pursuant to a subpoena or court order.”

In response to Facebook's interpretation of the SCA, a federal district court judge has held that certain elements (e.g., private messages) of a user's Facebook or MySpace profile were protected from subpoena under the Stored Communications Act, analogizing them to a type of electronic message (the Bulletin Board System, or BBS) that is mentioned in the Act. Crispin v. Christian Audigier, Inc., 717 F. Supp. 2d 965 (C.D. Cal. 2010). The court quashed the defendant's subpoenas to Facebook and MySpace requesting private messages from the plaintiff's account.

As to the subpoenas seeking Facebook wall postings and MySpace comments, the Crispin court remanded the matter so that a fuller evidentiary record regarding the plaintiff's privacy settings could be developed before deciding whether to quash the subpoenas for that content. This implies that Facebook does not get to decide where the "privacy" bar is set in determining whether social networking postings and comments are subject to a subpoena, as its Help pages would lead us to believe; only the court gets to decide that.

Perhaps this is why companies like Facebook have implemented a disclosure rule that notifies users when a warrant or subpoena has been issued requesting their site-based content.

Are social media companies doing the right thing by notifying users when records are subpoenaed? Thoughts?