How Defamation and Minor Protection Laws Ultimately Shaped the Internet


The Communications Decency Act (CDA) was originally enacted to shield minors from indecent and obscene online material. Despite those origins, Section 230 of the CDA is now commonly invoked as a broad legal safeguard that shields social media platforms from liability for content posted on their sites by third parties. Interestingly, the reasoning behind this safeguard arises both from defamation common law and from constitutional free speech principles. As the internet has grown, however, the safeguard has drawn increasing criticism. But is the legislation actually undesirable? Many would say no, for Section 230 contains “the 26 words that created the internet.”

 

Origin of the Communications Decency Act

The CDA was introduced and enacted as an attempt to shield minors from obscene or indecent content online. Although parts of the Act were later struck down as First Amendment free speech violations, the Court left Section 230 intact. The creation of Section 230 was influenced by two landmark defamation decisions.

The first case, Cubby, Inc. v. CompuServe, Inc. (1991), involved an internet service that hosted around 150 online forums. A claim was brought against the provider when a columnist in one of the forums posted a defamatory comment about his competitor. The competitor sued the online distributor for the published defamation. The court categorized the internet service provider as a distributor because it did not review any forum content before that content was posted to the site. As a distributor, the provider faced no legal liability, and the case was dismissed.

 

Distributor Liability

Distributor liability refers to the limited legal consequences that a distributor is exposed to for defamation. Common examples of distributors are bookstores and libraries. The theory behind distributor liability is that it would be impossible for distributors to moderate and censor every piece of content they disperse, both because of the sheer volume and because of the impossibility of knowing whether something is false.

The second case that influenced the creation of Section 230 was Stratton Oakmont, Inc. v. Prodigy Servs. Co., in which the court used publisher liability theory to find the internet service provider liable for third-party defamatory postings published on its site. The court deemed the website a publisher because it moderated and deleted certain posts, even though there were far too many postings a day to review each one.

 

Publisher Liability

Under common law principles, a person who publishes a third party’s defamatory statement bears the same legal responsibility as the creator of that statement. This liability is often referred to as “publisher liability” and is based on the theory that a publisher has the knowledge, opportunity, and ability to exercise control over the publication. For example, a newspaper publisher can face legal consequences for defamatory content printed in its pages. The Stratton Oakmont decision was significant because it meant that if a website attempted to moderate any posts, it could be held liable for all of them.

 

Section 230’s Creation

In response to the Stratton Oakmont case and the conflicting court decisions regarding internet service providers’ liability, members of Congress introduced an amendment to the CDA that later became Section 230. The amendment was specifically introduced and passed with the goal of encouraging the development of unregulated free speech online by relieving internet providers of liability for third-party content.

 

Text of the Act: Subsection (c)(1)

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Section 230 further provides that…

“No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.”

The language above shields interactive computer services from legal consequences arising from content that third parties post on their platforms. Courts have interpreted this subsection as providing online platforms broad immunity from suits over third-party content. Because of this, Section 230 has become the principal legal safeguard against lawsuits over site content.

 

The Good

  • Section 230 can be viewed as one of the most important pieces of legislation protecting free speech online. One of its unique aspects is that it essentially extends free speech protection to private, non-governmental companies.
  • Without CDA 230, the internet would be a very different place. This section shaped some of the internet’s most distinctive characteristics: it promotes free speech and offers the ability for worldwide connectivity.
  • CDA 230 does not fully eliminate liability or court remedies for victims of online defamation. Rather, it makes only the creator liable for their speech, instead of both the speaker and the publisher.

 

 

The Bad

  • Because of the legal protections Section 230 provides, social media networks have less incentive to regulate false or deceptive posts. Deceptive online posts can have an enormous impact on society: false posts can alter election results or fuel dangerous misinformation campaigns, like the QAnon conspiracy theory and the anti-vaccination movement.
  • Section 230 is twenty-five years old and has not been updated to match the internet’s extensive growth.
  • Big Tech companies have been left largely unregulated with respect to their online marketplaces.

 

The Future of 230

While Section 230 is still successfully invoked by social media platforms, concerns over the aging legislation have mounted. Just recently, Justice Thomas, known for his quietness on the bench, wrote a concurring opinion articulating his view that the government should regulate content providers as common carriers, like utility companies. What implications could that have for the internet? With the growing criticism surrounding Section 230, will Congress finally attempt to fix this legislation? If not, will the Supreme Court be left to tackle the problem itself?

Is Cyberbullying the Newest Form of Police Brutality?

Police departments across the country are calling keyboard warriors into action to help them solve crimes…but at what cost?

In a survey of 539 police departments in the U.S., 76% of departments said that they used their social media accounts to solicit tips on crimes. Departments post “arrested” photos to celebrate arrests, surveillance footage for suspect identification, and some even post themed wanted posters, like the Harford County Sheriff’s Office.

The process for using social media as an investigative tool is dangerously simple and the consequences can be brutal. A detective thinks posting on social media might help an investigation, so the department posts a video or picture asking for information. The community, armed with full names, addresses, and other personal information, responds with some tips and a lot of judgmental, threatening, and bigoted comments. Most police departments have no policy for removing posts after information has been gathered or cases are closed, even if the highlighted person is found to be innocent. A majority of people who are arrested are not even convicted of a crime.

Law enforcement’s use of social media in this way threatens the presumption of innocence, creates a culture of public humiliation, and often results in a comment section of bigoted and threatening comments.

On February 26, 2020, the Manhattan Beach Police Department posted a mugshot of Matthew Jacques on their Facebook and Instagram pages for their “Wanted Wednesday” social media series. The pages have roughly 4,500 and 13,600 followers, respectively, most of them local. The post equated Matthew to a fugitive, and commenters responded publicly with information about where he worked. Matthew tried to call off work out of fear of a citizen’s arrest. That fear turned out to be warranted when two strangers came looking for him at his workplace. Matthew eventually lost his job because he was too afraid to return to work.

You may be thinking this is not a big deal. This guy was probably wanted for something really bad and the police needed help. After all, the post said the police had a warrant. Think again.

There was no active warrant for Matthew at the time, his only (already resolved) warrant came from taking too long to schedule remedial classes for a 2017 DUI. Matthew was publicly humiliated by the local police department. The department even refused to remove the social media posts after being notified of the truth. The result?

Matthew filed a complaint against the department for defamation (as well as libel per se and false light invasion of privacy). Typically, defamation requires the plaintiff to show:

1) a false statement purporting to be fact; 2) publication or communication of that statement to a third person; 3) fault amounting to at least negligence; and 4) damages, or some harm caused to the person or entity who is the subject of the statement.

Here, the department made a false statement – that there was a warrant. They published it on their social media, satisfying the second element. They did not check readily available public records that showed Matthew did not have a warrant. Finally, Matthew lived in fear and lost his job. Clearly, he was harmed.

The police department claimed its postings were protected by the California Constitution, governmental immunity, and the First Amendment. Fortunately, the court denied the department’s anti-SLAPP motion. Over a year after the postings went up, the department took them down and settled the lawsuit with Matthew.

Some may think that Matthew’s case is an anomaly and that, usually, the negative attention is warranted, perhaps even socially beneficial because it further de-incentivizes criminal activity through humiliation and social stigma. However, since most arrests don’t result in convictions, many of the police’s cyberbullying victims are likely innocent. Even for those who are guilty, leaving these posts up can raise the barrier to societal re-entry, which can increase recidivism rates. A negative digital record can make finding jobs and housing more difficult. Yet many commenters assume the highlighted individual’s guilt and take to their keyboards to shame them.

Here’s one example of a post and comment section from the Toledo Police Department Facebook page:

Unless departments change their social media use policies, they will continue to face defamation lawsuits and continue to further the degradation of the presumption of innocence.

Police departments should discontinue the use of social media in the humiliating ways described above. At the very least, they should consider using this tactic only for violent, felonious crimes. Some departments have already changed their policies.

The San Francisco Police Department has stopped posting mugshots for criminal suspects on social media. According to Criminal Defense Attorney Mark Reichel, “The decision was made in consultation with the San Francisco Public Defender’s Office who argued that the practice of posting mugshots online had the potential to taint criminal trials and follow accused individuals long after any debt to society is paid.” For a discussion of some of the issues social media presents to maintaining a fair trial, see Social Media, Venue and the Right to a Fair Trial.

Do you think police departments should reconsider their social media policies?

Is social media promoting or curbing Asian hate?

The COVID-19 pandemic has caused our lives to twist and turn in many unexpected ways. Of all the ethnic groups in the world, the Asian population has taken the hardest hit from pandemic-related stigma, since the virus was first identified in China. This has caused a significant increase in hate crimes, particularly toward the Asian community, in the real world as well as the cyber world. Because the number of internet users is almost uncountable, the impact created online, as well as offline, is massive. Social media can create bias, and social media has the power to remedy bias. The question becomes: which way is the scale currently tipping? Is the internet making social network users more vulnerable to manipulation? Are hatred and bias “contagious” through cyber means? Or, on the contrary, is social media remedying the bias that people have created through the internet?

Section 230 of the Communications Decency Act governs the cyber world. It essentially provides legal immunity to interactive computer services such as TikTok, Facebook, Instagram, and Snapchat. The Act states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” As a result, posts and comments that appear on these platforms carry no legal ramifications for the tech companies. What incentive, then, do these companies have to regulate what is posted on their websites? With the current wave of Asian hate, will it snowball into a giant problem if social media platforms fail to step in? On the other hand, if these tech companies elect to step in, to what extent can they regulate or supervise?

The hatred and bias sparked by the pandemic have not been limited to the real world. Asian Americans have reported the largest increase in serious incidents of online hate and harassment during this crisis. Many have been verbally attacked or insulted with racist and xenophobic slurs merely because they have Asian last names or look Asian. According to a new survey shared exclusively with USA TODAY, compared with the previous year there was an 11% increase in sexual harassment, stalking, physical threats, and other incidents reported by Asian Americans, many of them occurring on online social media platforms. According to findings by the Center for the Study of Hate and Extremism at California State University, hate crimes against Asian Americans rose 149% from 2019 to 2020. That is 149% in one year. In addition, L1ght, an AI-based internet abuse detection organization, reported a 900% increase in hate speech on Twitter since the start of the pandemic. This may be just the tip of the iceberg, as many hate crime incidents go unreported. As you may recall, former President Trump publicly referred to the COVID-19 coronavirus as the “Chinese Virus,” which led to a record-breaking level of brutal online harassment against Asian Americans. This also gave rise to similar remarks such as “Kung Flu” and “Wuhan Virus,” and social media users began using hashtags of the like. The hashtag “#ChineseVirus” alone has been used over 68,000 times on Instagram.

We must not forget that the real world and the cyber world are interconnected. Ideas consumed online can significantly influence our offline actions and may lead to violence. Last week, I had the privilege of interviewing New York Police Department Lieutenant Mike Wang, who is in charge of the NYPD’s Asian Hate Crimes Task Force in Brooklyn. He expressed his concerns about the Asian community being attacked, seniors in particular. Lieutenant Wang said during the interview: “It’s just emotionally difficult and heartbreaking. New York Police Department is definitely taking unprecedented measures to combat these crimes. These incidents cannot be overlooked.” Most of these incidents were unprovoked. Examples include an elderly Thai immigrant who died after being shoved to the ground; a Filipino American slashed in the face with a box cutter, leaving a large permanent scar; a Chinese woman slapped and then set on fire; and six Asian Americans brutally shot to death at spas in one night. Wang indicated that crimes against Asian Americans are nothing new; they have existed for quite some time. However, the rage and frustration of the COVID-19 pandemic have fueled this fire to an uncontrollable level. Wang encourages citizens to report crimes in general, not just hate crimes, as we need to be more vocal. You can read more about hate crimes and bias on the city’s website.

From verbal harassment to physical assaults, there have been thousands of reported cases since the pandemic started. These are typically hate crimes whose offenders believe the Asian population is to blame for the spread of the virus. Perhaps people’s daily interactions online play an important role here. Almost everyone in this country uses some sort of social network, and the more hatred and bias people see online, the more likely they are to exhibit violence in real life. Why? Because such behavior starts to look acceptable when so many others engage in it. Accountability is rarely an issue, especially through social channels: at most, the user’s post is removed or the account is suspended. It is therefore questionable whether the tech companies are doing enough to address these issues. When hateful behavior surfaces in the cyber world, what are the policies of the social media giants? Twitter, for instance, has implemented a hate speech policy that prohibits accounts whose primary purpose is to incite harm toward others, and it reserves the discretion to remove inappropriate content or suspend users who violate the policy. You can read more about its Hateful Conduct Policy on its website. Other social media platforms such as Facebook, TikTok, and YouTube all have similar policies addressing hateful behavior, violent threats, and harassment; however, are they sufficient? According to the CEO of the Anti-Defamation League, online users continue to encounter strongly hateful comments despite the companies’ claims that they take these issues seriously. Facebook and YouTube still allow users to use the racially insensitive term “Kung Flu,” while TikTok has prohibited it.
Comics artist Ethan Van Sciver joked about killing Chinese people in one of his videos but later claimed it was “facetious sarcasm.” YouTube merely removed the video, stating that it violated its hate speech policy. As previously mentioned, accountability on these social networks is minimal.

Social networks have certainly helped spread the news, keeping everyone in the country informed about the horrible incidents that are happening on a regular basis. Beyond spreading the virus of hatred and bias, social networks also raise awareness and promote positivity. As Asian hate crimes spike, public figures and celebrities are taking a stand in this battle. Allure magazine’s editor-in-chief Michelle Lee and designer Phillip Lim are among them; they have posted videos on Instagram sharing their own experiences of racism in an effort to raise awareness, using the hashtag #StopAsianHate in their posts. On March 20, 2021, “Killing Eve” star Sandra Oh joined a “Stop Asian Hate” protest in Pittsburgh. She said she is “proud to be Asian” while giving a powerful speech urging people to fight racism and hatred toward the Asian community. The video of her speech went viral within a day and has been viewed more than ninety-three thousand times on YouTube since. I have to say that our generation is not afraid to speak up about the hate and injustice we face in our society today. This generation is taking it upon itself to confront racism instead of relying on authorities to recognize the threats and implement policy changes. This is how #StopAAPIHate came about. The hashtag stands for “Stop Asian American and Pacific Islander Hate.” Stop AAPI Hate is a nonprofit organization that tracks incidents of hate and discrimination against Asian Americans and Pacific Islanders in the United States. It was recently created, and it uses social media to bring awareness, education, and resources to the Asian community and its allies. Stop AAPI Hate has also utilized networks like Instagram to organize support groups, provide aid, and pressure those in power to act.
The following are influential members of the AAPI community who are voicing their concerns and beliefs: Christine Chiu, “The Bling Empire” star, producer, and entrepreneur; Chriselle Lim, digital influencer, content creator, and entrepreneur; Tina Craig, founder and CEO of U Beauty; Daniel Martin, makeup artist and global director of Artistry & Education at Tatcha; Yu Tsai, celebrity and fashion photographer and host; Sarah Lee and Christine Chang, co-founders and co-CEOs of Glow Recipe; Aimee Song, entrepreneur and digital influencer; Samuel Hyun, chairman of the Massachusetts Asian American Commission; Daniel Nguyen, actor; Mai Quynh, celebrity makeup artist; Ann McFerran, founder and CEO of Glamnetic; Nadya Okamoto, founder of August; Sharon Pak, founder of INH; Sonja Rasula, founder of Unique Markets; and Candice Kumai, writer, journalist, director, and best-selling author. The list could go on, but the point these influential voices make is that taking things to social media is not just about holding people or companies accountable; it is about creating meaningful changes in our society.

The internet is more powerful than we think. It is dangerous to allow individuals to attack or harass others, even through a screen. I understand that social media platforms cannot blatantly censor whatever content they deem inappropriate, as doing so may raise free speech concerns for their users; however, there has to be more they can do. Perhaps they could create more rigorous policies in an effort to combat hate speech. If a user’s identity could be tied to his or her real-life credentials, it might curb the tendencies of potential or repeat offenders. The question is: how do you draw the line between freedom of speech and social order?

 

The First Amendment Is Still Great For The United States…Or Is It?

In the traditional sense, of course it is. The idea of free speech should always be upheld, without question. When it comes to the 21st century, however, this 230-year-old amendment poses extreme roadblocks. Here, I will discuss how the First Amendment inhibits our ability to tackle extremism and hatred on social media platforms.

One of the things I will highlight is how other countries are able to enact legislation to deal with the ever-growing hate that festers on social media. They are able to do so because they do not have a “First Amendment.” The idea of free speech is simply ingrained in those democracies; they do not need an archaic document, to which they are forever bound, to tell them so. Here in the U.S., as we all know, Congress can be woefully slow and inefficient, with a particular emphasis on refusing to update outdated laws.

The First Amendment successfully blocks any government attempt to regulate social media platforms. Any such attempt is met, mostly by conservatives, with cries that the government wants to take away free speech, and the courts would not allow the legislation to stand. This in turn means Facebook, Snapchat, Instagram, Reddit, and all the other platforms never have to worry about the white supremacist and other extremist rhetoric that is prevalent on their platforms. Worse still, most if not all of their algorithms push those vile posts to hundreds of thousands of people. We are “not allowed” to introduce laws establishing a baseline for regulating platforms in order to crack down on the terrorism that flourishes there. Just as you are not allowed to scream fire in a movie theater, it should not be permissible to post and form groups to spread misinformation, white supremacy, racism, and the like. Those topics do not serve the interests of greater society. Yes, regulation would make it harder for people to easily share their thoughts, no matter how appalling those thoughts may be. However, preventing content from spreading online, where millions of people can see it within thirty seconds, does not take away anyone’s free speech rights. Platforms don’t even necessarily have to delete the posts; they could simply change their algorithms to stop promoting misinformation and hate, and promote truth instead, even when the truth is boring. They won’t do that, though, because promoting lies is what makes them money, and it’s always money over the good of the people. Another reason this doesn’t limit free speech is that people can still form in-person groups, talk in private, start an email chain, and so on. The idea behind regulating what can be posted on social media is to make the world a better place for all: to make it harder for racist ideas and terrorism to spread, especially to young, impressionable children and young adults.
This shouldn’t be a political issue; shouldn’t we all want to limit the spread of hate?

It is hard for me to imagine the January 6th insurrection at our Capitol occurring had we had regulations on social media in place. Many of the groups that planned the insurrection had “Stop the Steal” groups and other election-fraud conspiracy pages on Facebook. Imagine if we had a law requiring social media platforms to take down posts and pages spreading false information that could incite violence or threaten the security of the United States. I realize that is broad discretion; the legislation would have to be worded very narrowly, and decisions to remove posts should be made with the highest level of scrutiny. Had such a regulation been in place, these groups could not have reached as wide an audience. I think Ashli Babbitt and Officer Sicknick would still be alive had Facebook been obligated to take those pages and posts down.

Alas, we are unable even to consider legislation addressing this cause, because the courts and many members of Congress refuse to acknowledge that we must update our laws and rethink how we read the First Amendment. The founders could never have imagined the world we live in today. Congress and the courts need to stop pretending that a document written more than two hundred years ago is some untouchable work from God. The founders wrote the First Amendment to ensure that no one would be thrown in jail for speaking their mind, that people who hold different political views could not be persecuted, and that people would have the ability to express themselves. Enacting legislation to prevent blatant lies, terrorism, racism, and white supremacy from spreading so easily online does not go against the First Amendment. It does not tell people they can’t hold those views; it does not throw anyone in prison or hand out fines for those views; and white supremacist or other racist ideas are not “political discourse.” Part of the role of government is to protect the people and to do what is right for society as a whole, and I fail to see how telling social media platforms to take down these appalling posts is outweighed by the idea that “nearly everything is free speech, even if it poisons the minds of our youth and perpetuates violence, because that’s what the First Amendment says.”

Let’s now look at the United Kingdom and what it is able to do because it has no law comparable to the First Amendment. In May 2021, the British Parliament introduced the Online Safety Bill. If passed into law, the bill would place a duty of care on social media firms and websites to ensure they take swift action to remove illegal content, such as hate crimes, harassment, and threats directed at individuals, including abuse that falls below the criminal threshold. As currently written, the bill would also require social media companies to limit the spread of, and remove, terrorist material, content promoting suicide, and child sexual abuse material, and would mandate that companies report such postings to the authorities. Lastly, the Online Safety Bill would require companies to safeguard freedom of expression and to reinstate material unfairly removed. This includes forbidding tech firms from discriminating against particular political viewpoints. The bill reserves the right for Ofcom (the UK’s communications regulator) to hold them accountable for the arbitrary removal of journalistic content.

The penalties for not complying with the proposed law would be significant. Social media companies that do not comply could be fined up to 10% of their annual global turnover or £18 million (about $25 million), whichever is greater. Further, the bill would allow Ofcom to bring criminal actions against named senior managers whose companies do not comply with Ofcom’s requests for information.

It will be interesting to see how implementation goes if the bill is passed. I believe it is a good stepping stone toward reining in the willful ignorance displayed by these companies. Again, it is important that such bills be carefully scrutinized; otherwise you may end up with a bill like the one proposed in India. While I will not discuss that bill at length in this post, you can read more about it here. In short, India’s bill is widely seen as autocratic in nature, giving the government the ability to fine or criminally prosecute social media companies and their employees if they fail to remove content the government does not like (for instance, posts criticizing its new agriculture regulations).

Bringing this ship back home: can you imagine a bill like Britain’s ever passing in the U.S., let alone being introduced? I certainly can’t, because we still insist on worshiping an amendment that is 230 years old. The founders wrote it based on the circumstances of their time; they could never have imagined what today would look like. Ultimately, the decision to allow us to move forward and adopt our own laws regulating social media companies rests with the Supreme Court. Until the Supreme Court wakes up and allows a modern reading and interpretation of the First Amendment, any law to hold companies accountable is doomed to fail. It is illogical to put a piece of paper above the safety and well-being of Americans, yet we consistently do just that. We will keep seeing reports of how red flags were missed and people were murdered as a result, or of how Facebook pages helped spread another “Big Lie” that led to another Capitol besieged. All because we cannot move away from our past to brighten our future.

 

What would you do to help curtail this social dilemma?

Has Social Media Become the Most Addictive Drug We Have Ever Seen?

Before we get started, I want you to take a few minutes and answer the following questions to yourself:

  1. Do you spend a lot of time thinking about social media or planning to use social media?
  2. Do you feel urges to use social media more and more?
  3. Do you use social media to forget about personal problems?
  4. Do you often try to reduce the use of social media without success?
  5. Do you become restless or troubled if unable to use social media?
  6. Do you use social media so much that it has had a negative impact on your job or studies?

How did you answer these questions? If you answered yes to more than three of them, then according to the Addiction Center you may have, or be developing, a social media addiction. Research has shown an undeniable link between social media use, negative mental health, and low self-esteem. Negative emotional reactions are produced not only by the social pressure of sharing things with others but also by the comparison of material things and lifestyles that these sites promote.
On Instagram and Facebook, users see curated content: advertisements and posts specifically designed to appeal to them based on their interests. Unlike at any other time in history, individuals today can see exactly how other people live and how those lifestyles differ from their own. That sense of self-worth is what is being exploited to curate information. Children are taught from a young age that if you are not a millionaire you are not successful, and they build barometers of success on invisible benchmarks. This is contributing to an increase in depression and suicide among young adults.

Social media has become a stimulant whose effects mimic those of gambling and recreational drugs. Retweets, likes, and shares have been shown to act on the dopamine-driven reward pathways of the brain. “[I]t’s estimated that people talk about themselves around 30 to 40% of the time; however, social media is all about showing off one’s life and accomplishments, so people talk about themselves a staggering 80% of the time. When a person posts a picture and gets positive social feedback, it stimulates the brain to release dopamine, which again rewards that behavior and perpetuates the social media habit.” “Chasing the high” is a common theme among individuals with addictive personalities, and when you see people on social media posting every aspect of their lives, from the meal they ate to their weekend getaway and everything in between, that is what they are chasing; the high is the satisfaction of other people liking the post. We have all been there: you post a picture or a moment of great importance in your life, and the likes and reactions start pouring in. The feeling that love produces differs significantly from the feeling you get when there is no reaction at all. A recent Harvard study showed that “the act of disclosing information about oneself activates the same part of the brain that is associated with the sensation of pleasure, the same pleasure that we get from eating food, getting money or having even had sex.” Our brains have come to associate self-disclosure with reward. Ask yourself: when was the last time you posted about the death of a family member or friend, and why was that moment of sadness worth sharing with the world?
Researchers in this Harvard Study found that “when people got to share their thoughts with a friend or family member, there was a larger amount of activity in the reward region of their brain, and less of a reward sensation when they were told their thoughts would be kept private.”

“The social nature of our brains is biologically based,” said lead researcher Matthew Lieberman, Ph.D., a UCLA professor of psychology and of psychiatry and biobehavioral sciences. This alone helps explain where social media has gone: it has evolved into a system that takes advantage of our biological makeup. “[A]lthough Facebook might not have been designed with the dorsomedial prefrontal cortex in mind, the social network is very much in sync with how our brains are wired.” There is a reason that when your mind is idling, the first thing it wants to do is check social media. Lieberman, one of the founders of the field of social cognitive neuroscience, explains: “When I want to take a break from work, the brain network that comes on is the same network we use when we’re looking through our Facebook timeline and seeing what our friends are up to. . . That’s what our brain wants to do, especially when we take a break from work that requires other brain networks.”

This is a very real issue with very real consequences. The suicide rate for children and teens is rising. According to a September 2020 report by the U.S. Department of Health and Human Services, the suicide rate among pediatric patients rose 57.4% from 2007 to 2018. Suicide is the second-leading cause of death in children, behind only accidents. Teens in the U.S. who spend more than three hours a day on social media may be at heightened risk for mental health issues, according to a 2019 study in JAMA Psychiatry. The study, which adjusted for previous mental health diagnoses, concludes that while adolescents who use social media more intensively have an increased risk of internalizing problems or reporting mental health concerns, more research is needed on “whether setting limits on daily social media use, increasing media literacy, and redesigning social media platforms are effective means of reducing the burden of mental health problems in this population.” Social media has become a coping mechanism for some to deal with stress, loneliness, or depression. We have all encountered someone who posts their entire life on social media, and more often than not we brush it off or even make a crude joke, when in fact this may be someone who is hurting and looking for help in a place that offers no solace.

I write about this to emphasize a very real and dangerous issue that grows worse every single day. For far too long, social media companies have hidden behind a shield of immunity.

Section 230 is the provision of the 1996 Communications Decency Act that shields social media companies from liability for content posted by their users and allows them to remove lawful but objectionable posts. It states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” 47 U.S.C. § 230.

When this law was introduced and passed in 1996, the internet was still in its infancy, and no one at the time could have envisioned how big it would become. Today, social media corporations operate in an almost omnipotent capacity, creating their own governing boards and moderators to filter out negative information. But while the focus is often on the information being put out by users, what gets ignored is how that same information is directed back at the consumer. Facebook, Snapchat, Twitter, and even YouTube rely on “influencers” to steer posts, advertisements, and product placement toward the ordinary user. To accomplish their goal, which at the end of the day is the same as any corporation’s, turning a profit, information is targeted at whatever will keep a person’s attention. At this point there are little to no regulations on how information may be directed at an individual. The FCC, for instance, has rules that limit the amount of time broadcasters, cable operators, and satellite providers can devote to advertisements during children’s programs, yet there are no comparable rules for online content aimed at children. There is only one case in which the FTC has levied fines over content directed at children, and even that suit rested on the claim that Google, through its subsidiary YouTube, “illegally collected personal information from children without their parents’ consent.” When it comes to advertising to children online, Google itself sets the parameters.

Social media has grown too large for itself and has far outgrown its place as a private industry beyond regulation. The FCC was created in 1934 to replace the outdated Federal Radio Commission. Just as it was recognized in 1934 that new technology calls for change, today we need to call on Congress to regulate social media. It is not farfetched to say that our children, and our children’s futures, depend on it.

In my next blog, I will explore what regulation of social media could look like and explain in more detail how social media has grown too big for itself.


A Slap in the Face(book)?

Social media law has become somewhat of a contentious issue in recent years. While most people nowadays could not imagine life without social media, many also realize that its influence on our daily lives may not be such a great thing. As the technology has advanced to unimaginable levels and the platforms have boomed in popularity, it seems as though our smartphones and Big Tech know our every move. The leading social media platform, Facebook, has around 1.82 billion daily active users, with people volunteering all sorts of personal information to be stored in its databases. Individual profiles hold pictures of our children, our friends, our family, the meals we eat, the places we visit. “What’s on your mind?” is the opening invite on any Facebook page, and one can only hazard a guess as to how many people actually answer that question on a daily basis. Social media sites know our likes, our dislikes, our preferences, our moods, the shoes we want to buy for that dress we are thinking of wearing to the party we are looking forward to in three weeks!

With all that knowledge comes enormous power, and through algorithmic design, social media can manipulate our thoughts and beliefs by controlling what we see and don’t see. With all that power, therefore, should come responsibility, but Section 230 of the Communications Decency Act (CDA) has created a stark disconnect between the two. What started out as a worthy protection for internet service providers against liability for content posted by others has more recently drawn criticism for the lack of accountability faced by social media oligarchs such as Jack Dorsey (Twitter) and Mark Zuckerberg (Facebook).

However, that could all be about to change.

On May 28, 2017, three friends lost their lives in a deadly car accident in which the 17-year-old driver, Jason Davis, crashed into a tree at an estimated speed of 113 mph. Landen Brown, 20, and Hunter Morby, 17, were passengers. Tragic accident? Or wrongful death?

Parents of the deceased lay blame on the Snapchat App, which offered a ‘Speed Filter’ that would clock how fast you were moving, and allowed users to snap and share videos of their movements in progress.

You see where this is going.

As quickly became the trend, the three youths used the app to see how fast they could record their car going. Just moments before their deaths, Davis had posted a ‘snap’ clocking the car’s speed at 123 mph. In Lemmon v. Snap, the parents of two of the boys brought suit against the social media provider, Snap, Inc., claiming that the app feature encouraged reckless driving and ultimately served to “entice” the young users to their deaths.

Until now, social media platforms and other internet service providers have enjoyed near-absolute immunity from liability. Written in 1996, Section 230 was designed to protect tech companies from liability for third-party posts in suits such as defamation. In the early days, it was small tech companies, or online businesses with a ‘comments’ feature, that generally saw the benefits of the Code. Twenty-five years later, many people are questioning the role of Section 230 in the vastly developed era of social media and the powerful pass it grants Big Tech for many of its societal shortcomings.

Regarded as open forums rather than publishers or speakers, social media platforms such as Facebook, Twitter, TikTok, Instagram, and Snapchat have been shielded by Section 230 from legal claims of harm caused by the content posted on their sites.

Applied broadly, it is argued, Section 230 prevents Snap, Inc. from being held legally responsible for the deaths of the three boys in this case, and that is the defense the tech company relied upon. The district court dismissed the case on those grounds, holding that the captured speeds fall into the category of content published by a third party, for which the service provider cannot be held liable. The Ninth Circuit, however, disagreed. The court’s notable swerve around such immunity is that the speed filter contributed to the boys’ deaths regardless of whether their captured speeds were ever posted. In other words, it did not matter whether the vehicle’s speed was shared with others in the app; the fact that the app promotes, and rewards, high speed (although the reward system within the app is not entirely clear) is enough.

The implications of this could be tremendous. At a time when debate over reevaluating Section 230 is already heated, this precedential interpretation could lead to some cleverly formulated legal arguments for holding internet service providers accountable for some of the highly damaging effects of internet, social media, and smartphone usage.

For the many benefits the internet has to offer, it can no longer be denied that there is another, very ugly side to internet usage, in particular with social media.

It is somewhat of an open secret that social media platforms such as Facebook and Instagram purposely design their apps to be addictive to their users. It is also no secret that there is a growing association between social media usage and suicide, depression, and other mental health issues. Cyberbullying has long been a very real problem. In addition, studies have shown that smart-device screen time has shockingly detrimental impacts on very young children’s social and emotional development, not to mention the now commonly known damage it can do to a person’s eyesight.

An increased rate of divorces has been linked to smart phones, and distracted driving – whether it be texting or keeping tabs on your Twitter retweets, or Facebook ‘likes’– is on the increase. Even an increase in accidents while walking has been linked to distractions caused by the addictive smart devices.

With the idea of accountability being the underlying issue, it can of course be argued that almost all of these problems should be a matter of personal responsibility. Growing apart from your spouse? Ditch your cell phone and reinvent date night. Feeling depressed about your life as you ‘heart’ a picture of your colleague’s wine glass in front of a perfect sunset beach backdrop? Close your laptop and stop comparing yourself to everyone else’s highlights. Step in front of a cyclist while LOL’ing in a group text? Seriously… put your Apple Watch hand in your pocket and look where you are going! The list of personal blame is endless. But then we hear about three young friends, two still in their teens, who lose their lives engaged with social media, and suddenly it’s not so easy to blame them for their own devastating misfortune.

While social media sites cannot be held responsible for the content posted by others, no matter how hurtful it might be to some, or no matter what actions it leads others to take, should they be held responsible for negligently making their sites so addictive, so emotionally manipulative and so targeted towards individual users, that such extensive and compulsive use leads to dire consequences? According to the Ninth Circuit, negligent app design can in fact be a cause of action for wrongful death.

With a potential crack in the 230-armor, the questions many lawyers will be scrambling to ask are:

      • What duties do the smart device producers and/or internet service providers owe to their users?
      • Are these duties breached by continuing to design, produce, and provide products that are now known to create such disturbing problems?
      • What injuries have occurred, and were those injuries foreseeably caused by any such breaches of duty?

For the time being, it is unlikely that any substantial milestone will be reached with regard to Big Tech accountability, but the Ninth Circuit’s decision in this case has certainly delivered a powerful blow to Big Tech’s apparent untouchability in the courtroom.

As awareness of all these social media related issues grows, could this court decision open the door to further suits over defective or negligent product design resulting in death or injury? Time will tell… stay tuned.

Facebook: Watching your every move since 2012

It finally happened.  My mother joined Facebook.  I’m not sure what the current population of planet earth is, but it’s probably around 1.28 billion.  I know this because that’s how many people are currently using Facebook[1].

A few years ago when the company went public, people started complaining about a perceived lack of privacy.  Most people were concerned that the constantly evolving format created a need to always be aware that what you were posting would be directed to the appropriate audience.  What many people hadn’t yet realized was that Facebook had begun mining information at an unprecedented rate.

Sign in to Facebook today and notice that those shoes you just considered purchasing are now featured prominently in your news feed. That Google search you just performed has now caused advertisements to display alongside your profile. It almost seems like Mark Zuckerberg is stalking us. Taking its data-mining scheme to the next level, Facebook has gone on a spending spree, recently purchasing the popular apps Instagram and WhatsApp. Those who use these apps have probably noticed that you can log in to them using your Facebook credentials.

As the complaints have increased, Facebook has come up with a proposed solution – the “anonymous login.”  What it will do is allow users to login to third-party apps without giving any personal information to that app.  However, Facebook will still verify your identity, know what app you’ve signed in to, and they’ll know how often you sign in and how much time you spend on that app[2].

It seems that “anonymous” doesn’t really mean what we thought.  Where should the data-mining line be drawn?


[1] http://expandedramblings.com/index.php/resource-how-many-people-use-the-top-social-media/3/

[2] http://mashable.com/2014/05/01/facebooks-anonymous-login-is-evil-genius/

Should we add Doxx to the Lexicon?

Emily Bazelon’s most recent NY Times Magazine article, The Online Avengers, details the activities of a group of individuals who “scour the internet for personal data” of bullies and then “publicly link that information to the perpetrator’s transgressions.” This practice of trolling the internet for transgressions is known as “doxxing.” The article pays particular attention to a man named Ash who, together with a woman named Katherine, created an online group called OpAntiBullying. Although the group members never met in person, and never met the victims whose causes they championed, they worked together, for a while at least, to publicly shame adolescent bullies. One focus of the article is the infighting that eventually occurred among the small group of “do-gooders,” highlighting the fragile bond between zealots brought together by a common cause, and the way in which their united enthusiasm led to an equally fevered undoing.

What struck me most about the article was the use of the word doxx, which I hadn’t heard before. A cursory Google search suggests the word has yet to gain much traction. Urbandictionary.com defines doxxing as exposing someone’s true identity, a practice the site suggests “is one of the scummiest things someone can do on the internet.” In contrast, Emily Bazelon profiles doxxing in a more positive light. In her article, Bazelon credits doxxing with bringing down the defendants in the Steubenville sexual assault case and with bringing awareness to a similar assault in Canada.

Doxxers are hackers. In most instances, a doxx can occur only if one breaks into someone’s Twitter account or Instagram feed and finds incriminating comments or pictures. Consequently, most doxxers are anonymous, as was the case in the article.

But the practice and the goals of doxxers create a dichotomy with which I am not sure I am comfortable. While a doxxer’s goal may be laudable, the conduct necessary to reach it is often illegal. It is a little like Robin Hood: committing a crime to achieve a greater good. I am not sure where I come out on this, though I suspect I fall on the side of legality (would one expect otherwise from a lawyer?).

Regardless, I suspect doxx will become a word uttered with increasing frequency in the coming year. Thoughts, examples, or opinions on doxxing are greatly welcomed.


The Birth of RoboTweeting

NBC News reports that companies are becoming “Twitter-savvy” when it comes to consumer complaints. In some instances, customers logging complaints receive patronizing replies. For example, according to the article, when @OccupyLA tweeted “you can help by stop stealing people’s houses!!”, Bank of America tweeted back, “We’d be happy to review your account.” Corporate manipulation of Twitter is yet another example of how “the system” can corral innovative technology for its own use. Gen Xers, hipsters, and naughts fled Facebook in droves once businesses hijacked the platform. Now Twitter. Can Instagram be far behind?