The Dark Side of TikTok

In Bethany, Oklahoma, a 12-year-old boy died of strangulation, with marks found on his neck. According to police, this was not a murder or a suicide but a TikTok challenge that had gone horribly wrong. The challenge is known by a variety of names, including the Blackout Challenge, the Pass Out Challenge, Speed Dreaming, and the Fainting Game. It involves kids asphyxiating themselves, either by choking themselves out by hand or by using a rope or a belt, to experience euphoria when they regain consciousness.

Even if the challenge does not result in death, medical professionals warn that it is extremely dangerous. Every moment the brain is deprived of oxygen or blood flow risks irreversible damage to part of the brain.

Unfortunately, the main goal on social media is to gain as many views as possible, regardless of the danger or expense.

Because of the pandemic, kids have been spending a lot of time alone and bored, which has led to preteens participating in social media challenges.

Some social media challenges are harmless, including the 2014 Ice Bucket Challenge, which raised millions of dollars for ALS research.

However, there has also been the Benadryl Challenge, which began in 2020 and urged people to overdose on the drug in an effort to hallucinate. People were also urged to lick surfaces in public as part of the coronavirus challenge.

One of the latest “challenges” on the social media app TikTok could have embarrassing consequences users never imagined possible. The idea of the Silhouette Challenge is to shoot a video of yourself dancing as a silhouette with a red filter covering up the details of your body. It started out as a way to empower people but has turned into a trend that could come back to haunt you. Participants generally start the video in front of the camera fully clothed. When the music changes, the user appears in less clothing, or nude, as a silhouette obscured by a red filter. But the challenge has been hijacked by people using software to remove that filter and reveal the original footage.

“If these filters are removed, that can certainly create an environment where kids’ faces are being put out in the public domain, and their bodies are being shown in ways they didn’t anticipate,” said Mekel Harris, a licensed pediatric and family psychologist. Young people who participate in these types of challenges aren’t thinking about the long-term consequences.

These challenges reveal a darker aspect to the app, which promotes itself as a teen-friendly destination for viral memes and dancing.

TikTok said it would remove such content from its platform. In an updated post to its newsroom, TikTok said:

“We do not allow content that encourages or replicates dangerous challenges that might lead to injury. In fact, it’s a violation of our community guidelines and we will continue to remove this type of content from our platform. Nobody wants their friends or family to get hurt filming a video or trying a stunt. It’s not funny – and since we remove that sort of content, it certainly won’t make you TikTok famous.”

TikTok urged users to report videos containing the challenge. And it told BBC News that on-screen text now reminds users not to imitate or encourage public participation in dangerous stunts and risky behavior that could lead to serious injury or death.

While these challenges may seem funny or rack up views on social media platforms, they can have long-lasting health consequences.

Under current U.S. law, only the authors of content, not the platforms that host it, are generally liable for content shared online. Section 230(c)(1) of the Communications Decency Act of 1996 states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This provision gives social media companies immunity for content published by other authors on their platforms, as long as intellectual property rights are not infringed. Although the law does not require social media sites to regulate their content, they can still remove content at their discretion. Guidelines on the laws regarding discretionary content censorship are sparse. Because the government is not regulating speech, this power has fallen into the hands of social media giants like TikTok. Inevitably, the personal agendas of these companies are shaping conversations, highlighting the necessity of debating the place of social media platforms in the national media landscape.

THE ROLE OF SOCIAL MEDIA:

Social media is unique in that it offers a huge public platform, instant access to peers, and measurable feedback in the form of likes, views, and comments. This creates strong incentives to get as much favorable peer evaluation and approval as possible. Social media challenges are particularly appealing to adolescents, who look to their peers for cues about what’s cool, crave positive reinforcement from their friends and social networks, and are more prone to risk-taking behaviors, particularly when they’re aware that those whose approval they covet are watching them.

Teens won’t necessarily stop to consider that laundry detergent is a poison that can burn their throats and damage their airways. Or that misusing medications like diphenhydramine​ (Benadryl) can cause serious heart problems, seizures and coma.​ What they will focus on is that a popular kid in class did this and got hundreds of likes and comments.

WHY ARE TEENS SUSCEPTIBLE:

Children are biologically primed to become much more susceptible to peer influence during puberty, and social media has magnified those peer-influence processes, making them more dangerous than ever before. Teens may find these activities entertaining and even thrilling, especially if no one is hurt, which increases their likelihood of participating. Teens are already less capable of evaluating danger than adults, so when friends reward them for taking risks – through likes and comments – it may act as a disinhibitor. These influences affect youngsters on an unconscious level, and the online pressures that are so prevalent today are nearly impossible for them to avoid without parental engagement.

WHAT WE CAN DO TO CONTROL THE SITUATION:

Because they did not grow up with these platforms themselves, parents today struggle to address the risks of social media use with their children.

Even so, parents should address viral trends with their children. Parents should check their children’s social media history and communicate with them about their online activities, as well as block certain social media sites and educate themselves on what may be lurking behind their child’s screen.

When it comes to viral trends, gauge your child’s familiarity with any trend you may have heard about before soliciting their opinion. You might ask why they think others follow the trend and what they believe are some of the risks of doing so. Use this opportunity to explain why a particular trend concerns you.

HOW TO COPE WITH SOCIAL MEDIA USAGE:

It’s important to keep in mind that taking a break is completely appropriate. You are not required to join in every discussion, and disabling your notifications may provide some breathing space. You may set regular reminders to keep track of how long you’ve been using a certain app.

If you’re seeing a lot of unpleasant content in your feed, consider muting or blocking particular accounts or reporting it to the social media company.

If anything you read online makes you feel anxious or frightened, communicate your feelings to someone you trust. Assistance may come from a friend, a family member, a teacher, a therapist, or a helpline. You are not alone, and seeking help is completely OK.

Social media is a natural part of life for young people, and although it may have a number of advantages, it is essential that platforms like TikTok take responsibility for harmful content on their sites.

I welcome the government’s plan to create a regulator to guarantee that social media companies handle cyberbullying and posts encouraging self-harm and suicide.

Additionally, we must ensure that schools teach children what to do if they come across upsetting content online, as well as how to use the internet in a way that benefits their mental health.

To reduce the likelihood of misuse, protections must be implemented.

MY QUESTION TO YOU ALL:

How can social media companies improve their moderation so that children are not left to fend for themselves online? What can they do to improve their in-app security?

How Defamation and Minor Protection Laws Ultimately Shaped the Internet


The Communications Decency Act (CDA) was originally enacted with the intention of shielding minors from indecent and obscene online material. Despite its origins, Section 230 of the Communications Decency Act is now commonly used as a broad legal safeguard that social media platforms invoke to shield themselves from legal liability for content posted on their sites by third parties. Interestingly, the reasoning behind this safeguard arises both from defamation common law and from constitutional free speech law. As the internet has grown, however, this legal safeguard has drawn increasing criticism. But is this legislation actually undesirable? Many would disagree, as Section 230 contains “the 26 words that created the internet.”

 

Origin of the Communications Decency Act

The CDA was introduced and enacted as an attempt to shield minors from obscene or indecent content online. Although parts of the Act were later struck down for First Amendment free speech violations, the Court left Section 230 intact. The creation of Section 230 was influenced by two landmark defamation decisions.

The first case, decided in 1991, involved an internet service that hosted around 150 online forums. A claim was brought against the service provider when a columnist on one of the forums posted a defamatory comment about his competitor. The competitor sued the online distributor for the published defamation. The court categorized the internet service provider as a distributor because it did not review any forum content before it was posted to the site. As a distributor, the provider faced no legal liability, and the case was dismissed.

 

Distributor Liability

Distributor liability refers to the limited legal consequences that a distributor is exposed to for defamation. A common example of a distributor is a bookstore or library. The theory behind distributor liability is that it would be impossible for distributors to moderate and censor every piece of content they disperse, because of the sheer volume and the impossibility of knowing whether something is false.

The second case that influenced the creation of Section 230 was Stratton Oakmont, Inc. v. Prodigy Servs. Co., in which the court used publisher liability theory to find the internet provider liable for third-party defamatory postings published on its site. The court deemed the website a publisher because it moderated and deleted certain posts, even though there were far too many postings each day to review them all.

 

Publisher Liability

Under common law principles, a person who publishes a third party’s defamatory statement bears the same legal responsibility as the creator of that statement. This liability is often referred to as “publisher liability” and is based on the theory that a publisher has the knowledge, opportunity, and ability to exercise control over the publication. For example, a newspaper publisher can face legal consequences for the content printed in its pages. The Stratton Oakmont decision was significant because it meant that if a website attempted to moderate some posts, it would be held liable for all posts.

 

Section 230’s Creation

In response to the Stratton Oakmont case and the ambiguous court decisions regarding internet service providers’ liability, members of Congress introduced an amendment to the CDA that later became Section 230. The amendment was specifically introduced and passed with the goal of encouraging the development of unregulated free speech online by relieving internet providers of liability for third-party content.

 

Text of the Act- Subsection (c)(1) 

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

 Section 230 further provides that…

“No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.”

The language above removes legal consequences for providers arising from content posted on their forums. Courts have interpreted this subsection as providing online platforms broad immunity from suits over third-party content. Because of this, Section 230 has become the principal legal safeguard against lawsuits over a site’s content.

 

The Good

  • Section 230 can be viewed as one of the most important pieces of legislation protecting free speech online. One of its unique aspects is that it essentially extends free speech protection to private, non-governmental companies.
  • Without CDA 230, the internet would be a very different place. This section influenced some of the internet’s most distinctive characteristics: it promotes free speech and offers worldwide connectivity.
  • CDA 230 does not fully eliminate liability or court remedies for victims of online defamation. Rather, it makes only the creator liable for their speech, instead of both the speaker and the publisher.

 

 

The Bad

  •  Because of the legal protections section 230 provides, social media networks have less of an incentive to regulate false or deceptive posts. Deceptive online posts can have an enormous impact on society. False posts have the ability to alter election results, or lead to dangerous misinformation campaigns, like the QAnon conspiracy theory, and the anti-vaccination movement.
  • Section 230 is twenty-five years old, and has not been updated to match the internet’s extensive growth.
  • Big Tech companies have been left largely unregulated regarding their online marketplaces.

 

 The Future of 230

While Section 230 is still successfully invoked by social media platforms, concerns over the archaic legislation have mounted. Just recently, Justice Thomas, who is known for being a quiet Justice, wrote a concurring opinion articulating his view that the government should regulate content providers as common carriers, like utility companies. What implications could that have for the internet? With the growing criticism surrounding Section 230, will Congress finally attempt to fix this legislation? If not, will the Supreme Court be left to tackle the problem itself?

Off Campus Does Still Exist: The Supreme Court Decision That Shaped Students’ Free Speech

We currently live in a world centered around social media. I grew up in a generation where social media apps like Facebook, Snapchat, and Instagram had just become popular. I remember a time when Facebook was limited to college students, and we did not communicate back and forth with pictures that simply disappear. Today, many students across the country use social media sites as a way to express themselves, but when does that expression go too far? Is it legal to bash other students on social media? What about bashing teachers after receiving a bad test score? Does it matter who sees the post or where it was written? What if the post disappears after a few seconds? These are all questions that in the past we had no answer to. Thankfully, in the past few weeks the Supreme Court has guided us on how to answer these important questions. In Mahanoy Area School District v. B.L., the Supreme Court decided how far a student’s right to free speech extends and how much control a school district has in restricting a student’s off-campus speech.

The question presented in Mahanoy Area School District v. B.L. was whether a public school has the authority to discipline a student over something they posted on social media while off campus. The student in this case was a girl named Levy, a sophomore in the Mahanoy Area School District. Levy was hoping to make the varsity cheerleading team that year but, unfortunately, she did not. She was very upset when she found out a freshman got the position instead and decided to express her anger about the decision on social media. Levy was in town with her friend at a local convenience store when she sent “F- School, F- Softball, F- Cheerleading, F- Everything” to her list of friends on Snapchat, in addition to posting it to her Snapchat story. One of those friends screenshotted the post and sent it to the cheerleading coach. The school district investigated the post, and Levy was suspended from cheerleading for one year. Levy and her parents were extremely upset with this decision, and it resulted in a lawsuit that would shape students’ right to free speech for a long time.

In the lawsuit, Levy and her parents claimed that Levy’s cheerleading suspension violated her First Amendment right to free speech. They sued Mahanoy Area School District under 42 U.S.C. § 1983, claiming that (1) her suspension from the team violated the First Amendment; (2) the school and team rules were overbroad and viewpoint discriminatory; and (3) those rules were unconstitutionally vague. The district court granted summary judgment in favor of Levy, stating that the school had violated her First Amendment rights. The U.S. Court of Appeals for the Third Circuit affirmed the district court’s decision. The Mahanoy School District petitioned for a writ of certiorari, and the case was finally heard by the Supreme Court.

Mahanoy School District argued that the Court’s previous ruling in Tinker v. Des Moines Independent Community School District acknowledges that public schools do not possess absolute authority over students, and that students retain First Amendment speech protections at school so long as their expression does not become substantially disruptive to the proper functioning of the school. Mahanoy emphasized that the Court intended Tinker to extend beyond the schoolhouse gates and cover not just on-campus speech but any speech likely to result in on-campus harm. Levy countered that the ruling in Tinker applies only to speech on school grounds.

In an 8-1 decision, the Court ruled against Mahanoy. The Supreme Court held that Mahanoy School District violated Levy’s First Amendment rights by punishing her for posting a vulgar story on her Snapchat while off campus. The Court ruled that the speech did not amount to severe bullying, nor was it substantially disruptive to the school itself. The Court also noted that the post was visible only to her friends list on Snapchat and would disappear within 24 hours. It is not the school’s job to act as a parent, but it is its job to make sure off-campus actions will not result in danger to the school. The Supreme Court also stated that although the student’s expression was unfavorable, failing to protect students’ opinions would limit their ability to think for themselves.

It is remarkably interesting to think about how the minor facts of this case determined the ruling. What if the post had been made on Facebook? One factor that helped the Court reach its decision was that the story was visible only to about 200 of her friends on Snapchat and would disappear within a day. One can assume that if Levy had made this a Facebook status visible to all, with no posting time frame, the Court could have ruled very differently. Where the Snapchat post was uploaded ended up being another major factor in this case. Based on the Tinker ruling, if Levy had posted this on school grounds, Mahanoy School District could have had the authority to discipline her for her post.

Technology is advancing each day, and I am sure that as more social media platforms emerge, the Court will have to set new precedents. I believe the Supreme Court made the right decision in this case. I feel that speech which is detrimental to another individual should be monitored, whether it is off-campus or on-campus speech, regardless of the platform on which it is posted. In Levy’s case, no names were listed; she was expressing frustration over not making a team. I do believe this speech was vulgar, but I do not believe that the school, or any other students, suffered severe detriment from this post.

If you were serving as a Justice on the Supreme Court, would you rule against Mahanoy School District? Do you believe it matters which platform the speech is posted on? What about the location where it was posted?

Advertising in the Cloud

Thanks to social media, advertising to a broad range of people across physical and man-made borders has never been easier. Social media has transformed how people and businesses interact throughout the world. In just a few moments a marketer can create a post advertising their product halfway across the world and almost everywhere in between. Not only that, but Susan, a charming cat lady in west London, can send her friend Linda, who’s visiting her son in Costa Rica, an advertisement she saw for sunglasses she thinks Linda might like. The data collected by social media sites allows marketers to target specific groups of people with their advertisements. For example, if Susan were part of a few Facebook cat groups, she would undoubtedly receive more cat-tower and cat-toy advertisements than the average person.

 

Advertising on social media also allows local stores or venues to advertise to their local communities, targeting groups of people in the area. New jobs are being created in this space: young entrepreneurs are selling their social media skills to help small business owners create an online presence. Social media has also transformed the way stores advertise; no longer must a store rely solely on a posterboard or a scripted advertisement. Individuals with a large enough following on social media are sought out by companies to “review” or test their products for free.

Social media has transformed and expanded the marketplace exponentially. Who we can reach in the world, who we can market to and sell to has expanded beyond physical barriers. With these changes, and newfound capabilities through technology, comes a new legal frontier.

Today, most major brands and companies have their own social media accounts. Building a store’s “online presence” and promoting brand awareness has become a priority for many marketing departments. According to the Internet Advertising Revenue Report: Full Year 2019 Results & Q1 2020 Revenues, “The Interactive Advertising Bureau, an industry trade association, and the research firm eMarketer estimate that U.S. social media advertising revenue was roughly $36 billion in 2019, making up approximately 30% of all digital advertising revenue,” and they expected it to increase to $43 billion in 2020.

The Pew Research Center estimated, “that in 2019, 72% of U.S. adults, or about 184 million U.S. adults, used at least one social media site, based on the results of a series of surveys.”

As companies and people increasingly utilize these tools, what are the legal implications?

This area of law is growing quickly. Advertisers can now reach their consumers directly, in an instant, marketing their products at comparable prices. Federal agencies, including the Federal Trade Commission (FTC), have expanded their enforcement in this area. Some examples of regulations that reach social media advertising:

  • The Securities and Exchange Commission’s Regulation Fair Disclosure addresses “the selective disclosure of information by publicly traded companies and other issuers, and the SEC has clarified that disseminating information through social media outlets like Facebook and Twitter is allowed so long as investors have been alerted about which social media will be used to disseminate such information.”
  • The National Labor Relations Act: “While crafting an effective social media policy regarding who can post for a company or what is acceptable content to post relating to the company is important, companies need to ensure that the policy is not overly broad or can be interpreted as limiting employees’ rights related to protected concerted activity.”
  • The FDA: “Even on social media platforms, businesses running promotions or advertising online have to be careful not to run afoul of FDA disclosure requirements.”

According to the ABA there are two basic principles in advertising law which apply to any media: 

  1. Advertisers must have a reasonable basis to substantiate claims made; and
  2.  If disclosure is required to prevent an ad from being misleading, such disclosure must appear in a clear and conspicuous manner.

Advertisements may be subject to more specific regulations regarding children under the Children’s Online Privacy Protection Act (COPPA). This act gives parents control over the information collected from their children online and requires operators to obtain verifiable parental consent.

The Future legality of our Data 

Data brokers are companies that collect information about you and sell that data to other companies or individuals. This information can include everything from family birthdays, addresses, contacts, jobs, education, hobbies, interests, and life events to health conditions. Currently, data brokers are legal in most states; California and Vermont have enacted laws that require data brokers to register with the state. Who owns your data? Should you? Should the sites on which you create the data? Should companies be free to sell it? Will states take this issue in different directions? If so, what would the implications be for companies and sites trying to keep up?

Facebook’s market capitalization stands at $450 billion.

While there is uncertainty regarding this area of law, it is certain that it is new, expanding and will require much debate. 

According to Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media, “Collecting user data allows operators to offer different advertisements based on its potential relevance to different users.” The data collected by social media companies enables them to build complex strategies and sell advertising “space” targeting specific user groups to companies, organizations, and political campaigns (How Does Facebook Make Money). The capabilities here seem endless: “Social media operators place ad spaces in a marketplace that runs an instantaneous auction with advertisers that can place automated bids.” With the ever-expanding possibilities of social media comes a growing legal frontier.
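The auction mechanism quoted above can be made concrete with a toy sketch. This is a simplification under stated assumptions: the advertiser names are invented, and the second-price payment rule shown is just one common design; real ad exchanges are far more elaborate.

```python
# Toy model of an instantaneous ad auction: advertisers submit automated
# bids for one ad slot shown to a targeted user segment. The highest
# bidder wins but pays the runner-up's bid (a "second-price" design).

def run_auction(bids):
    """bids: dict mapping advertiser name -> bid in dollars.
    Returns (winner, price_paid)."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # winner pays the second-highest bid
    return winner, price

# Hypothetical bids for a slot shown to users in cat-related groups:
bids = {"CatTowerCo": 0.45, "ToyMart": 0.30, "GenericAds": 0.10}
winner, price = run_auction(bids)
print(winner, price)  # CatTowerCo 0.3
```

Second-price designs are popular in ad markets because each bidder’s best strategy is simply to bid what the slot is actually worth to them.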

Removing Content 

 Section 230, a provision of the 1996 Communications Decency Act, states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230). This act shields social media companies from liability for content posted by their users and allows them to remove lawful but objectionable posts.

One legal issue arising here is that advertisements are being taken down by content-monitoring algorithms. According to a Congressional Research Service report, during the COVID-19 pandemic social media companies relied more heavily on automated systems to monitor content. These systems could review large volumes of content at a time; however, they mistakenly removed some content. “Facebook’s automated systems have reportedly removed ads from small businesses, mistakenly identifying them as content that violates its policies and causing the business to lose money during the appeals process” (Facebook’s AI Mistakenly Bans Ads for Struggling Businesses). This has affected a wide range of small businesses, according to Facebook’s community standards transparency enforcement report. According to that same report, “In 2019, Facebook restored 23% of the 76 million appeals it received, and restored an additional 284 million pieces of content without an appeal—about 2% of the content that it took action on for violating its policies.”
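To get a sense of the scale behind those quoted figures, here is a back-of-the-envelope calculation using only the numbers in the report excerpt (the implied total is an estimate derived from the quoted 2%, not a figure reported directly):

```python
# Rough arithmetic from the quoted 2019 figures: Facebook restored 23%
# of 76 million appeals, plus 284 million items without any appeal,
# described as about 2% of all content it took action on.

appeals = 76_000_000
restored_via_appeal = 0.23 * appeals                     # ~17.5 million items
restored_without_appeal = 284_000_000
implied_total_actioned = restored_without_appeal / 0.02  # ~14.2 billion items

print(round(restored_via_appeal / 1e6, 1))    # 17.5 (million)
print(round(implied_total_actioned / 1e9, 1)) # 14.2 (billion)
```

Even with hundreds of millions of items restored, the restorations are a small fraction of the implied billions of enforcement actions, which hints at why mistaken removals are so hard to avoid at this scale.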

 

Cancel Culture: The Biggest Misconception of the 21st Century

Cancel culture refers to the popular practice of withdrawing support for (canceling) public figures and companies after they have done or said something considered objectionable or offensive.

Being held accountable isn’t new.

If a public figure has done or said something offensive to me, why can’t I express my displeasure or discontinue my support for them? Cancel culture is just accountability culture. Words have consequences, and accountability is one of them. This is nothing new, however. We are judged by what we say in our professional and personal lives. For example, whether we like it or not, when we’re on a job hunt we are held accountable for what we say or may have said in the past. According to Sandeep Rathore’s article “90% of Employers Consider an Applicant’s Social Media Activity During Hiring Process” (May 5, 2020), employers believe that social media is important for assessing job candidates. The article explains that these employers search your social media for certain red flags: anything that could be considered hate speech, illegal or illicit content, negative comments about previous jobs or clients, threats to people or past employers, or confidential or sensitive information about people or previous employers. It seems a prospective employer can cancel you for a job over things you may have done or said in the past. Sound familiar?

Have you ever been on a first date? Has your date ever said something so objectionable or offensive that you just canceled them after the first date? I’m sure it has happened to some people. This is just another example of people being held accountable for what they say.

Most public figures who are offended by cancel culture have a feeling of entitlement. They feel they have the right to say anything, even if it’s offensive and hurtful, and bear no accountability. In her article “Cancel Culture Is Not Real, at Least Not in the Way People Believe It Is” (November 19, 2019), Sarah Hagi explained that cancel culture has “turned into a catch-all for when people in power face consequences for their actions or receive any type of criticism, something that they’re not used to.”

What harm is Cancel Culture causing?

Many cancel culture critics say it limits free speech. This I don’t get. The very essence of cancel culture is free speech. Public figures have the right to say what they want, and the public has the right to express disapproval of and displeasure with what they said. Sometimes this comes in the form of boycotting, blogging, social media posting, etc. Public figures who feel they have been canceled might have bruised egos, be embarrassed, or see their careers impacted a little, but that comes as a consequence of free speech. A public figure losing fans, customers, or approval in the public eye is not an infringement on their rights; it’s just the opposite. It’s the public exercising their free speech. They have the right to be a fan of whom they want, a customer of whom they want, and to show approval for whom they want. Lastly, cancel culture can be open dialogue, but rarely do we see the person on the receiving end of a call-out wanting to engage in open dialogue with the people calling them out.

No public figures are actually getting cancelled.

According to AJ Willingham's "It's time to cancel this talk of 'cancel culture'" (March 7, 2021), "people who are allegedly cancelled still prevail in the end." The article gives the example of Dr. Seuss, who was supposedly cancelled over racist depictions in his books; instead, his book sales actually went up. Hip-hop rapper Tory Lanez was supposedly cancelled for allegedly shooting fellow rapper Megan Thee Stallion in the foot. Instead of being cancelled, he dropped an album describing what happened the night of the shooting, and its sales skyrocketed. There are numerous examples showing that people are not really being cancelled but are simply being called out for their objectionable or offensive behavior.

Who are the real victims here?

In the same piece, Willingham states, "there are real problems that exist…. to know the difference look at the people who actually suffer when these cancel culture wars play out. There are men and women who allege wrongdoing at the risk of their own career. Those are the real victims." This is a problem that needs to be identified in the cancel culture debate. Too many people are prioritizing the feelings of the person being called out rather than the person being oppressed. In "You Need to Calm Down: You're Getting Called Out, Not Cancelled" (September 3, 2020), Jacqui Higgins-Dailey explains: "When someone of a marginalized group says they are being harmed, we (the dominant group) say the harm wasn't our intent. But impact and intent are not the same. When a person doesn't consider the impact their beliefs, thoughts, words and actions have on a marginalized group, they continue to perpetuate the silencing of that group. Call-out culture is a tool. Ending call-out culture silences marginalized groups who have been censored far too long. The danger of cancel culture is refusing to take criticism. That is stifling debate. That is digging into a narrow world view."


Blurred Boundaries: The multidimensional convergence of Social Media’s Impact on Privacy, Speech and Employment Law

Are employees and employers operating in cyberspace without realizing how dense the fog is that obscures the boundaries of the employee-employer relationship, because the Supreme Court prefers to decide cases on narrower grounds?

Because of these narrow rulings, examining decisions beyond employment law may yield analysis that can serve as a temporary guidepost for employers and employees while they monitor the developing landscape.

Over a decade ago, a unanimous Supreme Court did just that. In City of Ontario, Cal. v. Quon, the Court avoided addressing the employee-privacy issue by deciding that the employer had acted reasonably, thereby justifying its non-investigatory search of an employer-issued pager in 2002. The employee brought an action for deprivation of civil rights under 42 U.S.C. § 1983, which requires a governmental actor to deprive a person of a constitutional right while acting under color of law. The government, as the employer, had issued a policy covering email and Internet usage, but it was not specific to text messages. However, a supervisor verbally put all employees on notice that texts would be treated as emails, despite the difference in the technology used during transmission. Some of the non-work-related messages sent during working hours were sexual. Although both the District Court and the Court of Appeals for the Ninth Circuit decided that the employee had an expectation of privacy in the text messages, the Supreme Court avoided that issue while finding the search constitutional.

Since most of today's labor force has never carried a pager, the more relevant aspect of the decision is the Court's forecast of "rapid changes in the dynamics of communication and information transmission," changes that may be evident not only "in the technology itself but in what society accepts as proper behavior." How right they were; few could have predicted the explosion of technology. Because emerging technology's role in society was unclear, detailing the constitutionality of other actions could have been risky, and just last month this preference for narrow rulings was reinforced. Yet definitive holdings could have become the foundation upon which employers and employees make educated decisions as technology's role in society becomes more evident. Like an airplane flying out of cloud cover, suddenly the landscape becomes visible.

The Court had the foresight to see that cell phone communications would become so essential to self-expression that employers would need to communicate clear policies. However, the challenge lies in setting clear policies when the boundaries of privacy and protected speech are not clearly defined but obscured in the fog created by the balancing tests established in other speech cases.

One such landmark ruling is the 1969 "school arm-band case" decided during the Vietnam War. In Tinker v. Des Moines Independent Community School Dist., the Court separately analyzed the time, place, and type of behavior/communication. Tinker's substantial-disruption analysis requires that a prohibition on speech be based on something more than the mere desire to avoid discomfort and unpleasantness.

The Court in Young v. American Mini Theatres, Inc. established that speech cannot be suppressed just because society finds its content offensive. In Skinner v. Railway Labor Executives' Ass'n, the Court found that constitutional protections also apply to the government when it performs non-criminal functions. Likewise, the Court ruled in Treasury Employees v. Von Raab not only that the Fourth Amendment applies to the government as an employer but that the privacy issue extends to private-sector employees as well.

More recently, Justice Stevens addressed a public employee's expectation of privacy in his concurring opinion in Quon. He highlighted the significant issue: the modern workplace "lacks tidy distinctions between workplace and private activities." Today's social media, and society's embrace of it, have further blurred those boundaries to the point of non-existence.

Just last month, the Supreme Court had an opportunity to establish bright lines that would have further clarified the legal landscape of social media, and the resulting rule could have applied to the employer-employee relationship. In Mahanoy Area School District v. B.L., the Court held that the school violated the student's free-speech rights because the school's special interests did not overcome the student's right to freedom of expression. The decision was based primarily on the time of the speech, the location from which B.L. made it, its content, and its target audience. The school's asserted interests focused largely on preventing disruption in the facility.

Justice Alito, in his concurrence, explains that it is not prudent to establish a general First Amendment rule for off-premises student speech but rather to examine the analytical framework. While this approach serves the parties in this case and is of some value to other students, it is so narrowly tailored that it may carry little precedential weight in other speech disputes.

Rather than a bright-line rule, the Court is building a boundary fence around the First Amendment one panel at a time. While the legal community functions within this ever-changing reality, society bears the burden until clarity is achieved.

The Court's lesson from Mahanoy might be that regulations on student speech raise serious First Amendment concerns; school officials should proceed cautiously before venturing into this territory. That same caution may be prudent for both private-sector and public-sector employers. Social media's impact is not limited to situations where a person's post affects their employment. One example, among many, is Amy Cooper, the Central Park 911 caller, who was immediately fired over accusations of racism and later charged with filing a false police report. She has since filed a civil suit against her employer.

Unfortunately, the Court's preference for disposing of cases narrowly while avoiding the remaining issues creates tension between competing interpretations until the Court adds the last panel completing the boundary fence around the First Amendment. Until then, we will have to guess how the courts will decide issues within the employment arena, such as the termination of Amy Cooper or the firing of law enforcement officers over social media posts.

Will the Courts find that employees, like students, do not “shed their constitutional rights to freedom of speech or expression” at the workplace gate in the era of social media?

Is Cyberbullying the Newest Form of Police Brutality?

Police departments across the country are calling keyboard warriors into action to help them solve crimes…but at what cost?

In a survey of 539 police departments in the U.S., 76% of departments said that they used their social media accounts to solicit tips on crimes. Departments post “arrested” photos to celebrate arrests, surveillance footage for suspect identification, and some even post themed wanted posters, like the Harford County Sheriff’s Office.

The process for using social media as an investigative tool is dangerously simple, and the consequences can be brutal. A detective thinks posting on social media might help an investigation, so the department posts a video or picture asking for information. The community, armed with full names, addresses, and other personal information, responds with some tips and a lot of judgmental, threatening, and bigoted comments. Most police departments have no policy for removing posts after the information has been gathered or the case is closed, even if the person highlighted is found to be innocent. A majority of people who are arrested are never even convicted of a crime.

Law enforcement’s use of social media in this way threatens the presumption of innocence, creates a culture of public humiliation, and often results in a comment section of bigoted and threatening comments.

On February 26, 2020, the Manhattan Beach Police Department posted a mugshot of Matthew Jacques on its Facebook and Instagram pages for its "Wanted Wednesday" social media series. The pages have about 4,500 and 13,600 followers, respectively, most of them local. The post equated Matthew to a fugitive, and commenters responded publicly with information about where he worked. Matthew tried to call off work out of fear of a citizen's arrest. The fear turned out to be warranted when two strangers came looking for him at his workplace. Matthew eventually lost his job because he was too afraid to return to work.

You may be thinking this is not a big deal. This guy was probably wanted for something really bad and the police needed help. After all, the post said the police had a warrant. Think again.

There was no active warrant for Matthew at the time; his only (already resolved) warrant stemmed from taking too long to schedule remedial classes for a 2017 DUI. Matthew was publicly humiliated by the local police department, which even refused to remove the social media posts after being notified of the truth. The result?

Matthew filed a complaint against the department for defamation (as well as libel per se and false light invasion of privacy). Typically, defamation requires the plaintiff to show:

1) a false statement purporting to be fact;
2) publication or communication of that statement to a third person;
3) fault amounting to at least negligence; and
4) damages, or some harm caused to the person or entity who is the subject of the statement.

Here, the department made a false statement: that there was a warrant. It published that statement on its social media pages, satisfying the second element. It did not check readily available public records showing that Matthew had no warrant, which is at least negligent. Finally, Matthew lived in fear and lost his job. Clearly, he was harmed.

The police department claimed its postings were protected by the California Constitution, governmental immunity, and the First Amendment. Fortunately, the court denied the department's anti-SLAPP motion. More than a year after posting, the department took the post down and settled the lawsuit with Matthew.

Some may think that Matthew's case is an anomaly and that, usually, the negative attention is warranted and perhaps even socially beneficial because it further de-incentivizes criminal activity via humiliation and social stigma. However, most arrests don't result in convictions, so many of the police's cyberbullying victims are likely innocent. Even for those who are guilty, leaving these posts up can raise the barrier to societal re-entry, which can increase recidivism rates: a negative digital record makes finding jobs and housing more difficult. Many commenters assume the highlighted individual's guilt and take to their keyboards to shame them.

Here’s one example of a post and comment section from the Toledo Police Department Facebook page:

Unless departments change their social media use policies, they will continue to face defamation lawsuits and continue to further the degradation of the presumption of innocence.

Police departments should discontinue the use of social media in the humiliating ways described above. At the very least, they should consider using this tactic only for violent, felonious crimes. Some departments have already changed their policies.

The San Francisco Police Department has stopped posting mugshots for criminal suspects on social media. According to Criminal Defense Attorney Mark Reichel, “The decision was made in consultation with the San Francisco Public Defender’s Office who argued that the practice of posting mugshots online had the potential to taint criminal trials and follow accused individuals long after any debt to society is paid.” For a discussion of some of the issues social media presents to maintaining a fair trial, see Social Media, Venue and the Right to a Fair Trial.

Do you think police departments should reconsider their social media policies?

The First Amendment Is Still Great For The United States…Or Is It?

In the traditional sense, of course it is. The idea of free speech should always be upheld, without question. However, in the 21st century, this nearly two-and-a-half-century-old amendment poses serious roadblocks. Here, I will discuss how the First Amendment inhibits our ability to tackle extremism and hatred on social media platforms.

One of the things I will highlight is how other countries are able to enact legislation to deal with the ever-growing hate that festers on social media. They are able to do so because they do not have a "First Amendment." The idea of free speech is simply ingrained into their democracies; they do not need a centuries-old document binding them to it. Here in the U.S., as we all know, Congress can be woefully slow and inefficient, with a particular talent for refusing to update outdated laws.

The First Amendment successfully blocks nearly any government attempt to regulate social media platforms. Any attempt to do so is met with cries, mostly from conservatives, that the government wants to take away free speech, and the courts will not allow the legislation to stand. This in turn means Facebook, Snapchat, Instagram, Reddit, and all the other platforms never have to worry about the white supremacist and other extremist rhetoric that is prevalent on their platforms. Worse, most if not all of their algorithms push those vile posts to hundreds of thousands of people. We are "not allowed" to introduce laws establishing a baseline for regulating platforms in order to crack down on the terrorism that flourishes there.

Just as you are not allowed to scream "fire" in a movie theater, it should not be permissible to post and form groups to spread misinformation, white supremacy, racism, and the like. Those topics do not serve the interests of greater society. Yes, regulation would make it harder for people to easily share their thoughts, however appalling those thoughts may be. But preventing them from spreading online, where millions of people can see them within thirty seconds, is not taking away anyone's free-speech rights. Platforms would not even necessarily have to delete the posts; they could simply change their algorithms to stop promoting misinformation and hate and to promote truth instead, even if the truth is boring. They won't do that, though, because promoting lies is what makes them money, and it's always money over the good of the people. Another reason this doesn't limit free speech is that people can still form in-person groups, talk privately, start an email chain, and so on. The idea behind regulating what can be posted on social media is to make the world a better place for all: to make it harder for racist ideas and terrorism to spread, especially to young, impressionable children and young adults.
This shouldn’t be a political issue; shouldn’t we all want to limit the spread of hate?

It is hard for me to imagine the January 6th insurrection at our Capitol occurring had we had regulations on social media in place. Many of the groups that planned the insurrection had "Stop the Steal" groups and other election-fraud conspiracy pages on Facebook. Imagine if we had a law requiring social media platforms to take down posts and pages spreading false information that could incite violence or threaten the security of the United States. I realize that is broad discretion; the legislation would have to be worded very narrowly, and decisions to remove posts should be made with the highest level of scrutiny. Had such a regulation been in place, these groups would not have been able to reach as wide an audience. I think Ashli Babbitt and Officer Sicknick would still be alive had Facebook been obligated to take those pages and posts down.

Alas, we are unable even to consider legislation addressing this problem because the courts and many members of Congress refuse to acknowledge that we must update our laws and rethink how we read the First Amendment. The founders could never have imagined the world we live in today. Congress and the courts need to stop pretending that a document written over two centuries ago is some untouchable work from God. The founders wrote the First Amendment to ensure no one would be thrown in jail for speaking their mind, so that people holding different political views could not be persecuted, and to give people the ability to express themselves. Enacting legislation to prevent blatant lies, terrorism, racism, and white supremacy from spreading so easily online does not go against the First Amendment. It does not tell people they can't hold those views; it does not throw anyone in prison or hand out fines for those views; and white supremacist or other racist ideas are not "political discourse." Part of the role of government is to protect the people and to do what is right for society as a whole, and I fail to see how telling social media platforms to take down these appalling posts is outweighed by the idea that "nearly everything is free speech, even if it poisons the minds of our youth and perpetuates violence, because that's what the First Amendment says."

Let's now look at the United Kingdom and what it is able to do because it has no law comparable to the First Amendment. In May of 2021, the British Parliament introduced the Online Safety Bill. If passed into law, the bill would place a duty of care on social media firms and websites to ensure they take swift action to remove illegal content, such as hate crimes, harassment, and threats directed at individuals, including abuse that falls below the criminal threshold. As currently written, the bill would also require social media companies to limit the spread of, and remove, terrorist material, content promoting suicide, and child sexual abuse material, and would mandate that companies report postings of that kind to the authorities. Lastly, the Online Safety Bill would require companies to safeguard freedom of expression and reinstate material unfairly removed. This includes forbidding tech firms from discriminating against particular political viewpoints, and the bill reserves the right for Ofcom (the UK's communications regulator) to hold them accountable for the arbitrary removal of journalistic content.

The penalties for failing to comply with the proposed law would be significant. Social media companies that do not comply could be fined up to 10% of their annual global turnover or $25 million, whichever is higher. Further, the bill would allow Ofcom to bring criminal actions against named senior managers whose companies fail to comply with Ofcom's requests for information.

It will be interesting to see how implementation goes if the bill is passed. I believe it is a good steppingstone toward reining in the willful ignorance displayed by these companies. Again, it is important that these bills be carefully scrutinized; otherwise you may end up with a bill like the one proposed in India. While I will not discuss that bill at length in this post, you can read more about it here. In short, India's bill is widely seen as autocratic in nature, giving the government the ability to fine or criminally prosecute social media companies and their employees if they fail to remove content the government does not like (for instance, criticism of its new agriculture regulations).

Bringing this ship back home: can you imagine a bill like Britain's ever passing in the U.S., let alone even being introduced? I certainly can't, because we still insist on worshiping an amendment that is 230 years old. The founders wrote it for the circumstances of their time; they could never have imagined what today would look like. Ultimately, the decision to let us move forward and adopt laws regulating social media companies rests with the Supreme Court. Until the Supreme Court wakes up and allows a modern reading and interpretation of the First Amendment, any law holding companies accountable is doomed to fail. It is illogical to put a piece of paper above the safety and well-being of Americans, yet we consistently do just that. We will keep seeing reports of how red flags were missed and people were murdered as a result, or of how Facebook pages helped spread another "Big Lie" that ends with another Capitol besieged. All because we cannot move away from our past to brighten our future.

What would you do to help curtail this social dilemma?
