Social Media Has Gone Wild

Increasing technological advances and consumer demands have taken shopping to a new level. You can now buy clothes, food, and household items from the comfort of your couch in a few clicks: add to cart, pay, ship, and confirm. You are no longer limited to products sold in nearby stores; shipping makes it possible to obtain items internationally. Even social media platforms offer shopping features, such as Instagram Shopping, Facebook Marketplace, and WhatsApp. Despite its convenience, online shopping has also created an illegal marketplace for wildlife species and products.

Most trafficked animal: the pangolin

Wildlife trafficking is the illegal trading or sale of wildlife species and their products. Elephant ivory, rhinoceros horns, turtle shells, pangolin scales, tiger furs, and shark fins are a few examples of highly sought-after wildlife products. As social media platforms expand, so does wildlife trafficking.

Wildlife Trafficking Exists on Social Media?

Social media platforms make it easier for people to connect with others internationally. These platforms are great for staying in contact with distant aunts and uncles, but they also create another channel for criminals and traffickers to communicate. They provide a way to remain anonymous without having to meet in person, which makes it harder for law enforcement to identify a user's true identity. Even so, can social media platforms be held responsible for making it easier for criminals to commit wildlife trafficking crimes?

Thanks to Section 230 of the Communications Decency Act, the answer is most likely: no.

Section 230 provides broad immunity to websites for content a third-party user posts on the website. Even when a user posts illegal content, the website generally cannot be held liable for it. However, there are certain exceptions where websites have no immunity, including human trafficking and sex trafficking. Although these carve-outs are fairly new, they make clear that there is an interest in protecting people vulnerable to abuse.

So why don't we apply the same logic to animals? Animals are also a vulnerable population. Many species are no match for guns, traps, and human encroachment on their natural habitats. Like children, animals may not have the ability to understand what trafficking is or the physical strength to fight back. Social media platforms like Facebook attempt to combat the online wildlife trade, but their efforts continue to fall short.

How is Social Media Fighting Back?

 

In 2018, the World Wildlife Fund and 21 tech companies created the Coalition to End Wildlife Trafficking Online. The goal was to reduce illegal online trade by 80% by 2020. While it is difficult to measure whether that goal was met, some social media platforms have created new policies to help work toward it.

“We’re delighted to join the coalition to end wildlife trafficking online today. TikTok is a space for creative expression and content promoting wildlife trafficking is strictly prohibited. We look forward to partnering with the coalition and its members as we work together to share intelligence and best-practices to help protect endangered species.”

Luc Adenot, Global Policy Lead, Illegal Activities & Regulated Goods, TikTok

In 2019, Facebook banned the sale of animals on its platform altogether. But this did not stop users. A 2020 report showed a variety of illegal wildlife still for sale on Facebook, a clear sign that the new policies were ineffective. Furthermore, the report stated:

“29% of pages containing illegal wildlife for sale were found through the ‘Related Pages’ feature.”

This suggests that Facebook's algorithm purposefully connects users to pages and similar content based on a user's interests. These algorithms incentivize traffickers to rely on wildlife trafficking content; they will continue to use social media platforms because the platforms do half of the work for them:

      • Facilitating communication
      • Connecting users to potential buyers
      • Connecting users to other sellers
      • Discovering online chat groups
      • Discovering online community pages

This does nothing to reduce wildlife trafficking outreach. Instead, it accelerates the visibility of this type of content to other users. Do Facebook's algorithms go beyond Section 230 immunity?

Under these circumstances, Facebook maintains immunity. In Gonzalez v. Google LLC, the court explained that websites are not liable for user content when they employ content-neutral algorithms, meaning the website did nothing more than program an algorithm to present content similar to a user's interests. The website neither directly encouraged users to publish illegal content nor treated that content differently from other user content.

What about when a website profits from illegal posts? Facebook receives a 5% selling fee on each shipment sold by a user. Because illegal wildlife products are rare, these transactions are highly profitable. A pound of ivory can be worth up to $3,300, so if a user sells five pounds of ivory from endangered elephants on Facebook, the platform would collect $825 from that one transaction. The Facebook Marketplace algorithm works like the interest- and engagement-based algorithm described above: it can push illegal wildlife products to a user who has searched for similar products. If illegal products are constantly pushed and sales are completed, Facebook benefits and profits from those transactions. Does this mean that Section 230 will continue to protect Facebook when it profits from illegal activity?
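
To put rough numbers on that fee claim (assuming the standard 5% Marketplace selling fee and the $3,300-per-pound ivory figure cited above): five pounds of ivory would sell for roughly 5 × $3,300 = $16,500, and a 5% fee on that sale comes to 0.05 × $16,500 = $825.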

Evading Detection

Even with Facebook's policy prohibiting these sales, users get creative to avoid detection. A simple search for "animals for sale" led me to a public Facebook group. Within 30 seconds of scrolling, I found one user selling live coral and another selling an aquarium system with live coral and live fish. The former listing reads: "Leather $50." However, the picture shows live coral in a fish tank; "leather" identifies the type of coral without ever saying it is coral. Even if this were fake coral, a simple Google search shows a piece of fake coral is worth less than $50. If Facebook is failing to prevent users from selling live coral and live fish, it is most likely failing to prevent online wildlife trafficking on its platform.

Another common method of evading detection is posting a vague description or a photo of an item along with the words "pm me" or "dm me," abbreviations for "private message me" and "direct message me." It is a quick way to direct interested users to reach out personally and discuss details in a private chat, outside of the prying public eye. Sometimes a user will offer alternative contact methods, such as a personal phone number or an email address, moving the interaction off the platform or onto a different one.

Because the trade is highly profitable and transactions can be conducted anonymously online, the stakes for sellers are low. Social media platforms are well suited to concealing a user's identity. Users can adopt fake names to stay anonymous behind their computer and phone screens, and there are no real consequences for doing so when the user is unknown. Nor is there any identity verification to discover a user's true identity. Even if a user is banned, the person can create a new account under a different alias. Some users are criminals tied to organized crime syndicates or terrorist groups, and many operate outside of the United States, which makes them difficult to locate. Thus, social media platforms incentivize criminals to hide behind various aliases with little to lose.

Why Are Wildlife Products Popular?

Wildlife products are in high demand because of their perceived benefits and uses to humans.

Do We Go After the Traffickers or the Social Media Platform?

Taking down every single wildlife trafficker and every user who facilitates these transactions would be the perfect solution to end wildlife trafficking. Realistically, it is too difficult to identify these users due to online anonymity and geographical limitations. Social media platforms, on the other hand, continue to tolerate these illegal activities.

Here, it is clear that Facebook is not doing enough to stop wildlife trafficking. With each sale made on Facebook, Facebook receives a percentage. Section 230 should not protect Facebook when it reaps the benefits of illegal transactions. That goes a step too far and should expose Facebook to Section 230 liability.

Should Facebook maintain Section 230 immunity when it receives proceeds from illegal wildlife trafficking transactions? Where do we draw the line?

Update Required: An Analysis of the Conflict Between Copyright Holders and Social Media Users

Opening

For anyone who is as chronically online as yours truly, in one way or another we have seen our favorite social media influencers, artists, commentators, and content creators complain about their problems with the current US Intellectual Property (IP) system. Whether their posts are deleted without explanation or portions of their video files are muted, the factors leading to copyright issues on social media are endless. This, in turn, has a markedly negative impact on free and fair expression on the internet, especially within the context of our contemporary online culture. For better or worse, interaction in society today is intertwined with the services of social media sites. Conflict arises when the interests of copyright holders clash with this reality. Those holders are empowered by byzantine and unrealistic laws that hamper our ability to exist online as freely as we do in real life. While they do have legitimate and fundamental rights that need to be protected, such rights must be balanced against desperately needed reform. People's interaction with society and culture must not be hampered, for that is one of the many foundations of a healthy and thriving society. To understand this, I venture to analyze the current legal infrastructure we find ourselves in.

Current Relevant Law

The current controlling laws for copyright issues on social media are the Copyright Act of 1976 and the Digital Millennium Copyright Act (DMCA). The DMCA is most relevant to our analysis; it gives copyright holders relatively unrestrained power to demand removal of their property from the internet and to punish those using illegal methods to get hold of it. This broad law, of course, impacted social media sites. Title II of the law added 17 U.S. Code § 512 to the Copyright Act of 1976, creating several safe harbor provisions for online service providers (OSPs), such as social media sites, when hosting content posted by third parties. The most relevant of these safe harbors here is 17 U.S. Code § 512(c), which states that an OSP cannot be liable for monetary damages if it meets several requirements and provides copyright holders a quick and easy way to claim their property. The mechanism, known as a "notice and takedown" procedure, varies by social media service and is outlined in each one's terms and conditions of service (YouTube, Twitter, Instagram, TikTok, Facebook/Meta). Regardless, they all have a complaint form or application that follows the rules of the DMCA and will usually strike objectionable social media posts quickly. 17 U.S. Code § 512(g) does give the user some leeway with an appeal process, and § 512(f) imposes liability on those who send unjustifiable takedowns. Nevertheless, a perfect balance of rights is not achieved.

The doctrine of fair use, codified as 17 U.S. Code § 107 via the Copyright Act of 1976, also plays a massive role here. It established a legal pathway for the use of copyrighted material for "purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research" without having to acquire rights to said IP from the owner. This legal safety valve has been a blessing for social media users, especially with recent victories like Hosseinzadeh v. Klein, which protected reaction content from DMCA takedowns. Cases like Lenz v. Universal Music Corp. further established that copyright holders must consider fair use when preparing takedowns. Nevertheless, holders still fail to consider those rights, and sites are quick to react to DMCA complaints. Furthermore, the flawed reporting systems of social media sites invite abuse by unscrupulous actors faking true ownership. On top of that, such legal actions can be psychologically and financially intimidating, especially when facing off with a major IP holder, adding to the unbalanced power dynamic between the holder and the poster.

The Telecommunications Act of 1996, which focuses primarily on cellular and landline carriers, is also particularly relevant to social media companies in this conflict. At the time of its passing, the internet was still in its infancy, so the Act does not reflect the cultural paradigm we now find ourselves in. Specifically, the contentious Section 230 of the Communications Decency Act (Title V of the 1996 Act) works against social media companies in this instance, incorporating a broad and draconian rule on copyright infringement. 47 U.S. Code § 230(e)(2) states in no uncertain terms that "nothing in this section shall be construed to limit or expand any law pertaining to intellectual property." This was interpreted and restated in Perfect 10, Inc. v. CCBill LLC to mean that such companies can be held liable for user copyright infringement. This gap in the protective armor of Section 230 is a great concern to such companies, so they react strongly to these issues.

What is To Be Done?

Arguably, fixing the issues around copyright on social media is far beyond the capacity of current legal mechanisms. With billions of posts made each day across various sites, policing them all is beyond the reasonable reach of copyright holders and the sites themselves. It will take serious reform in the socio-cultural, technological, and legal arenas before a true balance of liberty and justice can be established. Perhaps we can start with an understanding by copyright holders not to overreact when their property is posted online. Popularity is key to success in business, so shouldn't you value the free marketing that comes with your copyrighted property being shared honestly within the cultural sphere of social media? Social media sites can also expand their DMCA case management teams or create tools for users to credit, and even share revenue with, the copyright holder if the user is an influencer or content creator. Finally, congressional action is desperately needed, as we have entered a new era that requires new laws. That being said, achieving a balance between the free exchange of ideas and creations and the rights of copyright holders must be the cornerstone of the government's approach to socio-cultural expression on social media. That is the only way we can progress as an ever more online society.

 

Image: Freepik.com

https://www.freepik.com/free-vector/flat-design-intellectual-property-concept-with-woman-laptop_10491685.htm#query=intellectual%20property&position=2&from_view=keyword (Image by pikisuperstar)

Social Media Addiction

Social media was created as an educational and informational resource for American citizens. Nonetheless, it has become a tool for AI bots and tech companies to predict our next moves by manipulating our minds on social media apps. Section 230 of the Communications Decency Act helped create the modern internet we use today; it was initially part of a 1996 law regulating online pornography. Specifically, Section 230 provides legal immunity from liability for internet services and users for content posted online. Tech companies do not just want to advertise to social media users; they want to predict a user's next move. These manipulative tactics have wreaked havoc on the human psyche and eroded the social aspects of life by keeping people glued to a screen so that big tech companies can profit off of it.

Social media has changed a generation for the worse, causing depression and sometimes suicide, as tech designers manipulate social media users for profit. For decades, social media companies have been shielded from legal consequences for what happens on their platforms. However, recent studies and court cases suggest this may change, allowing big tech social media companies to be held accountable. Frances Haugen, a former Facebook employee who testified before the Senate as a whistleblower, warned the public not to trust Facebook, saying the company knowingly pushed products that harm children and young adults to further its profits, conduct that Section 230 cannot sufficiently shield. Haugen further stated that researchers at Instagram (a Facebook-owned social media app) knew their app was worsening teenagers' body images and mental health, even as the company publicly downplayed these effects.

A California bill, the Social Media Platform Duty to Children Act, aims to make tech firms liable for social media addiction in children. It would allow parents and guardians to sue platforms that they believe addicted children in their care through advertising, push notifications, and design features that promote compulsive use, particularly the continual consumption of harmful content on issues such as eating disorders and suicide. The bill would hold companies accountable regardless of whether they deliberately designed their products to be addictive.

Social media addiction is a psychological, behavioral dependence on social media platforms such as Instagram, Snapchat, Facebook, TikTok, BeReal, etc. Mental disorders are conditions that affect one's thinking, feeling, mood, and behavior. Since the era of social media began, especially from 2010 on, doctors and physicians have had a hard time diagnosing patients with social media addiction and mental disorders separately, since the two seem to go hand in hand. Social media use has been seen to improve mood and boost health promotion through ads, but at the same time it can amplify the negative aspects of activities that youth (ages 13-21) take part in. Generation Z ("Zoomers"), people born from the late 1990s to the early 2010s, face an increased risk of social media addiction, which has been linked to depression.

One study measured the Difficulties in Emotion Regulation Scale ("DEES") and the Experiences in Close Relationships scale ("ECR") to characterize the addictive potential of social media communication applications. The first measure was a six-item short scale drawn from the DEES, a 36-item, six-factor self-report measure of difficulties in emotion regulation, assessing:

  1. awareness of emotional responses,
  2. lack of clarity of emotional reactions,
  3. non-acceptance of emotional responses,
  4. limited access to emotion regulation strategies perceived as applicable,
  5. difficulties controlling impulses when experiencing negative emotions, and
  6. problems engaging in goal-directed behaviors when experiencing negative emotions. 

The second measure was the ECR-SV, a twelve-item test evaluating adult attachment. The scale comprised two six-item subscales, anxiety and avoidance, with each item rated on a 7-point scale ranging from 1 = strongly disagree to 7 = strongly agree. Depression, anxiety, and mania were measured against DSM-5 criteria: scoring at least five of the nine items on the depression scale during the same two-week period classified depression, scoring at least three of the six symptoms on the anxiety scale classified anxiety, and scoring at least three of the seven traits on the mania scale classified mania.

The objectives of these studies were to show the high prevalence of social media addiction among college students and to confirm statistically, by reviewing previous studies, that there is a positive relationship between social media addiction and mental disorders.

The study identifies four leading causes of social media abuse: 1) The increase in depression symptoms has occurred in conjunction with the rise of smartphones since 2007. 2) Young people, especially Generation Z, spend less time connecting with friends and more time connecting with digital content; Generation Z is known for quickly losing focus at work or study because its members spend so much time watching other people's lives in an age of information explosion. 3) Depression increases with low self-esteem, when users feel worse about themselves on social media compared to those who appear more beautiful, more famous, and wealthier. Consequently, social media users may become less emotionally satisfied, leaving them feeling socially isolated and depressed. 4) Studying pressure and an increasing homework load may cause mental problems for students, further promoting the pairing of social media addiction and psychiatric disorders.

The popularity of the internet, smartphones, and social networking sites is unequivocally a part of modern life. Nevertheless, it has contributed to the rise of depressive and suicidal symptoms in young people. Shareholders of social media apps should be more aware of the effect their advertising has on users. Congress should regulate social media as a matter of public policy to prevent harms such as depression and suicide among young people. The best the American people can do is shine a light on the companies that exploit and abuse their users, both to the public and to Congress, to hold them accountable as Haugen did. There is hope for the future, as the number of bills addressing social media and its mental health effects has increased since 2020.

Shadow Banning Does(n’t) Exist

Shadow Banning Doesn’t Exist

#mushroom

Recent posts from #mushroom are currently hidden because the community has reported some content that may not meet Instagram’s community guidelines.

 

Dear Instagram, get your mind outta the gutter! Mushrooms are probably one of the most searched hashtags in my Instagram history. It all started when I found my first batch of wild chicken-of-the-woods mushrooms. I wanted to learn more about mushroom foraging, so I consulted Instagram. I knew there were tons of foragers sharing photos, videos, and tips about finding different species. But imagine not being able to find content related to your hobby?

What if you loved eggplant varieties? But nothing came up in the search bar? Perhaps you’re an heirloom eggplant farmer trying to sell your product on social media? Yet you’ve only gotten two likes—even though you added #eggplantman to your post. Shadow banned? I think yes.

The deep void of shadow banning is a social media user's worst nightmare, especially for influencers whose careers depend on engagement. Shadow banning comes with many uncertainties, but there are a few factors most users agree on:

      1. Certain posts and videos remain hidden from other users
      2. It hurts user engagement
      3. It DOES exist

#Shadowbanning

Shadow banning is the act of restricting or censoring a user's content on social media without notifying the user. It usually occurs when a user posts content deemed inappropriate or in violation of the platform's guidelines. If a user is shadow banned, the user's content is only visible to that user and their followers.

Influencers, artists, creators, and business owners are the most vulnerable victims of the shadow banning void. They depend the most on user engagement, growth, and reaching new audiences. As much as it hurts them, it also hurts other users searching for that specific content. There is no clear way of telling whether you have been shadow banned. You don't get a notice. You can't file an appeal to fix your lack of engagement. You will simply see a decline in engagement, because no one can see your content in their feeds.

According to the head of Instagram, Adam Mosseri, "shadow banning is not a thing." In an interview, Meta CEO Mark Zuckerberg stated that Facebook has "no policy that is shadow banning." Even a Twitter blog post stated, "People are asking us if we shadow ban. We do not." There is no official way of knowing if it exists, but there is evidence it does take place on various social media platforms.

#Shadowbanningisacoverup?

Pole dancing on social media probably would have been deemed inappropriate 20 years ago. That isn't the case today. Pole dancing is a growing sport industry, and the stigma associating strippers with pole dancing is shifting with its increasing popularity and trendy nature. However, social media standards may still be stuck in the early 2000s.

In 2019, user posts with hashtags including #poledancing, #polesportorg, and #poledancenation were hidden from Instagram’s Explore page. This affected many users who connect and share new pole dancing techniques with each other. It also had a huge impact on businesses who rely on the pole community to promote their products and services: pole equipment, pole clothing, pole studios, pole sports competitions, pole photographers, and more.

Due to a drastic decrease in user engagement, a petition directing Instagram to stop pole dancing censorship was circulated worldwide. Is pole dancing so controversial it can’t be shared on social media? I think not. There is so much to learn from sharing information virtually, and Section 230 of the Communications Decency Act supports this.

Section 230 was passed in 1996, and it provides limited federal immunity to websites from lawsuits if a user posts something illegal. This means that if User X decides to post illegal content on Twitter, the Twitter platform could not be sued because of User X’s post. Section 230 does not stop the user who posted such content from being sued, so User X can still be held accountable.

It is clear that Section 230 embraces the importance of sharing knowledge. Section 230(a)(1) tells us this. So why would Instagram want to shadow ban pole dancers who are simply sharing new tricks and techniques?

The short answer is: It’s inappropriate.

But users want to know: what makes it inappropriate?

Is it the pole? A metal pole itself does not seem so.

Is it the person on the pole? Would visibility change depending on gender?

Is it the tight clothing? Well, I don't see how it is any different from my 17 bikini photos on my personal profile.

Section 230 also contains a carve-out for sex-related crimes, such as sex trafficking. But this is where the line between appropriate and inappropriate content is drawn: sex trafficking is illegal, while pole dancing is not. Instagram's community guidelines support this; under the guidelines, sharing pole dancing content is not a violation. Shadow banning clearly seeks to suppress certain content, and in this case, the pole dancing community was a target.

Cultural expression also runs up against shadow banning. In 2020, Instagram shadow banned Caribbean Carnival content. The Caribbean Carnival is an elaborate celebration commemorating the abolition of slavery in the West Indies, showcasing ensembles that represent different cultures and countries.

User posts with hashtags including #stluciacarnival, #fuzionmas, and #trinidadcarnival2020 could not be found or viewed by other users. Some people viewed this as suppressing culture and hurting tourism. Additionally, Facebook and Instagram shadow banned #sikh for almost three months. After extensive user feedback, the hashtag was restored, but Instagram never explained how or why it had been blocked.

In March 2020, The Intercept obtained internal TikTok documents alluding to shadow banning methods. Documents revealed moderators were to suppress content depicting users with “‘abnormal body shape,’ ‘ugly facial looks,’ dwarfism, and ‘obvious beer belly,’ ‘too many wrinkles,’ ‘eye disorders[.]'” While this is a short excerpt of the longer list, this shows how shadow banning may not be a coincidence at all.

Does shadow banning exist? What are the pros and cons of shadow banning?

 

 

 

Corporate Use of Social Media: A Fine Line Between What Could-, Would-, and Should-be Posted

 

Introduction

In recent years, social media has taken hold of nearly every aspect of human interaction and turned the way we communicate on its head. Social media apps' ability to disseminate information instantaneously has affected the way many sectors of business operate. Whether entertainment, social, environmental, educational, or financial, social media has bewildered the legal departments of in-house general counsel across all industries. Additionally, the generational gap between the person actually posting for the account and their supervisor has only exacerbated the potential for communications to miss their mark and cause controversy or adverse effects.

These days, most companies have social media accounts, but not all accounts are created equal, and they certainly are not all monitored the same. In most cases, these accounts are not regulated at all except by their own internal managers and #CancelCulture. Depending on the product or company, social media managers have done their best to stay abreast of changes in popular hashtags, trends and challenges, and the overall shift from a corporate tone of voice to one of relatability (more Gen-Z-esque, if you will). But with this shift, the rights and implications of corporate speech through social media have been put to the test.

Changes in Corporate Speech on Social Media 

In the last 20 years, corporate use of social media has become a battle for relevance. With the decline of print media, social media and its apps have emerged as a marketing necessity. Early social media use was predominantly geared towards social purposes. If we look at the origins of Facebook, Myspace, and Twitter, it is clear that these apps were intended for superficial uses, not corporate communications, but this all changed with the introduction of LinkedIn, which sparked a dynamic shift towards business and professional use of social media.

Today social media is used to report on almost every aspect of our lives, from disaster preparation and emergency responses to political updates, dating and relationship finders, and customer service tasks; social media truly covers it all. It is also increasingly common nowadays to get backlash for not speaking out or not using social media after a major social or political movement occurs. Social media is also increasingly being used for research with geolocation technology, for organizing demonstrations and political unrest, and, in the business context, for development in sales, marketing, networking, and hiring or recruiting practices.

These changes are starting to drive significant conversations in the business world about company speech, regulated disclosures, and First Amendment rights. For example, so far there is minimal research on how financial firms disseminate communications to investor news outlets via social media and how those communications are received. And while some may view social media as an opportunity to further this kind of investor outreach, others have expressed concerns that disseminating communications in this manner could result in a company losing control over those communications entirely.

The viral nature of social media allows not just investors to connect more easily with companies, but also individuals who may not directly follow that company and who would therefore be far less likely to be informed about the company's prior financial communications and the significance of any changes. This creates risk for a company's investor communications via social media, because they can spread to uninformed individuals, which could in turn produce adverse consequences for the company when it comes to concerns about reliance and misleading information.

Corporate Use, Regulations, and Topics of Interest on Social Media 

With the rise of social media coverage of various societal issues, these apps have become a platform for news coverage, political movements, and social concerns, and, for some generations, a platform that replaces traditional news media almost entirely. Specifically, with the growing interest in ESG-related matters and sustainable business practices, social media serves as a great tool for communicating information. For example, the Spanish company Acciona was recently reported by the latest Epsilon Icarus Analytics Panel on ESG Sustainability as having Spain's highest-resonating ESG content across its social networks. Acciona demonstrates a company's potential to lead and fundamentally shape digital communications on ESG-related topics. Its developing content strategy focuses on brand values, and specifically, for Acciona, strong climate-change-based values, female leadership, diversity, and other cultural and societal changes, which demonstrates this new age of social media as a business marketing necessity.

Consequently, this shift in the usage of social media and the way we treat corporate speech on these platforms has left room for emerging regulation. Commercial or corporate speech is generally permissible under constitutional free speech rights, so long as the corporation is not making false or misleading statements. Section 230 provides broad protection to internet content providers from accountability based on information disseminated on their platforms; in most contexts, social media platforms will not be held accountable for the consequences that result (i.e., a bad user's speech). For example, a recent lawsuit against TikTok and its parent company was dismissed in the defendants' favor after a young girl died participating in a trending challenge that went awry, because under § 230 the platform was immune from liability.

In essence, when it comes to ESG-related topics, the way a company handles its social media and the actual posts it puts out can greatly affect the company's success and reputation, as ESG-focused perspectives often touch many aspects of the business's operation. The type of communication, and the coverage of various issues, can impact a company's performance over both the short and long term, and that impact can drive change in corporate environmental practices, governance, labor and employment standards, human resource management, and more.

With ESG trending, investors, shareholders, and regulators now face serious risk management concerns. Companies must now address news concerning their social responsibilities more publicly and more frequently as ESG concerns continue to rise. The SEC mandates that public companies report their activities in annual 10-K filings and disclosures, including Consumer Service Reports, along with ESG disclosures thanks to a recent rule promulgation. These disclosures are designed to hold companies accountable to their respective stakeholders' expectations and to improve environmental, social, and economic performance.

Conclusion

In conclusion, social media platforms have created an entirely new mechanism for corporate speech to be implicated. Companies should proceed cautiously when covering social, political, environmental, and related concerns, and should weigh their methods of information dissemination as well as the possible effects their posts may have on business performance and reputation overall.

States are ready to challenge Section 230

On January 8, 2021, Twitter permanently suspended @realDonaldTrump. The decision followed an initial warning to the then-president and conformed to Twitter's published standards as defined in its public interest framework. The day before, Meta (then Facebook) restricted President Trump's ability to post content on Facebook or Instagram. Both companies cited President Trump's posts praising those who violently stormed the U.S. Capitol on January 6, 2021 in support of their decisions.

Members of the Texas and Florida legislatures, together with their governors, were seemingly enraged that these sites would silence President Trump's voice. In response, each state quickly passed a law aiming to limit the power of social media sites. Although substantively different, the Texas and Florida laws are theoretically the same: both seek to punish social media sites for regulating forms of conservative content that, the states argue, liberal-leaning platforms silence, regardless of whether the posted content violates the site's published standards.

Shortly after each law's adoption, two tech advocacy groups, NetChoice and the Computer and Communication Industry Association, filed suits in federal district courts challenging the laws as violative of the First Amendment. Each case has made its way through the federal courts on procedural grounds: the Eleventh Circuit upheld a lower court preliminary injunction prohibiting Florida from enforcing its statute until the case is decided on the merits, while the Fifth Circuit overruled a lower court preliminary injunction against the Texas law. The Fifth Circuit ruling was taken to the Supreme Court of the United States, which, by a vote of 5-4, reinstated the injunction. The Supreme Court's decision made clear that these cases are headed to the Supreme Court on the merits.

Can Social Media Be Regulated?

In 1996, Congress passed what is known as Section 230 of the Communications Decency Act (CDA), which provides immunity to website publishers for third-party content posted on their websites. The CDA holds that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." The Act was created in a different time and era, one that could hardly envision how fast the internet would grow in the coming years. In 1996, social media consisted of a little-known website called Bolt; the idea of a global World Wide Web was still very much in its infancy, the internet was still largely based on dial-up technology, and the government was looking to expand the internet's reach. This Act laid the foundation for the explosion of social media, e-commerce, and a society that has grown tethered to the internet.

The advent of smartphones in the late 2000s, coupled with the CDA, set the stage for a society that is constantly tethered to the internet and has allowed companies like Facebook, Twitter, YouTube, and Amazon to carve out niches within our now globally integrated society. In the second quarter of 2021 alone, Facebook averaged over 1.9 billion daily users.

Recent studies conducted by the Pew Research Center show that "[m]ore than eight in ten Americans get news from digital services."

Large majority of Americans get news on digital devices

While older members of society still rely on news media online, the younger generation, namely those 18-29 years of age, receive their news via social media.

Online, most turn to news websites except for the youngest, who are more likely to use social media

The role Social Media plays in the lives of the younger generation needs to be recognized. Social Media has grown at a far greater rate than anyone could have imagined. Currently, Social Media operates under its own modus operandi, completely free of government interference, due to its classification as a private entity and its protection under Section 230.

Throughout the 20th century when Television News Media dominated the scenes, laws were put into effect to ensure that television and radio broadcasters would be monitored by both the courts and government regulatory commissions. For example, “[t]o maintain a license, stations are required to meet a number of criteria. The equal-time rule, for instance, states that registered candidates running for office must be given equal opportunities for airtime and advertisements at non-cable television and radio stations beginning forty-five days before a primary election and sixty days before a general election.”

These laws and regulations were put in place to ensure that the public interest in broadcasting was protected. To give substance to the public interest standard, Congress has from time to time enacted requirements for what constitutes the public interest in broadcasting. But Congress also gave the FCC broad discretion to formulate and revise the meaning of broadcasters' public interest obligations as circumstances changed.

The Federal Communications Commission's (FCC) authority is constrained by the First Amendment; the agency acts as an intermediary that can intervene to correct perceived inadequacies in overall industry performance, but it cannot trample on the broad editorial discretion of licensees. The Supreme Court has continuously upheld the public trustee model of broadcast regulation as constitutional. Criticisms of regulating social media center on the notion that these are purely private entities that do not fall under the purview of the government, and yet these same issues presented themselves in the precedent-setting case of Red Lion Broadcasting Co. v. Federal Communications Commission (1969). In that case, the court held that the "rights of the listeners to information should prevail over those of the broadcasters." The Court's holding centered on the public's right to information over the right of a broadcast company to choose what it will share. This is exactly what is at issue today when we look at companies such as Facebook, Twitter, and Snapchat censoring political figures who post views that they feel may incite anger or violence.

In essence, what these organizations are doing is keeping information and views from the attention of the present-day viewer. The vessel for the information has changed, it is no longer found in television or radio but primarily through social media. Currently, television and broadcast media are restricted by Section 315(a) of the Communications Act and Section 73.1941 of the Commission’s rules which “require that if a station allows a legally qualified candidate for any public office to use its facilities (i.e., make a positive identifiable appearance on the air for at least four seconds), it must give equal opportunities to all other candidates for that office to also use the station.” This is a restriction that is nowhere to be found for Social Media organizations. 

This is not meant to argue for one side or the other but merely to point out that there is a political discourse being stifled by these social media entities, which have shrouded themselves in the veil of a private entity. However, what these companies fail to mention is just how political they truly are. For instance, Facebook proclaims itself to be an unbiased source for all parties, yet it fails to mention that it currently employs one of the largest lobbyist groups in Washington, D.C. Four of Facebook's lobbyists have worked directly in the office of House Speaker Pelosi. Pelosi herself has a very direct connection to Facebook: she and her husband own between $550,000 and over $1,000,000 in Facebook stock. None of this is illegal; however, it raises the question of just how unbiased Facebook really is.

If the largest source of news for the coming generation is not television, radio, or news publications, but rather Social Media platforms such as Facebook, then how much power should they be allowed to wield without some form of regulation? The question being presented here is not a new one, but rather the same question asked in 1969, simply phrased differently. How much information is a citizen entitled to, and at what point does access to that information outweigh the rights of the organization to exercise its editorial discretion? I believe that the answer to that question is the same now as it was in 1969, and that the government ought to take steps similar to those taken with radio and television. What this looks like is ensuring that, through Social Media, the public has access to a significant amount of information on public issues so that its members can make rational political decisions. At the end of the day, what is at stake is the public's ability to make rational political decisions.

These large Social Media conglomerates such as Facebook and Twitter have long outgrown their place as private entities; they have grown into a public medium that has tethered itself to the realities of billions of people. Certain aspects of them need to be regulated, mainly those that interfere with the public interest, and there are ways to do this without infringing on Americans' overall First Amendment right to free speech. Where, however, Social Media blends being a private forum for all people to express their ideas under firmly stated "terms and conditions" with being an entity that strays into the political field, whether by censoring heads of state or by hiring over $50,000,000 worth of lobbyists in Washington, D.C., there need to be regulations that draw a line ensuring the public still maintains the ability to make rational political decisions, decisions that are not influenced by any one organization. The time to address this issue is now, while there is still a middle ground in how people receive their news and formulate opinions.

Say Bye to Health Misinformation on Social Media?

A study from the Center for Countering Digital Hate found that social media platforms failed to act on 95% of coronavirus-related disinformation reported to them.

      Over the past few weeks, social media companies have been in the hot seat regarding their lack of action to limit the fake news and misinformation on their platforms, especially information regarding COVID-19 and the vaccine. Even President Biden remarked on social media platforms, stating that Facebook and other companies were "killing people" by serving as platforms for misinformation about the COVID-19 vaccine. Later, Biden clarified his earlier statements by saying that he wasn't accusing Facebook of killing people, but that he wanted the companies to do something about the misinformation, the outrageous information about the vaccine.

A few weeks later, Senator Amy Klobuchar introduced the Health Misinformation Act, which would ultimately create an exemption to Section 230 of the Communications Decency Act. Section 230 has always shielded social media companies from liability for almost any of the content posted on their platforms. Under the Health Misinformation Act, however, social media companies would be liable for the spread of health-related misinformation. Further, the bill would only apply to social media platforms that use an algorithm that promotes health misinformation (which most social media platforms do), and only to health misinformation spread during a health crisis. Additionally, if the bill were to pass, the Department of Health and Human Services would be authorized to define "health misinformation." Finally, the proposed bill would only apply during a national public health crisis, such as COVID-19; the exemption would not apply during "normal" times, when there is no public health crisis.

        Senator Amy Klobuchar and some of her peers believe the time has come to create an exemption to Section 230 because "for far too long, online platforms have not done enough to protect the health of Americans." Further, Klobuchar believes that the misinformation spread about COVID-19 and the vaccine proves that social media companies have no desire to do anything about it, both because misinformation drives more activity on their platforms and because Section 230 shields the companies from liability for it. Instead, these social media companies use the misinformation to their advantage, building features that incentivize users to share it and chase likes, comments, and other engagement, a system that rewards engagement rather than accuracy. Furthermore, a study conducted by MIT found that false news stories are 70% more likely to be retweeted than true stories. Social media platforms therefore have no reason to limit this information, given the activity they receive from the misinformation, especially when that misinformation benefits the platform.

What are the concerns with the Health Misinformation Act?

How will the Department of Health and Human Services define "health misinformation?" It seems very difficult to craft a definition that the majority will agree upon. I also believe there will be a huge amount of criticism from the social media companies about this act. For instance, I can imagine the companies asking how they are supposed to implement the definition of "health misinformation" in their algorithms. What if the information about the health crisis changes? Will the social media companies have to constantly change their algorithms with every change in health information? For example, at the beginning of the pandemic, guidance on masks changed from masks not being necessary to masking being crucial to ensure the health and safety of yourself and others.

Will the Bill Pass?

With that being said, I do like the concept of the Health Misinformation Act, because it seeks to hold social media companies accountable for their inaction while trying to protect the public so that people receive accurate health-related information. However, I do not believe this bill will pass, for a few reasons. First, it may violate the First Amendment's protection of freedom of speech; while it isn't right, it is not illegal for individuals to post their opinions or misinformation on social media. Second, as stated earlier, how would social media companies implement these new policies and keep up with changes in "health misinformation," and how would federal agencies regulate the companies?

What should be done?

“These are some of the biggest, richest companies in the world and they must do more to prevent the spread of deadly vaccine misinformation.”

     I believe we need to create more regulations and more exemptions to Section 230, especially because Section 230 was created in 1996, and our world looks and operates very differently than it did then. Social media is an essential part of our business and cultural world. Overall, I believe there need to be more regulations put into place to oversee social media companies. We need transparency from these companies, so the world can understand what is going on behind their closed doors. Transparency will allow agencies to fully understand the algorithms and craft proper regulations.

To conclude, social media companies function as a monopoly: even though there are many of them, only a handful hold most of the popularity and power. With that being said, all major businesses and monopolies must follow strict regulations from the government, yet social media companies seem exempt from these types of strict regulations.

While there has been a push over the past few years to repeal or make changes to Section 230, do you think this bill can pass? If not, what can be done to create more regulations?

Free speech, should it be so free?

In the United States everybody is entitled to free speech; however, we must not forget that the First Amendment of the Constitution only protects individuals from federal and state action. With that being said, free speech is not protected from censorship by private entities, like social media platforms. In addition, Section 230 of the Communications Decency Act (CDA) provides technology companies like Twitter, YouTube, Facebook, Snapchat, Instagram, and other social media giants immunity from liability arising from the content posted on their websites. The question becomes whether it is fair for an individual who desires to freely express himself or herself to be banned from certain social media websites for doing so. What is the public policy behind this? What standards do these social media companies employ when determining who should or should not be banned? On the other hand, are social media platforms being used as tools or weapons when it comes to politics? Do they play a role in how the public votes? Are users truly seeing what they think they have chosen to see, or is the content being displayed targeted to them in ways that may ultimately create biases?

As we saw earlier this year, former President Trump was banned from several social media platforms as a result of the January 6, 2021 assault on the U.S. Capitol by Trump supporters. It is no secret that our former president is not shy about his comments on a variety of topics. Some audiences view him as outspoken, direct, or perhaps provocative. When Twitter announced its permanent suspension of former President Trump's account, its rationale was to prevent further incitement of violence. After he falsely claimed that the 2020 election had been stolen from him, thousands of Trump supporters gathered in Washington, D.C. on January 5 and January 6, which ultimately led to violence and chaos. As a public figure and a politician, our former president should have known that his actions and viewpoints on social media were likely to have a significant impact on the public. Public figures and politicians should be held to a higher standard, as they represent the citizens who voted for them; as such, they are influential. Technology companies like Twitter saw the former president's tweets as potential threats to the public as well as a violation of their company policies; hence, banning his account was justified. The ban was an instance of private action as opposed to government action. In other words, former President Trump's First Amendment rights were not violated.


First, let us discuss the fairness aspect of censorship. Yes, individuals possess the right to free speech; however, if the public's safety is at stake, action is required to avoid chaos. For example, you cannot scream "fire" out of nowhere in a dark movie theater, as it would cause panic and unnecessary disorder. There are rules you must comply with in order to use the facility, and these rules are in place to protect the general welfare. As a user, if you don't like the rules set forth by that facility, you can simply avoid using it. It does not necessarily mean that your idea or speech is strictly prohibited, just not on that particular facility. Similarly, if users of social media platforms fail to follow company policies, the companies reserve the right to ban them. Public policy arguably outweighs individual freedom here. As for the standards employed by these technology companies, there is no bright line. As I previously mentioned, Section 230 grants them immunity from liability. That being said, the content is unregulated, and these social media giants are therefore free to implement and execute policies as they see fit.


In terms of politics, I believe social media platforms do play a role in shaping their users' perspectives in some way. This is because the content being displayed is targeted, if not tailored, as platforms collect data based on the user's preferences and past habits. The activities each user engages in are monitored, measured, and analyzed. In a sense, these platforms can be used as a weapon, as they may manipulate users without the users even knowing. A lot of the time we are not even aware that the videos or pictures we see online are being presented to us because of past content we have seen or selected. In other words, these social media companies may be censoring what they don't want you to see, or what they think you don't want to see. For example, some technology companies are pro-vaccination. They are more likely to post factual information about COVID-19 vaccines or publish posts that encourage their users to get vaccinated. We think we have control over what we see or watch, but do we really?

There are advantages and disadvantages to censorship. Censorship can reduce the negative impact of hate speech, especially on the internet; by limiting certain kinds of speech, we create more opportunities for equality. In addition, censorship can curb the spread of racism: posts and videos containing racist comments can be blocked by social media companies when deemed necessary. Censorship can also protect minors from seeing harmful content; because children are easily manipulated, blocking such content helps promote their safety. Moreover, censorship can be a vehicle to stop false information, and during unprecedented times like this pandemic, misinformation can be fatal. On the other hand, censorship may harm the public by entrenching a specific narrative in society, which can create bias. For example, many blamed Facebook for the outcome of an election, arguing that such influence is detrimental to our democracy.

Overall, I believe that some form of social media censorship is necessary. The cyber-world is intertwined with the real world, and we can’t let people do or say whatever they want when it may have dramatically detrimental effects. The issue is: how do you keep the best of both worlds?

 

Private or not private, that is the question.

Section 230 of the Communications Decency Act (CDA) protects private online companies from liability for content posted by others. This immunity also grants internet service providers the freedom to regulate what is posted on their sites. What has faced much criticism of late, however, is social media’s immense power to silence any voices the platform CEOs disagree with.

Section 230(c)(2), known as the Good Samaritan clause, states that no provider shall be held liable for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

When considered in the context of a 1996 understanding of internet influence (the year the CDA was enacted), this law might seem perfectly reasonable. Fast forward 25 years, though, and given how massively influential social media has become over society and the spread of political information, there is now a strong demand for a repeal, or at the very least a review, of Section 230.

The Good Samaritan clause is what shields Big Tech from legal complaint. The law does not define “obscene,” “lewd,” “lascivious,” “filthy,” “harassing,” or “excessively violent,” and “otherwise objectionable” leaves providers’ room for discretion all the more open-ended. The issue at the heart of many criticisms of Big Tech is that the censorship companies such as Facebook, Twitter, and YouTube (owned by Google) impose on particular users is not fairly exercised; many conservatives feel they do not receive equal treatment under these policies.

Ultimately, there is little argument over the fact that social media platforms like Facebook and Twitter are private companies, which curbs any claims of First Amendment violations under the law. The First Amendment of the U.S. Constitution only prevents the government from interfering with an individual’s right to free speech. There is no constitutional provision dictating that any private business owes the same.

Former President Trump’s recent class action lawsuits against Facebook, Twitter, Google, and each of their CEOs, however, challenge the characterization of these entities as private.

In response to the January 6th Capitol takeover by Trump supporters, Facebook and Twitter suspended the accounts of the then-sitting president of the United States, President Trump.

The justification was that President Trump violated their rules by inciting violence and encouraging an insurrection following the disputed election results of 2020. In the midst of the unrest, Twitter, Facebook, and Google also removed a video posted by Trump in which he called for peace and urged protestors to go home. The explanation given was that “on balance we believe it contributes to, rather than diminishes the risk of ongoing violence,” because the video also doubled down on the belief that the election was stolen.

Following long-standing contentions with Big Tech throughout his presidency, Trump’s main argument in the lawsuits is that the tech giants Facebook, Twitter, and Google should no longer be considered private companies because their respective CEOs, Mark Zuckerberg, Jack Dorsey, and Sundar Pichai, actively coordinate with the government to censor politically oppositional posts.

Those who support Trump probably all wish to believe this case has legal standing.

Anyone else who shares concerns about the almost omnipotent power of Silicon Valley may admit that Trump makes a valid point. But legally, deep down, it might feel like a stretch. Could it be? Should it be? Maybe. But will Trump see the outcome he is looking for? The initial honest answer was “probably not.”

However, on July 15, 2021, White House press secretary Jen Psaki informed the public that the Biden administration is in regular contact with Facebook to flag “problematic posts” regarding “disinformation” about COVID-19 vaccinations.

Wait… what?! The White House is in communication with social media platforms to determine what the public is and isn’t allowed to hear regarding vaccine information? Or “disinformation,” as Psaki called it.

Conservative legal heads went into a spin. Is this allowed? Or does this strengthen Trump’s claim that social media platforms are working as third-party state actors?

If it is determined that social media is in fact acting as a strong-arm agent for the government in deciding what information the public is allowed to access, then these platforms too should be subject to the First Amendment. And if social media is subject to the First Amendment, then all information, including information that questions, or even completely disagrees with, the left-leaning policies of the current White House administration, is protected by the U.S. Constitution.

Referring back to the language of the law, Section 230(c)(2) requires that actions to restrict access to information be taken in good faith. Taking an objective look at some of the posts that are removed from Facebook, Twitter, and YouTube, along with many of the posts that are not removed, raises the question of how much “good faith” is truly exercised. When a former president of the United States is still blocked from social media, but the Iranian leader Ali Khamenei is allowed to post what appears to be nothing short of a threat to that same president’s life, it can certainly make you wonder. Or when insistence on unquestioned mass emergency vaccinations, now coupled with continued mask wearing, is rammed down our throats, but a video showing one of the creators of the mRNA vaccine expressing his doubts about the vaccine’s safety for the young is removed from YouTube, it ought to make everyone question whose side Big Tech is really on. Are they really in the business of allowing populations to make informed decisions of their own, gaining information from a public forum of ideas? Or are they working on behalf of government actors to push an agenda?

One way or another, the courts will decide, but Trump’s class action lawsuits could mark a pivotal moment in the future of Big Tech’s global power.