Social Media Has Gone Wild

Increasing technological advances and consumer demands have taken shopping to a new level. You can now buy clothes, food, and household items from the comfort of your couch in a few clicks: add to cart, pay, ship, and confirm. You are no longer limited to products sold in nearby stores; shipping makes it possible to obtain items internationally. Even social media platforms have shopping features for users, such as Instagram Shopping, Facebook Marketplace, and WhatsApp. Despite its convenience, online shopping has also created an illegal marketplace for wildlife species and products.

The most trafficked animal: the pangolin

Wildlife trafficking is the illegal trading or sale of wildlife species and their products. Elephant ivory, rhinoceros horns, turtle shells, pangolin scales, tiger furs, and shark fins are a few examples of highly sought-after wildlife products. As social media platforms expand, so does wildlife trafficking.

Wildlife Trafficking Exists on Social Media?

Social media platforms make it easier for people to connect with others internationally. These platforms are great for staying in contact with distant aunts and uncles, but they also create another channel for criminals and traffickers to communicate. They allow users to remain anonymous without ever meeting in person, which makes it harder for law enforcement to uncover a user’s true identity. Even so, can social media platforms be held responsible for making it easier for criminals to commit wildlife trafficking crimes?

Thanks to Section 230 of the Communications Decency Act, the answer is most likely no.

Section 230 provides broad immunity to websites for content posted by third-party users. Even when a user posts illegal content, the website cannot be held liable for it. However, there are certain exceptions where websites have no immunity, including content related to human and sex trafficking. Although these carve-outs are fairly new, they make clear that there is an interest in protecting people vulnerable to abuse.

So why don’t we apply the same logic to animals? Animals are also a vulnerable population. Many species are no match for guns, traps, and human encroachment on their natural habitats. Like children, animals may not understand what trafficking is or have the physical strength to fight back. Social media platforms like Facebook attempt to combat the online wildlife trade, but their efforts continue to fall short.

How is Social Media Fighting Back?

 

In 2018, the World Wildlife Fund and 21 tech companies created the Coalition to End Wildlife Trafficking Online, with the goal of reducing illegal online trade by 80% by 2020. While it is difficult to measure whether that goal was met, some social media platforms have created new policies in pursuit of it.

“We’re delighted to join the coalition to end wildlife trafficking online today. TikTok is a space for creative expression and content promoting wildlife trafficking is strictly prohibited. We look forward to partnering with the coalition and its members as we work together to share intelligence and best-practices to help protect endangered species.”

Luc Adenot, Global Policy Lead, Illegal Activities & Regulated Goods, TikTok

In 2019, Facebook banned the sale of animals altogether on its platform. But this did not stop users: a 2020 report found a variety of illegal wildlife still for sale on Facebook, showing that the new policies were ineffective. Furthermore, the report stated:

“29% of pages containing illegal wildlife for sale were found through the ‘Related Pages’ feature.”

This suggests that Facebook’s algorithm purposefully connects users to pages with similar content based on their interests. Such algorithms give traffickers a reason to rely on social media platforms, which will continue to be used because they do half of the work for them:

• Facilitating communication
• Connecting users to potential buyers
• Connecting users to other sellers
• Discovering online chat groups
• Discovering online community pages

This fails to reduce the reach of wildlife trafficking. Instead, it accelerates the visibility of this type of content to other users. Do Facebook’s algorithms go beyond Section 230 immunity?

Under these circumstances, Facebook maintains immunity. In Gonzalez v. Google LLC, the court explained that websites are not liable for user content when they employ content-neutral algorithms. An algorithm is content-neutral when the website does nothing more than program it to present content similar to a user’s interests; the website neither directly encourages users to publish illegal content nor treats that content differently from any other user content.
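To make the legal concept concrete: a "content-neutral" ranking rule looks only at how well pages match a user's interests, applying the same logic to every page regardless of subject. Here is a toy sketch of such a recommender; it is entirely hypothetical (the page names, tags, and function are invented for illustration and do not reflect any real platform's system):

```python
# Toy "content-neutral" recommender: pages are ranked purely by how many of
# their tags overlap with the user's interests. The rule never inspects what
# the content is about, so an illegal-wildlife page is treated no differently
# from a gardening page -- the core of the content-neutrality idea.

def recommend(user_interests: set[str], pages: dict[str, set[str]], k: int = 3) -> list[str]:
    """Return the k pages whose tags overlap most with the user's interests."""
    ranked = sorted(
        pages,
        key=lambda name: len(pages[name] & user_interests),
        reverse=True,
    )
    return ranked[:k]

# Hypothetical pages and tags, for illustration only.
pages = {
    "AquariumHobbyists": {"fish", "coral", "tanks"},
    "ExoticPetsForSale": {"exotic", "pets", "coral"},
    "LocalGardening":    {"plants", "soil"},
}
print(recommend({"coral", "fish"}, pages, k=2))
```

The point of the sketch is that nothing in the ranking logic distinguishes legal from illegal content: a user interested in "coral" is steered toward coral-related pages whatever they sell, which is exactly how a "Related Pages" feature can surface illicit listings without the platform ever treating them specially.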

What about when a website profits from illegal posts? Facebook receives a 5% selling fee for each shipment sold by a user. Since illegal wildlife products are rare, these transactions are highly profitable: a pound of ivory can be worth up to $3,300, so if a user sells five pounds of ivory from endangered elephants on Facebook, the platform profits $825 from a single transaction. The Facebook Marketplace algorithm, like the feed algorithm, is driven by user interest and engagement, so it can push illegal wildlife products to users who have searched for similar products. If illegal products are constantly pushed and successful sales are made, Facebook benefits and profits from those transactions. Does this mean that Section 230 will continue to protect Facebook when it profits from illegal activity?
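The arithmetic behind that $825 figure is straightforward; here is a quick sketch using the numbers cited above (the 5% selling fee and the $3,300-per-pound price come from the text; the function name is mine):

```python
# Sketch of the platform's cut of an ivory sale, using the figures cited
# above: a 5% Marketplace selling fee and up to $3,300 per pound of ivory.
SELLING_FEE_RATE = 0.05
PRICE_PER_POUND = 3_300  # upper-end price of a pound of ivory, in USD

def platform_fee(pounds: float) -> float:
    """Return the platform's fee on an ivory sale of the given weight."""
    sale_total = pounds * PRICE_PER_POUND
    return sale_total * SELLING_FEE_RATE

print(platform_fee(5))  # a five-pound sale: $16,500 total, $825 to the platform
```

Small per-transaction fees like this scale with sale price, which is why rare, high-value contraband is disproportionately profitable for the platform.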

Evading Detection

Even with Facebook’s prohibited-sales policy, users get creative to avoid detection. A simple search for “animals for sale” led me to a public Facebook group. Within 30 seconds of scrolling, I found one user selling live coral and another selling an aquarium system with live coral and live fish. The first listing reads “Leather $50,” but the picture shows live coral in a fish tank: “leather” identifies the type of coral without ever using the word “coral.” Even if the coral were fake, a simple Google search shows a piece of fake coral is worth less than $50. If Facebook is failing to prevent users from selling live coral and fish, it is most likely failing to prevent online wildlife trafficking on its platform more broadly.

Another common method of evading detection is posting a vague description or a photo of an item along with the words “pm me” or “dm me,” abbreviations for “private message me” and “direct message me.” This quickly directs interested users to reach out individually and discuss details in a private chat, away from the prying public eye. Sometimes a user will offer alternative contact methods, such as a personal phone number or an email address, moving the interaction off the platform entirely or onto a different one.

Because the trade is highly profitable and transactions are conducted anonymously online, the stakes for sellers are low. Social media platforms are great at concealing a user’s identity: users can hide behind fake names with no real consequences, there is no identity verification to discover who they really are, and a banned user can simply create a new account under a different alias. Some users are criminals tied to organized crime syndicates or terrorist groups, and many operate overseas, outside of the United States, making them difficult to locate. Thus, social media platforms incentivize criminals to hide among various aliases with little to lose.

Why Are Wildlife Products Popular?

Wildlife products are in high demand for human benefit and use, and humans value them for a wide range of reasons.

Do We Go After the Traffickers or the Social Media Platform?

Taking down every single wildlife trafficker and every user who facilitates these transactions would be the perfect solution to end wildlife trafficking. Realistically, it is too difficult to identify these users due to online anonymity and geographic limitations. Social media platforms, on the other hand, continue to tolerate these illegal activities.

Here, it is clear that Facebook is not doing enough to stop wildlife trafficking. With each sale made on its platform, Facebook receives a percentage. Section 230 should not protect Facebook when it reaps the benefits of illegal transactions. That goes a step too far and should open Facebook up to a new market: Section 230 liability.

Should Facebook maintain Section 230 immunity when it receives proceeds from illegal wildlife trafficking transactions? Where do we draw the line?

Mental Health Advertisements on #TikTok

The stigma surrounding mental illness has persisted since the mid-twentieth century. It is one of the many reasons why 60% of adults with a mental illness go untreated. This huge treatment disparity demonstrates a significant need to spread awareness and make treatment more readily available. Ironically, social media, long ridiculed for its negative impact on users’ mental health, has become an important tool for spreading awareness about and de-stigmatizing mental health treatment.

The content shared on social media is a combination of users sharing their experiences with a mental health condition and companies that treat mental health conditions advertising to attract potential patients. At first glance, this appears to be a powerful way to use social media to bridge treatment gaps. However, it raises concerns about vulnerable people seeing this content, self-diagnosing a condition they might not have, and undergoing unnecessary, potentially dangerous treatment. They might also forgo needed treatment because the misinformation they were exposed to leads them to overlook the true cause of their symptoms.

Attention deficit hyperactivity disorder (“ADHD”) is an example of a condition that social media has jumped on. #ADHD has 14.5 billion views on TikTok and 3 million posts on Instagram. Between 2007 and 2016, ADHD diagnoses increased by 123%, and prescriptions for the stimulants that treat ADHD have increased 16% since the pandemic. Many experts attribute this, in large part, to the use of social media in spreading awareness about ADHD and to the rise of telehealth companies that emerged to treat ADHD during the pandemic. These companies have jumped on viral trends with targeted advertisements that oversimplify what ADHD actually looks like and then offer treatment to those who click on the advertisement.

The availability of and reliance on telemedicine grew rapidly during the COVID-19 pandemic, when many restrictions on telehealth were suspended. This created an opening in the healthcare industry for new companies; ‘Done’ and ‘Cerebral’ are two that emerged during the pandemic to treat ADHD. These companies attract, accept, and treat patients through a very simplistic procedure: (1) social media advertisement, (2) short online questionnaire, (3) virtual visit, and (4) prescription.

Both Done and Cerebral have used social media platforms like Instagram and TikTok to lure potential patients to their services. The advertisements vary, but they all emphasize convenience, accessibility, and low cost to highlight how easy and affordable treatment is. Accessing the care offered is as simple as swiping up on an advertisement that appears while scrolling on the platform. These targeted ads depict people seeking treatment, taking medication, and watching their symptoms go away. Further, these companies use viral trends and memes to increase the ads’ effectiveness, typically oversimplifying complex ADHD symptoms and misleading consumers.


While these companies are increasing healthcare access for many patients through low cost and a virtual platform, this speedy version of healthcare blurs the line between offering treatment to patients and selling prescriptions to customers through social media. Medical professionals are also concerned that these companies market addictive stimulants to young users, yet remain largely unregulated due to outdated guidelines on advertisements for medical services.

The advertising model these telemedicine companies use underscores the need to modify existing law so that these advertisements are subject to the FDA’s unique oversight, which protects consumers. The companies target young consumers and other vulnerable people, encouraging them to self-diagnose based on misleading information about the diagnostic criteria. There are eighteen symptoms of ADHD, and the average person meets at least one or two of them, which is exactly what these ads emphasize.

Advertisements in the medical sphere are regulated by either the FDA or the FTC. The FDA has unique oversight over the marketing of prescription drugs by manufacturers and drug distributors, known as direct-to-consumer (“DTC”) drug advertising. Critics of prescription drug advertising highlight its negative impact on the patient-provider relationship, because patients go to providers expecting or requesting a particular prescription treatment. To minimize these risks, the FDA requires that a prescription drug advertisement be truthful, present a fair balance of the risks and benefits associated with the medication, and state an approved use of the medication. However, if an advertisement does not mention a particular drug or treatment, it eludes the FDA’s oversight.

Thus, the marketing of medical services that does not mention prescription drugs is regulated only by the Federal Trade Commission (“FTC”), in the same manner as any other consumer good: the advertisement simply must not be false or misleading.

The advertisements these telehealth companies are putting forward demonstrate that it is time for the FDA to step in, because they combine medical services and prescription drug treatment. The companies use predatory tactics to lure consumers into believing they have ADHD and then provide them direct treatment on a monthly subscription basis.

The potential for consumer harm is clear, and many experts are pointing to the similarities between stimulant drugs and the opioid epidemic. Yet the FDA has not made any changes to how it regulates advertising in light of social media. The laws on DTC drug advertising were prompted in part by consumers’ self-diagnosis and self-medication and by manufacturers’ false therapeutic claims. The telemedicine model these companies use raises these exact concerns: it targets consumers, convinces them they have a specific condition, and then offers the medication to treat it after a quick virtual visit. Instead of patients going to their doctors to request a specific prescription that may be inappropriate for their medical needs, patients go to telehealth providers that only prescribe a particular drug, which may be equally inappropriate.

Through social media, diagnosis and treatment with addictive prescription drugs can now be initiated by an interactive advertisement, in a manner that was not possible when the FDA decided these types of advertisements would not be subject to its oversight. Thus, to protect consumers, it is vital that telemedicine advertisements be subject to more intrusive monitoring than ordinary consumer goods. This would require the companies behind these advertisements to properly address the complex symptoms associated with conditions like ADHD and to give fair balance to the harms of treatment.

According to the Pew Research Center, 69% of adults and 81% of teens in the United States use social media, and about 48% of Americans regularly get their information from social media. We often talk about misinformation in politics and news stories, but it is permeating every corner of the internet. As these numbers continue to grow, it is crucial to develop new methods to protect consumers, and regulating these advertisements is only the first step.

Is it HIGH TIME we allow Cannabis Content on Social Media?

 


The Cannabis Industry is Growing like a Weed

Social media creates a relationship between consumers and their favorite brands. Just about every company has a social media presence to advertise its products and grow its brand. Large companies command the advertising market, but smaller companies and one-person startups have their place too. The opportunity to expand a brand using social media is open to just about everyone, except the cannabis industry. The developing struggle between social media companies and the politics of cannabis has created an onslaught of problems for the modern cannabis market. With recreational marijuana use legal in 21 states and Washington, D.C., and medical marijuana legal in 38 states, it may be time for this community to join the social media metaverse.

We now know that algorithms determine how many followers see a business’s content, whether the content is permitted, and whether a post or user should be deleted. The legal cannabis industry has found itself in a struggle, similar to legislators’, with social media giants (like Facebook, Twitter, and Instagram) for increased transparency about their internal processes for filtering information, banning users, and moderating their platforms. Mainstream cannabis businesses have long been prevented from making their presence known on social media; legitimate businesses are placed in a box with illicit drug users and barred from advertising on public social media sites. The legal cannabis industry is expected to be worth over $60 billion by 2024, and support for federal legalization is at an all-time high (68%). Now more than ever, brands are fighting for higher visibility among cannabis consumers.

Recent Legislation Could Open the Door for Cannabis

The question remains whether legal cannabis businesses have a place in the ever-changing landscape of the social media metaverse. Marijuana is currently a Schedule I narcotic under the Controlled Substances Act of 1970, a categorization which means it has no currently accepted medical use and a high potential for abuse. While that definition may have been acceptable when cannabis was placed on the DEA’s list back in 1971, evidence has since been presented in opposition to that decision. Historians note that overt racism, combined with New Deal reforms and bureaucratic self-interest, is often blamed for the first round of federal cannabis prohibition under the Marihuana Tax Act of 1937, which restricted possession to those who paid a steep tax for a limited set of medical and industrial applications. The legitimacy cannabis businesses have gained over the past few decades through individual state legalization (both medical and recreational) is at the center of the debate over whether they should be able to market themselves as any other business can. Legislation like the MORE Act (Marijuana Opportunity Reinvestment and Expungement Act), which was passed by the House of Representatives, gives companies some hope that they may one day be seen as legitimate businesses. If passed into law, it would lower marijuana’s scheduling or remove it from the list entirely, which would blow the hinges off the cannabis industry; legitimate businesses in states that have legalized its use are patiently waiting in the wings for that moment.

States like New York have made great strides in passing legislation to legalize marijuana the “right” way and legitimize business, while simultaneously separating themselves from the illegal and dangerous drug trade that has parasitically attached itself to this movement. The Marijuana Regulation and Tax Act (MRTA) establishes a new framework for the production and sale of cannabis, creates a new adult-use cannabis program, and expands the existing medical cannabis and cannabinoid (CBD) hemp programs. MRTA also established the Office of Cannabis Management (OCM), the governing body for cannabis reform and regulation, particularly for emerging businesses that wish to establish a presence in New York. The OCM oversees the licensure, cultivation, production, distribution, sale, and taxation of medical, adult-use, and cannabinoid hemp products within New York State. This sort of regulatory body and structure is becoming commonplace in a world once deemed a “wild west” of regulatory abandonment and lawlessness.

 

But, What of the Children?

In light of all the regulation slowly surrounding cannabis businesses, will the rapidly growing social media landscape have to concede to the industry’s demands and recognize its presence? Even with regulation, cannabis exposure remains a concern for many when it comes to the more impressionable members of the user pool. Children and young adults are spending more time than ever online and on social media. On average, daily screen use went up among tweens (ages 8 to 12) to five hours and 33 minutes from four hours and 44 minutes, and among teens (ages 13 to 18) to eight hours and 39 minutes from seven hours and 22 minutes. This group of social media consumers is of particular concern to both legislators and the social media companies themselves.

MRTA offers protection against companies advertising with the intent of mimicking common brands marketed to children. Companies are restricted to using their name and logo, with explicit language that the item inside the wrapper contains cannabis or tetrahydrocannabinol (THC). Between MRTA’s restrictions, strict community guidelines on several social media platforms, and government regulations around the promotion of marijuana products, many brands are having a hard time building a community presence on social media.

In response, cannabis companies have resorted to creating their own platforms to promote the content they are prevented from posting elsewhere. Big-name rapper and cannabis enthusiast Berner, who created the popular edible brand “Cookies,” was approached to partner with the creators of such platforms to bolster their brands and raise awareness. Unfortunately, these sites became exactly what mainstream social media companies feared when writing their guidelines: an unsavory haven for illicit drug use and other illegal behavior. One of the pioneer apps in this field, Social Club, was removed from the app store after multiple reports of illegal behavior. The apps have since been more internally regulated but have not taken off as their creators intended, and legitimate cannabis businesses are still blocked from advertising on mainstream apps.

These Companies Won’t go Down Without a Fight

While cannabis companies generally aren’t allowed on social media sites, special rules apply if a legal cannabis business does have a presence there. Social media is the fastest and most efficient way to advertise to a desired audience, and with appropriate regulatory oversight and within the confines of the changing law, social media sites may start to feel pressure to allow more advertising from cannabis brands.

A petition has been created to bring Meta, the company that owns Facebook and Instagram among other sites, to the table to discuss the growing frustrations with the strict restrictions on its platforms. The petition on Change.org has amassed 13,000 signatures. Arden Richard, the founder of WeedTube, has been outspoken about the issue: “This systematic change won’t come without a fight. Instagram has already begun deleting posts and accounts just for sharing the petition.” He also stated, “The cannabis industry and community need to come together now for these changes and solutions to happen.” If not, he fears, “we will be delivering this industry into the hands of mainstream corporations when federal legalization happens.”

Social media companies recognize the magnitude of the legal cannabis community; they have been banning its content nonstop since its inception. The changing landscape of the cannabis industry, however, has made the decision to ban that content more difficult. Until federal regulation changes, businesses operating in states that have legalized cannabis will continue to be banned by the largest advertising platforms in the world.

 

The Rise of E-personation

Social media allows millions of users to communicate with one another on a daily basis, but do you really know who is behind the computer screen?

As social media continues to expand into the enormous entity we know today, users become ever more susceptible to abuse online. Impersonation through electronic means, often referred to as e-personation, is a rapidly growing trend on social media. E-personation is extremely troublesome because it requires far less information than other typical forms of identity theft: to create a fake social media page, all an e-personator needs is the victim’s name and perhaps a profile picture. While creating a fake account is relatively easy for the e-personator, the impact on the victim’s life can be devastating.

E-personation Under State Law

It wasn’t until 2008 that New York became the first state to recognize e-personation as a criminally punishable form of identity theft. Under New York law, “a person is guilty of criminal impersonation in the second degree when he … impersonates another by communication by internet website or electronic means with intent to obtain a benefit or injure or defraud another, or by such communication pretends to be a public servant in order to induce another to submit to such authority or act in reliance on such pretense.”

Since 2008, other states, such as California, New Jersey, and Texas, have also amended their identity theft statutes to make online impersonation a criminal offense. New Jersey amended its impersonation and identity theft statute in 2014, after an e-personation case revealed that the statute lacked any mention of “electronic communication” as a means of unlawful impersonation. In 2011, New Jersey Superior Court Judge David Ironson in Morris County declined to dismiss an identity theft indictment against Dana Thornton. Ms. Thornton allegedly created a fictitious Facebook page that portrayed her ex-boyfriend, a narcotics detective, unfavorably. On the page, Thornton, pretending to be her ex, posted admissions to hiring prostitutes, using drugs, and even contracting a sexually transmitted disease. Thornton’s defense counsel argued that New Jersey’s impersonation statute did not apply because online impersonation was not explicitly mentioned in it, so Thornton’s actions fell outside the scope of activity the statute proscribes. Judge Ironson disagreed, noting that the New Jersey statute is “clear and unambiguous” in forbidding impersonation activities that cause injury and need not specify the means by which the injury occurs.

Currently under New Jersey law, a person is guilty of impersonation or theft of identity if … “the person engages in one or more of the following actions by any means, but not limited to, the use of electronic communications or an internet website:”

    1. Impersonates another or assumes a false identity … for the purpose of obtaining a benefit for himself or another or to injure or defraud another;
    2. Pretends to be a representative of some person or organization … for the purpose of obtaining a benefit for himself or another or to injure or defraud another;
    3. Impersonates another, assumes a false identity or makes a false or misleading statement regarding the identity of any person, in an oral or written application for services, for the purpose of obtaining services;
    4. Obtains any personal identifying information pertaining to another person and uses that information, or assists another person in using the information … without that person’s authorization and with the purpose to fraudulently obtain or attempt to obtain a benefit or services, or avoid the payment of debt … or avoid prosecution for a crime by using the name of the other person; or
    5. Impersonates another, assumes a false identity or makes a false or misleading statement, in the course of making an oral or written application for services, with the purpose of avoiding payment for prior services.

As social media continues to grow, more state legislatures will likely amend their impersonation and identity theft statutes to incorporate e-personation.

E-personators Twitter Takeover

Over the last week, e-personation has erupted into chaos on Twitter. Elon Musk bought Twitter on October 27, 2022, for $44 billion and immediately began firing top Twitter executives, including the chief executive and chief financial officer. With the company on the verge of bankruptcy, Musk needed a plan to generate more subscription revenue, and so the problematic Twitter Blue subscription was born. Under the Twitter Blue policy, users could pay $8 a month and receive the blue verification check mark next to their Twitter handle.

The unregulated distribution of the blue verification check mark has led to chaos on Twitter by allowing e-personators to run amok. Traditionally, the blue check mark has been a symbol of authenticity for celebrities, politicians, news outlets, and other companies; it was created to protect those most susceptible to e-personation. Yet when the rollout of Twitter Blue began on November 9, 2022, the policy did not specify any requirements for verifying a user’s authenticity beyond payment of the monthly fee.

Shortly after the rollout, e-personators began taking advantage of their newly purchased verification by impersonating celebrities, pharmaceutical companies, politicians, and even the new CEO of Twitter, Elon Musk. For example, comedian Kathy Griffin was one of the first accounts suspended after Twitter Blue’s launch, for changing her Twitter name and profile photo to Elon Musk and impersonating the new CEO. Griffin was not the only user to impersonate Musk, who responded by tweeting, “Going forward, any Twitter handles engaging in impersonation without clearly specifying ‘parody’ will be permanently suspended.”

Musk’s threats of permanent suspension did not stop e-personators from trolling on Twitter. One used their blue check verification to masquerade as Eli Lilly and Company, an American pharmaceutical company; the fake Eli Lilly account tweeted that the company would be providing free insulin to its customers, and the real account tweeted an apology shortly thereafter. Another e-personator impersonated former United States President George W. Bush, tweeting “I miss killing Iraqis” along with a sad face emoji. E-personators did not stop there; many more professional athletes, politicians, and companies were impersonated under the new Twitter Blue subscription policy. An internal Twitter log seen by the New York Times indicated that 140,000 accounts had signed up for Twitter Blue. It is unlikely that Musk will be able to discover every e-personator account and remedy this spread of misinformation.

Twitter’s Terms and Conditions

Before the rollout of Twitter Blue, Twitter’s guidelines included a policy on misleading and deceptive identities. Under Twitter’s policy, “you may not impersonate individuals, groups, or organizations to mislead, confuse, or deceive others, nor use a fake identity in a manner that disrupts the experience of others on Twitter.” The guidelines further explain that impersonation is prohibited, specifically that “you can’t pose as an existing person, group, or organization in a confusing or deceptive manner.” Based on the terms of Twitter’s guidelines, the recent e-personators are in direct violation of Twitter’s policy, but are these users also criminally liable?

Careful, You Could Get a Criminal Record

Social media networks, such as Facebook, Instagram, and Twitter, have little incentive to protect the interests of individual users because they cannot be held liable for anything their users post. Under Section 230 of the Communications Decency Act, “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Because of this lack of responsibility placed on social media platforms, victims of e-personation often have a hard time removing the fake online presence. Ironically, in order for a victim to gain control of an e-personator’s fake account, the victim must provide the social media platform with confidential identifying information, while the e-personator effectively remains anonymous.

By now you’re probably asking yourself: but what about the e-personators’ criminal liability? Under some state statutes, like those mentioned above, e-personators can be found criminally liable. However, several barriers limit the effectiveness of these prosecutions. For example, e-personators maintain great anonymity, so finding the actual person behind a fake account can be difficult. Furthermore, many of the state statutes that criminalize e-personation require proving the perpetrator’s intent, which may also hinder prosecution. Lastly, social media is a global phenomenon, which means jurisdictional issues will arise when bringing these cases to court. Unfortunately, only a minority of states have amended their impersonation statutes to include e-personation. Hopefully, as social media continues to grow, more states will follow suit and e-personation will be prosecuted more efficiently and effectively. Remember, not everyone on social media is who they claim to be, so be cautious.

Update Required: An Analysis of the Conflict Between Copyright Holders and Social Media Users

Opening

For anyone who is chronically online, like yours truly, we have all in one way or another seen our favorite social media influencers, artists, commentators, and content creators complain about their problems with the current U.S. Intellectual Property (IP) system. Whether their posts are deleted without explanation or portions of their video files are muted, the combination of factors leading to copyright issues on social media is endless. This, in turn, has a markedly negative impact on free and fair expression on the internet, especially within the context of our contemporary online culture. For better or worse, interaction in society today is intertwined with the services of social media sites. Conflict arises when the interests of copyright holders clash with this reality. Copyright holders are empowered by byzantine and unrealistic laws that hamper our ability to exist online as freely as we do in real life. While they do have legitimate and fundamental rights that need to be protected, those rights must be balanced against desperately needed reform. People’s interaction with society and culture must not be hampered, for that is one of the many foundations of a healthy and thriving society. To understand this, I venture to analyze the current legal infrastructure we find ourselves in.

Current Relevant Law

The current controlling laws for copyright issues on social media are the Copyright Act of 1976 and the Digital Millennium Copyright Act (DMCA). The DMCA is most relevant to our analysis; it gives copyright holders relatively unrestrained power to demand removal of their property from the internet and to punish those using illegal methods to get hold of their property. This broad law, of course, impacted social media sites. Title II of the law added 17 U.S. Code § 512 to the Copyright Act of 1976, creating several safe harbor provisions for online service providers (OSPs), such as social media sites, when hosting content posted by third parties. The most relevant of these safe harbors here is 17 U.S. Code § 512(c), which states that an OSP cannot be held liable for monetary damages if it meets several requirements and provides copyright holders a quick and easy way to claim their property. The mechanism, known as a “notice and takedown” procedure, varies by social media service and is outlined in each service’s terms and conditions (YouTube, Twitter, Instagram, TikTok, Facebook/Meta). Regardless, they all have a complaint form or application that follows the rules of the DMCA and usually will rapidly strike objectionable social media posts by users. 17 U.S. Code § 512(g) does provide the user some leeway with an appeal process, and § 512(f) imposes liability on those who send unjustified takedown notices. Nevertheless, a perfect balance of rights is not achieved.

The doctrine of fair use, codified at 17 U.S. Code § 107 via the Copyright Act of 1976, also plays a massive role here. It established a legal pathway for the use of copyrighted material for “purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research” without having to acquire rights to said IP from the owner. This legal safety valve has been a blessing for social media users, especially with recent victories like Hosseinzadeh v. Klein, which protected reaction content from DMCA takedowns. Cases like Lenz v. Universal Music Corp. further established that copyright holders must consider fair use when preparing takedowns. Nevertheless, copyright holders still fail to consider those rights, and sites are quick to react to DMCA complaints. Furthermore, the flawed reporting systems of social media sites invite abuse by unscrupulous actors faking ownership. On top of that, such legal actions can be psychologically and financially intimidating, especially when facing off against a major IP holder, adding to the unbalanced power dynamic between the holder and the poster.

The Telecommunications Act of 1996, which focuses primarily on cellular and landline carriers, is also particularly relevant to social media companies in this conflict. At the time of its passing, the internet was still in its infancy, so the Act does not reflect the cultural paradigm we now find ourselves in. Specifically, the contentious Section 230 of the Communications Decency Act (Title V of the 1996 Act) works against social media companies in this instance, incorporating a broad rule on copyright infringement. 47 U.S. Code § 230(e)(2) states in no uncertain terms that “nothing in this section shall be construed to limit or expand any law pertaining to intellectual property.” Courts, as in Perfect 10, Inc. v. CCBill LLC, have read this to mean that Section 230 provides no shield against intellectual property claims arising from user activity. This gap in the protective armor of Section 230 is a great concern to such companies; therefore, they react strongly to such issues.

What is To Be Done?

Arguably, fixing the issues around copyright on social media is far beyond the capacity of current legal mechanisms. With billions of posts each day across various sites, policing by copyright holders and platforms is far beyond reason. It will take serious reform in the socio-cultural, technological, and legal arenas before a true balance of liberty and justice can be established. Perhaps we can start with an understanding by copyright holders not to overreact when their property is posted online. Popularity is key to success in business, so shouldn’t you value the free marketing that comes with your copyrighted property being shared honestly within the cultural sphere of social media? Social media sites can also expand their DMCA case management teams or create tools that let users, particularly influencers and content creators, credit and even share revenue with the copyright holder. Finally, congressional action is desperately needed, as we have entered a new era that requires new laws. That being said, achieving a balance between the free exchange of ideas and creations and the rights of copyright holders must be the cornerstone of the government’s approach to socio-cultural expression on social media. That is the only way we can progress as an ever more online society.

 

Image by pikisuperstar on Freepik

Social Media Addiction

Social media was created as an educational and informational resource for American citizens. Nonetheless, it has become a tool for AI bots and tech companies to predict our next moves by manipulating our minds through social media apps. Section 230 of the Communications Decency Act helped create the modern internet we use today; it began as part of a 1996 law that regulated online pornography. Specifically, Section 230 provides internet services and users legal immunity from liability for content posted online by others. Tech companies do not just want to advertise to social media users; they want to predict a user’s next move. These manipulative tactics have wreaked havoc on the human psyche and eroded the social aspects of life by keeping people glued to a screen so big tech companies can profit from it.

Social media has changed a generation for the worse, causing depression and sometimes suicide, as tech designers manipulate social media users for profit. Social media companies have for decades been shielded from legal consequences for what happens on their platforms. However, recent studies and court cases suggest this may change, allowing big tech social media companies to be held accountable. Frances Haugen, a former Facebook employee turned whistleblower, told the Senate not to trust Facebook, testifying that it knowingly pushed products that harm children and young adults in order to further profits, conduct that Section 230 should not be able to shield. Haugen further stated that researchers at Instagram (a Facebook-owned social media app) knew their app was worsening teenagers’ body images and mental health, even as the company publicly downplayed these effects.

A California bill, the Social Media Platform Duty to Children Act, aims to make tech firms liable for social media addiction in children. It would allow parents and guardians to sue platforms that they believe addicted children in their care through advertising, push notifications, and design features that promote compulsive use, particularly the continual consumption of harmful content on issues such as eating disorders and suicide. The bill would hold companies accountable regardless of whether they deliberately designed their products to be addictive.

Social media addiction is a psychological, behavioral dependence on social media platforms such as Instagram, Snapchat, Facebook, TikTok, BeReal, etc. Mental disorders are defined as conditions that affect one’s thinking, feeling, mood, and behaviors. Since the era of social media began, especially from 2010 on, doctors and physicians have had a hard time diagnosing patients with social media addiction and mental disorders, since the two seem to go hand in hand. Social media has been seen to improve mood and boost health promotion through ads. At the same time, however, it can amplify the negative aspects of activities that youth (ages 13-21) take part in. Generation Z (“Zoomers”), people born from the late 1990s to the early 2010s, face an increased risk of social media addiction, which has been linked to depression.

One study used the Difficulties in Emotion Regulation Scale (“DEES”) and the Experiences in Close Relationships scale (“ECR”) to characterize the addictive potential of social media communication applications. The first measure, the DEES, was a 36-item, six-factor self-report measure assessing:

  1. awareness of emotional responses,
  2. lack of clarity of emotional reactions,
  3. non-acceptance of emotional responses,
  4. limited access to emotion regulation strategies perceived as applicable,
  5. difficulties controlling impulses when experiencing negative emotions, and
  6. problems engaging in goal-directed behaviors when experiencing negative emotions. 

The second measure, the ECR-SV, is a twelve-item test evaluating adult attachment. The scale comprises two six-item subscales, anxiety and avoidance, with each item rated on a 7-point scale ranging from 1 = strongly disagree to 7 = strongly agree. Depression, anxiety, and mania were measured against DSM-5 criteria. Under the study’s classification rules, endorsing at least five of the nine items on the depression scale during the same two-week period classified depression; endorsing at least three of the six symptoms on the anxiety scale classified anxiety; and endorsing at least three of the seven traits on the mania scale classified mania.
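As a rough illustration only, the cutoff rules described above amount to simple threshold checks. The sketch below shows the logic in Python; the function and parameter names are my own invention, not the study’s:

```python
# Illustrative only: threshold rules paraphrased from the study described above.
# Function and parameter names are hypothetical, invented for this sketch.
def classify_symptoms(depression_items, anxiety_items, mania_items):
    """Apply the study's cutoff rules to counts of endorsed scale items.

    depression_items: endorsed items out of 9 on the depression scale
    anxiety_items:    endorsed symptoms out of 6 on the anxiety scale
    mania_items:      endorsed traits out of 7 on the mania scale
    """
    return {
        "depression": depression_items >= 5,  # at least 5 of 9 items
        "anxiety": anxiety_items >= 3,        # at least 3 of 6 symptoms
        "mania": mania_items >= 3,            # at least 3 of 7 traits
    }
```

A participant endorsing five depression items, two anxiety symptoms, and three mania traits would be classified with depression and mania but not anxiety under these rules.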

The objectives of these studies were to show the high prevalence of social media addiction among college students and to confirm statistically, by reviewing previous studies, that there is a positive relationship between social media addiction and mental disorders.

The study identifies four leading causes of social media abuse: 1) the increase in depression symptoms has occurred in conjunction with the rise of smartphones since 2007; 2) young people, especially Generation Z, spend less time connecting with friends and more time connecting with digital content, and Generation Z is known for quickly losing focus at work or study because its members spend so much time watching other people’s lives in an age of information explosion; 3) self-esteem drops when users compare themselves on social media to people who appear more beautiful, more famous, and wealthier, so users may become less emotionally satisfied and feel socially isolated and depressed; and 4) studying pressure and increasing homework loads may cause mental problems for students, reinforcing the pairing of social media addiction and psychiatric disorders.

The popularity of the internet, smartphones, and social networking sites is unequivocally part of modern life. Nevertheless, it has contributed to the rise of depressive and suicidal symptoms in young people. Shareholders of social media apps should be more aware of the effect their advertising has on users, and Congress should regulate social media as a matter of public policy to prevent harms such as depression and suicide among young people. The best the American people can do is shine a light, for the public and for Congress, on the companies that exploit and abuse their users, and hold them accountable as Haugen did. There is hope for the future, as the number of bills addressing social media and its mental health effects has increased since 2020.

Shadow Banning Does(n’t) Exist


#mushroom

Recent posts from #mushroom are currently hidden because the community has reported some content that may not meet Instagram’s community guidelines.

 

Dear Instagram, get your mind outta the gutter! Mushrooms are probably one of the most searched hashtags in my Instagram history. It all started when I found my first batch of wild chicken-of-the-woods mushrooms. I wanted to learn more about mushroom foraging, so I consulted Instagram. I knew there were tons of foragers sharing photos, videos, and tips about finding different species. But imagine not being able to find content related to your hobby.

What if you loved eggplant varieties? But nothing came up in the search bar? Perhaps you’re an heirloom eggplant farmer trying to sell your product on social media? Yet you’ve only gotten two likes—even though you added #eggplantman to your post. Shadow banned? I think yes.

The deep void of shadow banning is a social media user’s worst nightmare. Especially for influencers whose career depends on engagement. Shadow banning comes with so many uncertainties, but there are a few factors many users agree on:

      1. Certain posts and videos remain hidden from other users
      2. It hurts user engagement
      3. It DOES exist

#Shadowbanning

Shadow banning is the act of restricting or censoring a user’s content on social media without notifying the user. It usually occurs when a user posts content deemed inappropriate or in violation of the platform’s guidelines. If a user is shadow banned, the user’s content is visible only to the user and their followers.

Influencers, artists, creators, and business owners are the most vulnerable victims of the shadow banning void, because they depend the most on user engagement, growth, and reaching new audiences. As much as it hurts them, it also hurts other users searching for that specific content. There’s no clear way of telling whether you’ve been shadow banned: you don’t get a notice, and you can’t make an appeal to fix your lack of engagement. You will, however, see a decline in engagement, because no one can see your content in their feeds.

According to the head of Instagram, Adam Mosseri, “shadow banning is not a thing.” Meta CEO Mark Zuckerberg has likewise stated in an interview that Facebook has “no policy that is shadow banning.” Even a Twitter blog post stated, “People are asking us if we shadow ban. We do not.” There is no official way of knowing whether it exists, but there is evidence that it does take place on various social media platforms.

#Shadowbanningisacoverup?

Pole dancing on social media probably would have been deemed inappropriate 20 years ago. But this isn’t the case today. Pole dancing is a growing sport industry, and the stigma associating strippers with pole dancing is shifting with its increasing popularity and trendy nature. However, social media standards may still be stuck in the early 2000s.

In 2019, user posts with hashtags including #poledancing, #polesportorg, and #poledancenation were hidden from Instagram’s Explore page. This affected many users who connect and share new pole dancing techniques with one another. It also had a huge impact on businesses that rely on the pole community to promote their products and services: pole equipment, pole clothing, pole studios, pole sports competitions, pole photographers, and more.

Due to a drastic decrease in user engagement, a petition directing Instagram to stop pole dancing censorship was circulated worldwide. Is pole dancing so controversial it can’t be shared on social media? I think not. There is so much to learn from sharing information virtually, and Section 230 of the Communications Decency Act supports this.

Section 230 was passed in 1996, and it provides limited federal immunity to websites from lawsuits if a user posts something illegal. This means that if User X decides to post illegal content on Twitter, the Twitter platform could not be sued because of User X’s post. Section 230 does not stop the user who posted such content from being sued, so User X can still be held accountable.

It is clear that Section 230 embraces the importance of sharing knowledge. Section 230(a)(1) tells us this. So why would Instagram want to shadow ban pole dancers who are simply sharing new tricks and techniques?

The short answer is: It’s inappropriate.

But users want to know: what makes it inappropriate?

Is it the pole? A metal pole itself does not seem so.

Is it the person on the pole? Would visibility change depending on gender?

Is it the tight clothing? Well, I don’t see how it is any different from my 17 bikini photos on my personal profile.

Section 230 also contains carve-outs for sex-related offenses, such as sex trafficking. And this is where the line is drawn between appropriate and inappropriate content: sex trafficking is illegal, but pole dancing is not. Instagram’s community guidelines support this distinction; under the guidelines, sharing pole dancing content would not violate them. Shadow banning clearly seeks to suppress certain content, and in this case, the pole dancing community was a target.

Cultural expression also battles with shadow banning. In 2020, Instagram shadow banned Caribbean Carnival content. The Caribbean Carnival is an elaborate celebration to commemorate slavery abolition in the West Indies and showcases ensembles representing different cultures and countries.

User posts with hashtags including #stluciacarnival, #fuzionmas, and #trinidadcarnival2020 could not be found or viewed by other users. Some people viewed this as suppressing culture and hurting tourism. Additionally, Facebook and Instagram shadow banned #sikh for almost three months. After extensive user feedback, the hashtag was restored, but Instagram failed to explain how or why it was blocked.

In March 2020, The Intercept obtained internal TikTok documents alluding to shadow banning methods. Documents revealed moderators were to suppress content depicting users with “‘abnormal body shape,’ ‘ugly facial looks,’ dwarfism, and ‘obvious beer belly,’ ‘too many wrinkles,’ ‘eye disorders[.]'” While this is a short excerpt of the longer list, this shows how shadow banning may not be a coincidence at all.

Does shadow banning exist? What are the pros and cons of shadow banning?


#Ad: The Rise of Social Media Influencer Marketing

When was the last time you bought something from a billboard or a newspaper? Probably not recently. Instead, advertisers are now spending their money on digital marketing platforms, and at the pinnacle of these platforms are influencers. Because millennial (Generation Y) and Generation Z consumers spend so much time consuming user-generated content, the creator begins to feel like an acquaintance and could even be categorized as a friend. Once that happens, the influencer has more power to do what their name suggests and influence the user to purchase. This is where our current e-commerce market is headed.

Imagine this:

If a person you know and trust suggests you try a brand new product, you would probably try it. Now, if that same person were to divulge to you that they were paid to tell you all about how wonderful this product is, you would probably have some questions about the reality of their love for this product, right?

Lucky for us consumers, the Federal Trade Commission (FTC) has established an Endorsement Guide so we can all have that information when we are being advertised to by our favorite social media influencers.

 

The times have changed, quickly.

Over the past eight years, there has been a resounding shift in the way companies market their products, to the younger generation specifically. The unprecedented changes throughout the physical and digital marketplace have forced brands to rethink their strategies for reaching the desired consumer. Businesses now rely on digital and social media marketing more than ever before.

With the rise of social media and apps like Vine and TikTok came a new landscape with almost untapped marketing potential. This was how companies could finally reach the younger generation of consumers: you know, the ones with their heads craned over a phone and their thumbs constantly scrolling. These were the people advertisers had trouble reaching, until now.

 

What the heck is an “Influencer”?

The question “What is an influencer?” has become standard in conversations among social media users. We know who they are, but the term is loosely defined. Rachel David, a popular YouTube personality, defined it with the least ambiguity as “[s]omeone like you and me, except they chose to consistently post stuff online.” This definition seems harmless enough until you understand that the reality is much more nuanced: these individuals are being paid huge sums of money to push products that they most likely don’t use themselves, despite what their posts may say. The reign of celebrity-endorsed marketing is shifting to a new form of celebrity called the “influencer.” High-profile celebrities were too far removed from the average consumer. A new category emerged with the rise of social media use, and the only difference between a celebrity and a famous influencer is…relatability. Consumers could now see themselves in the influencer and would default to trusting them and their opinion.

One of the first instances of influencers flexing their advertising muscle was on the popular app Vine. Vine was a revolutionary app that frankly existed before its time. It introduced users to a virtual experience that matched their dwindling attention spans: clips were no more than six seconds long and would repeat indefinitely until the user swiped to the next one. These short clips captured the user’s attention and provided that much-needed dopamine hit. The platform began rising in popularity, rivaling apps like YouTube, the powerhouse of user engagement. Unlike YouTube, however, Vine’s shorter videos required less work, so creators produced more of them. Because the videos were so short, consumers wanted more and more content, which opened the door for other users to blast their own, creating an explosion of “Vine famous” creators. Casual creators were now, almost overnight, amassing millions of followers, followers they could now influence. Vine failed to capitalize on its users and to monetize its success, and it ultimately went under in 2016. But what happened to all of those influencers? They made their way to alternate platforms like YouTube, Instagram, and Facebook, taking with them their followers and, subsequently, their influencer status. These popular influencers went from complete strangers to people the users inherently trusted because of the perceived transparency into their daily lives.

 

Here come the #ads.

Digital marketing was not introduced by Vine, but putting a friendly influencer face behind the product has some genesis there. Consumerism changed as social media traffic increased: e-commerce rose categorically when products were right in front of the consumer’s face, even embedded into the content they were viewing. Users were watching advertisements and didn’t even care. YouTube channels dedicated solely to reviewing and rating products became an incredibly popular genre of video. Advertisers saw content becoming promotion for products, and the shift away from traditional marketing strategies took off. Digital, inter-content advertising was the new way to reach this generation.

Now that influencer marketing is a mainstream form of marketing, the importance of the FTC Endorsement Guide has grown. Creators are required to be transparent about their intentions in marketing a product. The FTC guide suggests ways influencers can effectively market the product they are endorsing while remaining transparent about their motivations to the user, and it provides examples of how and when to disclose that a creator is sponsoring or endorsing a particular product, rules that must be followed to avoid costly penalties. Most users prefer to keep their content as “on brand” as possible and resort to the most surreptitious option: disguising the “#ad” within a litany of other relevant hashtags.

The age of advertising has certainly changed right in front of our eyes, literally. As long as influencers remain transparent about their involvement with the products they show in their content, consumers will inherently trust them and their opinion on the product. So sit back, relax, and enjoy your scrolling. But, always be cognizant that your friendly neighborhood influencer may have monetary motivation behind their most recent post.


Jonesing For New Regulations of Internet Speech

From claims that the moon landing was faked to Area 51, the United States loves its conspiracy theories. In fact, a study sponsored by the University of Chicago found that more than half of Americans believe at least one conspiracy theory. While this is not a new phenomenon, the increasing use and reliance on social media has allowed misinformation and harmful ideas to spread with a level of ease that wasn’t possible even twenty years ago.

Individuals with a large platform can express an opinion that harms the people personally implicated in the ‘information’ being spread. Presently, a plaintiff’s best option for challenging harmful speech is a claim for defamation. The inherent problem is that opinions are protected by the First Amendment and, thus, not actionable as defamation.

This leaves injured plaintiffs limited in their available remedies because statements in the context of the internet are more likely to be seen as an opinion. The internet has created a gap where we have injured plaintiffs and no available remedy. With this brave new world of communication, interaction, and the spread of information by anyone with a platform comes a need to ensure that injuries sustained by this speech will have legal recourse.

Recently, Alex Jones lost a defamation suit and was ordered to pay $965 million to the families of the Sandy Hook victims after claiming that the Sandy Hook shooting that occurred in 2012 was a “hoax.” Although the families prevailed at trial, the statements at issue do not fit neatly into the well-established law of defamation, which makes reversal on appeal likely.

The elements of defamation require that the defendant publish a false statement purporting it to be true, which results in some harm to the plaintiff. However, just because a statement is false does not mean that the plaintiff can prove defamation because, as the Supreme Court has recognized, false statements still receive certain First Amendment protections. In Milkovich v. Lorain Journal Co., the Court held that “imaginative expression” and “loose, figurative, or hyperbolic language” are protected by the First Amendment.

The characterization of something as a “hoax” has been held by courts to fall into this category of protected speech. In Montgomery v. Risen, a software developer brought a defamation action against an author who claimed that the plaintiff’s software was a “hoax.” The D.C. Circuit held that characterizing something as an “elaborate and dangerous hoax” is hyperbolic speech, which creates no basis for liability. This holding has been mirrored by several courts, including the District Court of Kansas in Yeager v. National Public Radio, the District Court of Utah in Nunes v. Rushton, and the Superior Court of Delaware in Owens v. Lead Stories, LLC.

The other statements Alex Jones made regarding Sandy Hook are also hyperbolic language. These statements include: “[i]t’s as phony as a $3 bill,” “I watched the footage, it looks like a drill,” and “my gut is… this is staged. And you know I’ve been saying the last few months, get ready for big mass shootings, and then magically, it happens.” While these statements are offensive and cruel to the suffering families, it is difficult to characterize them as claims objectively purported to be true. ‘Phony,’ ‘my gut is,’ ‘looks like,’ and ‘magically’ qualify each statement as a subjective opinion based on his interpretation of the events that took place.

It is indisputable that the statements Alex Jones made caused harm to these families. They have been subjected to harassment, online abuse, and death threats from his followers. However, no matter how harmful these statements are, harm alone does not make them defamatory. Despite this, a reasonable jury was so appalled by his conduct that it found for the plaintiffs. This is essentially reverse jury nullification: the jury decided that Jones was culpable and should be held legally responsible even though there was no adequate basis for liability.

The jury’s determination demonstrates that current legal remedies are inadequate to regulate potentially harmful speech that can spread like wildfire on the internet. The influence that a person like Alex Jones has over his followers establishes a need for new or updated laws that hold public figures to a higher standard even when they are expressing their opinion.

A possible starting point for regulating harmful internet speech at the federal level might be the Commerce Clause, which allows Congress to regulate instrumentalities of commerce. The internet, by its design, is an instrumentality of interstate commerce because it enables the communication of ideas across state lines.

Further, the Federal Anti-Riot Act, which was passed in 1968 to suppress civil rights protestors, might be an existing law that can serve this purpose. This law makes it a felony to use a facility of interstate commerce to (1) incite a riot; or (2) organize, promote, encourage, participate in, or carry on a riot. The act defines riot as:

 [A] public disturbance involving (1) an act or acts of violence by one or more persons part of an assemblage of three or more persons, which act or acts shall constitute a clear and present danger of, or shall result in, damage or injury to the property of any other person or to the person of any other individual or (2) a threat or threats of the commission of an act or acts of violence by one or more persons part of an assemblage of three or more persons having, individually or collectively, the ability of immediate execution of such threat or threats, where the performance of the threatened act or acts of violence would constitute a clear and present danger of, or would result in, damage or injury to the property of any other person or to the person of any other individual.

Under this definition, there might be a basis for holding Alex Jones accountable for organizing, promoting, or encouraging a riot through a facility (the internet) of interstate commerce. The acts of his followers in harassing the families of the Sandy Hook victims might constitute a public disturbance within this definition because it “result[ed] in, damage or injury… to the person.” While this demonstrates one potential avenue for regulating harmful internet speech, new laws might also need to be drafted to meet the evolving function of social media.

In the era of the internet, public figures have an unprecedented ability to spread misinformation and incite lawlessness. This is true even if their statements would typically constitute an opinion because the internet makes it easier for groups to form that can act on these ideas. Thus, in this internet age, it is crucial that we develop a means to regulate the spread of misinformation that has the potential to harm individual people and the general public.

Corporate Use of Social Media: A Fine Line Between What Could-, Would-, and Should-be Posted


Introduction

In recent years, social media has taken hold of nearly every aspect of human interaction and turned the way we communicate on its head. Social media apps’ capacity to disseminate information instantaneously has affected the way many sectors of business operate. From entertainment and social causes to environmental, educational, and financial matters, social media has bewildered the legal departments of in-house general counsel across all industries. Additionally, the generational gap between the person actually posting for an account and their supervisor has only exacerbated the potential for communications to miss their mark and cause controversy or adverse effects.

These days, most companies have social media accounts, but not all accounts are created equal, and they certainly are not all monitored the same. In most cases, these accounts are not regulated at all except by their own internal managers and #CancelCulture. Depending on the product or company, social media managers have done their best to stay abreast of changes in popular hashtags, trends, and challenges, and the overall shift from a corporate tone of voice to one of relatability (more Gen-Z-esque, if you will). But with this shift, the rights and implications of corporate speech on social media have been put to the test.

Changes in Corporate Speech on Social Media 

In the last 20 years, corporate use of social media has become a battle for relevance. With the decline of print media, social media and its apps have emerged as a marketing necessity. Early social media use was predominantly geared toward social purposes. If we look at the origins of Facebook, Myspace, and Twitter, it is clear that these apps were intended for casual, not corporate, communications. This changed with the introduction of LinkedIn, which sparked a dynamic shift toward business and professional use of social media.

Today, social media is used to report on almost every aspect of our lives: disaster preparation and emergency responses, political updates, dating and relationships, and customer service tasks. It is also more common nowadays for companies to face backlash for not speaking out on social media after a major social or political movement occurs. Social media is also increasingly used for research with geolocation technology, for organizing demonstrations and political unrest, and, in the business context, for sales, marketing, networking, and hiring or recruiting practices.

These changes are starting to prompt significant conversations in the business world about company speech, regulated disclosures, and First Amendment rights. For example, there is so far minimal research on how financial firms disseminate communications to investor news outlets via social media and how those communications are received. And while some view social media as an opportunity to further this kind of investor outreach, others have expressed concerns that disseminating communications in this manner could result in a company’s losing control over those communications entirely.

The viral nature of social media allows companies to connect more easily not just with investors but also with individuals who do not directly follow the company and are therefore far less likely to be informed about the company’s prior financial communications and the importance of any changes. This creates risk for a company’s investor communications via social media because of the potential to spread and reach uninformed individuals, which could in turn produce adverse consequences for the company when it comes to concerns about reliance and misleading information.

Corporate Use, Regulations, and Topics of Interest on Social Media 

With the rise of social media coverage of various societal issues, these apps have become a platform for news coverage, political movements, and social concerns and, for some generations, a platform that replaces traditional news media almost entirely. Specifically, when it comes to growing interest in ESG-related matters and sustainable business practices, social media serves as a powerful tool for communicating information. For example, the Spanish company Acciona was recently reported by the latest Epsilon Icarus Analytics Panel on ESG Sustainability as having the highest-resonating ESG content of any Spanish company’s social networks. Acciona demonstrates a company’s potential to lead and fundamentally shape digital communications on ESG-related topics. Its developing content strategy focuses on brand values: for Acciona, strong climate-change-based values, female leadership, diversity, and other cultural and societal changes. This illustrates the new age of social media as a business marketing necessity.

Consequently, this shift in the usage of social media and the way we treat corporate speech on these platforms has left room for emerging regulation. Commercial or corporate speech is generally permissible under constitutional free speech rights, so long as the corporation is not making false or misleading statements. Section 230 provides broad protection to internet content providers from accountability for information disseminated on their platforms. In most contexts, social media platforms will not be held accountable for the consequences of a user’s speech. For example, a recent lawsuit against TikTok and its parent company was dismissed after a young girl died participating in a trending challenge that went awry, because under § 230 the platform was immune from liability.

In essence, when it comes to ESG-related topics, the way a company handles its social media and the actual posts it puts out can greatly affect the company’s success and reputation, as ESG-focused perspectives often touch many aspects of the business’s operations. The type of communication, and the coverage of various issues, can impact a company’s performance in both the short and long term, with the capability to effectuate change in corporate environmental practices, governance, labor and employment standards, human resource management, and more.

With ESG trending, investors, shareholders, and regulators now face serious risk-management concerns. Companies must now address news concerning their social responsibilities more publicly and more frequently as ESG concerns continue to rise. The SEC mandates disclosure of public company activities, through Consumer Service Reports, in annual 10-K filings, along with ESG disclosures under a recent rule promulgation. These disclosures are designed to hold companies accountable and to improve their environmental, social, and economic performance relative to their stakeholders’ expectations.

Conclusion

In conclusion, social media platforms have created an entirely new mechanism by which corporate speech can be implicated. Companies should proceed cautiously when covering social, political, environmental, and related concerns, mindful of both their methods of information dissemination and the possible effects their posts may have on business performance and reputation overall.
