Destroying Defamation

The explosion of Fake News across social media sites is destroying a plaintiff’s ability to succeed in a defamation action. The recent proliferation of rushed journalism, online conspiracy theories, and the belief that most stories are, in fact, “Fake News” has created a desert of veracity. Widespread public skepticism about even the most mainstream social media reporting means plaintiffs struggle to convince jurors that third parties believed any reported statement to be true. Such proof is necessary for a plaintiff to establish the elements of defamation.

Fake News Today

Fake News is any journalistic story that knowingly and intentionally includes untrue factual statements. Today, many use “Fake News” as a noun in its own right. There is no shortage of examples of Fake News and its impact.

      • Pizzagate: During the 2016 Presidential Election, Edgar Maddison Welch, 28, read a story on Facebook claiming that Hillary Clinton was running a child trafficking ring out of the basement of a pizzeria. Welch, a self-described vigilante, shot open a locked door of the pizzeria with his AR-15.
      • A study by three MIT scholars found that false news stories spread faster on Twitter than true stories, with the former being 70% more likely to be retweeted than the latter.
      • During the defamation trial between Johnny Depp and Amber Heard, a considerable number of “Fake News” reports circulated across social media platforms, particularly TikTok, Twitter, and YouTube, attacking Ms. Heard at a disproportionately higher rate than Mr. Depp.

 

What is Defamation?

To establish defamation, a plaintiff must show the defendant published a false assertion of fact that damages the plaintiff’s reputation. Hyperbolic language or other indications that a statement was not meant to be taken seriously are not actionable. Today’s understanding that everything on the Internet is susceptible to manipulation destroys defamation.

Because the factuality of a statement is a question of law, a plaintiff must first convince a judge that the offending statement is fact and not opinion. Courts often find that Internet and social media statements are hyperbole or opinion. If a plaintiff succeeds in persuading the judge, the issue of whether the statement defamed the plaintiff heads to the jury. The jury must then determine whether the statement of fact harmed the plaintiff’s reputation or livelihood to the extent that it caused the plaintiff to incur damages. The prevalence of Fake News creates another layer of difficulty for the Internet plaintiff, who must convince the jury that third parties believed the statement to be true.

Defamation’s Slow and Steady Erosion

Since the 1960s, the judiciary has limited plaintiffs’ ability to succeed in defamation claims. The decisions in New York Times Co. v. Sullivan and Gertz v. Robert Welch, Inc. increased the difficulty for public figures, and those with limited public figure status, to succeed by requiring them to prove actual malice by the defendant, a standard higher than the mere negligence standard allowed for individuals who are not of community interest.

The rise of Internet use, particularly social media, presents plaintiffs with yet another hurdle. Plaintiffs can only succeed if the challenged statement is fact, not opinion. However, judges routinely find that statements made on the Internet are opinions, not facts. The combined effect of Supreme Court limitations on proof and the growing belief that social media posts are mostly opinion has limited the plaintiff’s ability to succeed in a defamation claim.

Destroying Defamation

If the Supreme Court and social media have eroded defamation, Fake News has destroyed it. Today, convincing a jury that a false statement purporting to be fact has defamed a plaintiff is difficult given the dual issues of society’s widespread mistrust of the media and the understanding that information on the Internet is generally opinion, not fact. Fake News sows confusion and makes it almost impossible for jurors to believe any statement has the credibility necessary to cause harm.

To be clear, in some instances Fake News is so intolerable that a jury will find for the plaintiffs. A Connecticut jury found conspiracy theorist Alex Jones liable for defamation based on his assertion that the government had faked the Sandy Hook shootings.

But plaintiffs are often unsuccessful even where the challenged language is intertwined with untruths. Fox News successfully defended itself against a lawsuit claiming that it had aired false and deceptive content about the coronavirus, even though its reporting was, in fact, untrue.

Similarly, a federal judge dismissed a defamation case against Fox News over Tucker Carlson’s report that the plaintiff had extorted then-President Donald Trump. In reaching that conclusion, the judge observed that Carlson’s comments were rhetorical hyperbole and that the reasonable viewer “arrive[s] with the appropriate amount of skepticism.” Reports of media success in defending against defamation claims further fuel media mistrust.

The current polarization caused by identity politics is furthering the tendency for Americans to mistrust the media. Sarah Palin announced that the goal of her recent defamation case against The New York Times was to reveal that the “lamestream media” publishes “fake news.”

If jurors believe that no reasonable person could credit a challenged statement as accurate, they cannot find that the allegedly defamatory statement caused harm. An essential element of defamation is that the defendant’s remarks damaged the plaintiff’s reputation. The large number of people who believe the news is fake, the media’s rush to publish, and external attacks on credible journalism have eroded society’s shared sense of truth. The potential for defamatory harm is minimal when every news story is questionable. Ultimately, the presence of Fake News is a blight on the tort of defamation and, like the credibility of present-day news organizations, will erode it to the point of irrelevance.

Is there any hope for a world without Fake News?

 

MODERN WARFARE OF COMMUNICATION

As someone who has been around, and currently lives with, a call-of-dutier (i.e., one who plays Call of Duty), if I heard @yoUrD0gzM1n3 scream “S*** THAT YOU F**** N****” over the microphone, I wouldn’t flinch. The vulgar and violent communication between video game players is not only normalized in today’s society but vigilantly evades regulation. The video game industry, once thought of as an innocent pastime or hobby, has since developed into a weapon for communication, and from the looks of it, it was my responsibility to hold @yoUrD0gzM1n3 accountable for his discriminatory remarks.

THE VIRTUAL SNIPER

The video game industry began on a somewhat yellow brick road. Beginning in the ’70s and well into the ’80s, launches such as Space Invaders, Pac-Man, Donkey Kong, and Flight marked the start of a new era: the gamer life.

But as video games grew in popularity and new technology emerged, the video game industry, too, was greeted with a dark upgrade. The creation, mass production, and incorporation of computers, consoles, and PC monitors into our day-to-day lives created a new opportunity for video game developers. Slowly, the first-person shooter (FPS), a genre of video games played from the point of view of a protagonist carrying a weapon, began to emerge.

Today’s FPS games are a general ode to one of the first series that pioneered the gaming industry down this gruesome path: Doom. The Doom franchise, developed by id Software, is a series of FPS video games. The Doom series was among the first to introduce “3D graphics, third-dimension spatiality, networked multiplayer gameplay, and support for player-created modifications” to the gaming industry.
Since then, the FPS genre has grown quickly, and more games and players have emerged, creating a gaming culture obsessed with virtually simulated violence. Today, certain games are so intertwined with violence and gore that their reputations have been built entirely upon these society-created gamer-pillars. Major gaming franchises such as League of Legends, Call of Duty, Counter-Strike, Dota 2, Overwatch, Ark, and Valorant have caused uproar since their creation due to the gross level of violence and harm they depict and encourage, and the hostile communities that support them, both through action and speech.

BEFORE I SHOOT, LET ME SAY SOMETHING REAL QUICK

The FPS and violent video game genre, however, wouldn’t have grown if not for the creation of “Voice over Internet Protocol” (“VoIP”). VoIP enables “Voice Chat” and other communicative functions that allow players to interact with others while playing. The infamous launch of Xbox Live on the original Xbox in 2002 marked a groundbreaking milestone in gaming history, allowing gamers to “chat with both friends and strangers, in and out of games, across multiple games,” all from the comfort of their couch.
But with the creation of VoIP and its mass incorporation into consoles and games came certain issues, as is typical of online communication tools and devices. The introduction of communicative tools in the gaming industry allowed users not only to communicate but also to harass one another. Today, essentially all video games include some sort of communication tool. Users of most video games can chat, talk, or communicate with symbols or gestures with other users while playing. Video games, however, are still socially considered games. To purchase them, you go to the video-game aisle of Target or Best Buy, or to a video-game store like GameStop.

But, despite being games, they are considered more akin to Shakespeare’s Romeo and Juliet, and are constitutionally protected as such.

@SCOTUS on ‘Live’

The turning point of whether video games deserved constitutional protection ultimately rested on whether video games were to be considered more like mechanical entertainment devices or, rather, mediums of expression.

The Supreme Court ultimately put the nail in the coffin and went with the latter.

 In Brown v. Entertainment Merchants Ass’n (2011), video games first received constitutional protection. In Brown, the Court invalidated a California law that prohibited the sale or rental of violent video games to minors without a parent present. The Court stated:

Like the protected books, plays, and movies that preceded them, video games communicate ideas — and even social messages — through many familiar literary devices … and through features distinctive to the medium. That suffices to confer First Amendment protection.

But what if the Supreme Court had gone…the other way? Had the Court viewed video games more like “pinball machines,” today’s video game world and culture would be unrecognizable. Before Brown, courts generally viewed video games as lacking the communicative, informative element required for free speech protection to kick in. Video games were seen as “mechanical entertainment devices” and “recreational pastimes” rather than tools to spread knowledge or information.

BLIND PROTECTION

Objectively, video games consist of overall rules that are essentially the same as those of other, non-online games and pastimes such as chess, baseball, and poker. The key distinction, however, is the ironclad First Amendment protection that has permitted, and continues to permit, societal problems of verbal racial discrimination and harassment to grow.

Video games, although a form of expression as products, include certain communicative elements that shouldn’t necessarily be protected as such. What about the expression within the game, between players? Should @PIgSl@y3r’s chat to @B100DpR1NC3$$ stating “f*** y**, your mom is a b****” be considered ‘the spreading of knowledge or information’?

Brown protects video games as a whole, but fails to address, or even allude to, the harmful effects of toxic communications between players made possible by the communicative tools within video games, particularly violent ones. All communications between players are therefore left to be monitored, or not, by game developers and software creators. Instead, the plight of violent and toxic communication and its impact on society is left in the hands of gamers themselves. It’s now up to @B100DpR1NC3$$ to bring justice for @PIgSl@y3r’s potty mouth by remembering to submit a complaint after he finishes slaying the dragon, never knowing whether the user was ever banned.

JUSTICE IN THE HANDS OF @B100DpR1NC3$$

To combat overall video game toxicity (generally encompassing all in-game and game-related harassment, hate speech, discrimination, bullying, sexualization, incitement of violence, and like conduct), developers have met calls for a solution with mediocre monitoring and reporting systems. Creators across the gaming industry largely rely on in-game player reporting systems and artificial intelligence-backed automated filtering systems to find and detect abusive players. Community standards and guidelines are posted and updated, gamer-submitted reports are reviewed, and the automated systems continue to filter. Developers have also had to curtail their video games overseas in order to abide by international censorship rules. In 2009, Russia took issue with the portrayal of Russians as terrorists in Activision’s Call of Duty: Modern Warfare 2, forcing Activision to make edits in certain versions of the game and banning the console version outright.

Censorship policies, however, are ultimately upheld by users and players themselves. In order to monitor speech, developers have created a variety of reporting mechanisms through which users can report other users for harassment, discrimination, and other forms of harmful speech. Players not only have the responsibility of beating the next level and unlocking the next perk; in order to play the game, Activision says they must help out, too.

A CLOSER LOOK: ACTIVISION BLIZZARD

Activision, the first third-party game developer (developing software only, not physical consoles), emerged in 1979 and, now part of Activision Blizzard, Inc., has since maintained a core presence in the gaming realm. Its world-renowned titles include Candy Crush, the Call of Duty series, and World of Warcraft (oh my!). But along with the developer’s positive impact on the industry also came the bad. The Call of Duty series, the most violent of them all, has notoriously been scrutinized for its incessant depiction of violence and racism, as well as vulgar, hostile gamer-to-gamer communication. Compare Atari’s 1980 Battlezone with Activision’s Call of Duty series: the overall deadliness and gore depicted within the game has greatly expanded, as has the harassment, hate speech, violence, and discrimination both portrayed and encouraged.

Most recently, Activision’s latest update to its Code of Conduct for the Call of Duty series outlines its efforts in “combat[ting] toxic behavior.” Before the latest release in the series, Call of Duty: Warzone 2.0, Activision publicly reiterated its commitment to “delivering a positive gameplay experience.” The three key elements of the new code are: treat everyone with respect, compete with integrity, and stay vigilant.

The developer introduced “automated filtering systems” that monitor and review both text chat and account names, and announced that, as a result, 500,000 accounts have been banned and 300,000 more have been renamed. The Call of Duty team stated that the implementation of such filtering systems resulted in seeing “more than [a] 55% drop in the number of offensive username and clan tags reports from our players, year-over-year, in the month of August alone in Call of Duty: Warzone.”
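For readers curious what “automated filtering” of usernames might even look like under the hood, here is a minimal, purely illustrative sketch in Python. It is not Activision’s actual system; the blocklist entries, the leetspeak map, and the function names are all hypothetical, and a production filter would rely on far larger curated lists, fuzzy matching, and trained classifiers rather than a simple substring check.

```python
# Minimal, hypothetical sketch of a username filter; not Activision's code.

# Hypothetical blocklist stand-ins; a real system would use curated slur
# lists, fuzzy matching, and machine-learning classifiers.
BLOCKED_TERMS = ["slurone", "slurtwo", "hatephrase"]

# Undo common character substitutions so names like "h4tephrase_99"
# can't dodge a plain substring match.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "$": "s", "@": "a"})

def normalize(name: str) -> str:
    """Lowercase a username and reverse obvious leetspeak substitutions."""
    return name.lower().translate(LEET_MAP)

def flag_username(name: str) -> bool:
    """Return True if the normalized username contains a blocked term."""
    cleaned = normalize(name)
    return any(term in cleaned for term in BLOCKED_TERMS)

if __name__ == "__main__":
    # Flagged names go to a human review queue rather than an automatic ban.
    accounts = ["FriendlyGamer", "h4tephrase_99"]
    print([a for a in accounts if flag_username(a)])  # ['h4tephrase_99']
```

A filter along these lines can only catch what it has been told to look for, which is one reason the reported numbers still depend so heavily on player-submitted reports.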

The anti-toxicity upgrade includes new in-game reporting features, including an optional “dialog box” that allows players to say more about the situation, as well as more tools to help report offensive or inappropriate behavior. Players found by the moderation team to have engaged in offensive voice chat are also muted from all in-game voice chat. Activision explained:

“We know addressing toxicity requires a 24/7 sustained effort. Since our last Call of Duty® community update, our enforcement and anti-toxicity teams have continued to progress, including scrubbing our global player database to remove toxic users.”

 @B100DpR1NC3$$ does it all: Virtually beheading dragons and monitoring speech. 

Although Activision’s efforts to reduce overall gamer toxicity appear to have been successful, the true credit should go to the players who reported misusers. Activision has repeatedly credited its so-called “enforcement and anti-toxicity teams,” glossing over the fact that these teams aren’t exactly team players. The teams instead rely on the legwork of in-game reports by actively playing gamers, and on artificial intelligence. As it turns out, the anti-toxicity team doesn’t even play the game. The team, as acknowledged by Activision, merely reviews reports that have already been submitted by players.

Rather than actively dropping in on live games to monitor the voice and chat functions as they are being used, the company requires players to do the monitoring for it. Only after players take the time to independently submit a report will the “enforcement and anti-toxicity teams” review it. Effectively, instead of a true monitoring system, inappropriate and non-conforming gamers will only ever be banned if someone else cares enough to report them.

LOSING THE MEANING OF VIDEO ‘GAME’

So, going back to video games being protected as a medium of expression because they communicate ideas and social messages… How can video games be considered a “medium of expression” that “communicate[s] ideas,” yet evade any real monitoring of the expression within? If video games are to be placed in the same boat as “books, plays, and movies” deserving of the protection of freedom of speech and expression, then it’s time for developers and software companies to do the legwork.

Communications between video game players are protected by the First Amendment, the same as posts by users on Facebook. Yet video games are not socially thought of as a ‘way to communicate with someone,’ the way Facebook is, but rather as games played for entertainment. The reality, however, is that video games are no longer merely games. With the rise of technology and the incorporation of communication tools, video games are now a platform for toxic communication. Developers lack the pressure or incentive to actually monitor what players say to one another, and they evade further attention by publishing standards and making mediocre efforts. Although Activision states that the new system “allows our moderation teams to restrict player features in response to confirmed player reports,” it’s up to players to start the process by taking the time to report in the first place. Only after a report is confirmed will the anti-toxicity teams get on their feet.

I Get High With a Little Help From My (Online) Friends: The Role of Social Media in the Marketing of Illegal and Gray-Market Drugs

Opening

For better or worse, social media has changed our society forever. We all see and experience its impact in our daily lives, no matter the national, cultural, or social context. Nowhere is this more true than in the realm of commerce. Social media has proven to be an incredibly effective tool for the creation and maintenance of business, arguably making massive inroads into the world of marketing and sales. Above all, those with even a drop of entrepreneurial spirit no longer need to rely solely upon external investment and institutional gatekeepers to get their product out to the masses; they only need an internet connection, a device, and a willingness to build a social media presence. However, social media marketing has also enabled the growth of unethical and illegal business, including the world of illicit and gray-market drug sales.

After 40 years of the War on Drugs, many experts, commentators, and members of law enforcement argue that the illegal drug trade is alive and well. We need only to look at the evidence: drug overdoses in the US are rising, organized crime has more power than ever, and transnational shipments are becoming more common. Furthermore, the drugs being traded are becoming stronger and more dangerous. Many countries are therefore forced to search for alternative legal solutions to this crisis. For example, a growing number of jurisdictions, domestically and internationally, are (rightfully) decriminalizing or even legalizing the production and sale of cannabis. Some are pursuing the decriminalization of possession for all drugs in an effort to combat the resulting health, economic, and social equity crises from criminalization policies.

Regardless of what you think of these various policies, we cannot ignore how social media has impacted and accelerated the sale of illegal and gray-market drugs. Therefore, it behooves us to understand how dealers and companies are marketing on social media, what law is relevant in the US, and what social media companies and policy makers are doing to deal with these challenges if we are to even begin to search for solutions for this complex problem.

Examples of Marketing

The most common form of illegal drug marketing on social media is achieved by the use of timed stories functions on major image-oriented sites (e.g. Snapchat, Instagram) or by quickly posting and manually removing the advertisement (e.g. Facebook, Twitter). Essentially, the dealer posts the advertisement, sometimes showing off the product, and lists other relevant information. Once the time period expires, the post is removed and dealers feel as if they have protected their anonymity. On top of this, dealers may use emojis, other text symbols, or slang as a code to communicate the nature or type of product. These methods are used for all kinds of illegal drugs, from fentanyl to MDMA to cocaine. After that, customers usually reach out to the dealer directly. Some use the direct messaging systems of the relevant social media services. Others reach out to the dealer on a wide range of messaging applications, especially those that market privacy and security (WhatsApp, Signal, Telegram, etc).

For gray-market drug sales, we must turn to the major example of THC isomer products. Δ-9-Tetrahydrocannabinol (THC) is the main psychoactive substance found in cannabis. It is a Schedule I controlled substance under US federal law and is banned in about half of the states. However, creative chemists and growers in cannabis-legal states have engineered a wide range of alternative isomer products, meaning products that are chemically different from the traditional THC understood by the law. While many of these isomers are naturally occurring, their deliberate concentration essentially creates the same desired effect for users as traditional cannabis. Due to legal confusion and inaction, and alongside real-world advertising and product availability, social media companies have shown that they are quite comfortable running advertisements for such products from formal companies and letting individuals post about them.

The Law

When it comes to illegal drug dealing, the law is, as one would hope, unfavorable toward the social media companies. Most importantly, Section 230 of the Communications Decency Act (Title V of the Telecommunications Act of 1996) does not help them at all. Specifically, 47 USC §§ 230(e)(1) and (3) clearly delineate that federal criminal law and state laws (including state criminal law) are not impacted in any way when it comes to enforcement against them. Therefore, the § 230(c) Good Samaritan provisions, which protect social media companies from legal liability for the posts of their users so long as they actively remove them, are not relevant.

The main law of concern for these companies is the Controlled Substances Act of 1970, which, along with various amendments and international treaties, regulates the production and sale of illegal drugs at the federal level. The most relevant part is 21 USC § 843(c), which makes it illegal for anyone to advertise illegal drugs, including on the internet. While the liability balance between the user and the social media service is unclear in both statutory and case law, the lack of Section 230 protection makes these companies uneasy.

For the issue of gray-market THC isomers, the main problem is a loophole in federal law created by the Agriculture Improvement Act of 2018. This omnibus bill, among other things, descheduled low-Δ-9-Tetrahydrocannabinol cannabis, also known as hemp, from the Controlled Substances Act. While the goal was to reintroduce hemp into farming as a useful industrial crop, the vagueness and breadth of the bill accidentally legalized Δ-8-Tetrahydrocannabinol, another psychoactive THC isomer that can be found in hemp, along with a wide range of other isomers. This fluke arguably opened the floodgates for these products, in the form of vapes, edibles, tincture drops, and smokeable flower. At the federal level, the DEA has failed to address the issue under the Federal Analogue Act. Specifically, 21 USC § 802(32) defines what analogues, including isomers, are and how they can be regulated under the Controlled Substances Act. States are trying to keep up with all the isomers but are clearly fighting a losing battle; just go to your local gas station or convenience store and you will find a wide array of these items available.

Role of Social Media Companies and Policy Makers

Because the law surrounding illegal drug advertising in the US is quite underdeveloped and scattered, many social media companies have attempted to moderate such content of their own accord. The major platforms all have policies that ban the sale, display, or solicitation of illegal drugs in one form or another (Facebook/Meta, Instagram, Twitter, TikTok, Snapchat). Nevertheless, this self-regulation has arguably failed.

However, the companies are not the only ones who share the blame for this problem; Congress needs to act by passing new statutes that force the companies to regulate and report the marketing of illegal drugs. Surprisingly, a handful of bills have been proposed to alleviate this legal quandary. Senator Marshall’s “Cooper Davis Act” (S.4858) aims to amend the Controlled Substances Act by obliging all social media companies to report any attempt to market or sell illegal drugs to the DEA within a certain time frame. This would include all user data, history, and anything else deemed relevant by investigators. Representative Wasserman Schultz is currently drafting the “Let Parents Choose Protection Act” (aka Sammy’s Law), which would force social media companies to allow parents to track the social media activity of their kids, including their interaction with drug dealers or posts about illegal drugs. These bills, among many others, raise significant and obvious concerns about privacy and free speech rights, concerns that need to be taken seriously and addressed in any such bill going forward.

On the issue of the gray market for THC isomers, social media companies and Congress must also act. While I am an advocate for the federal legalization of cannabis, allowing an unregulated market to exist is quite reckless. On top of the fact that the effects of the various isomers are not well known and not regulated by the FDA, their advertising, in person and online, as a cure-all snake oil is unethical and unjustifiable.  All of the major social media platforms have advertiser and business policies against unethical practices such as false advertising but fail to use them. Congress, on the other hand, has not introduced any bills in this specific area. Likewise, state lawmakers are not exempt from acting here. They need to pursue policies to regulate this gray-market in their jurisdictions to fill in the shortcomings of Congress, as New York and Kentucky are attempting to do.

Overall, the impact of social media marketing at-large must be taken seriously by the federal and state governments. While it brings about some good in spurring business, the current paradigm enables bad actors to sell seriously dangerous illegal drugs and irresponsible businessmen to push unregulated, untested, and poorly understood gray-market drugs with little to no serious oversight. Can we, as a society, change for the better? Or will we be beholden to an unsustainable status quo of techno-anarchy that will cause unnecessary and preventable harm and suffering? Only time will tell.

Social Media Has Gone Wild

Increasing technological advances and consumer demands have taken shopping to a new level. You can now buy clothes, food, and household items from the comfort of your couch in a few clicks: add to cart, pay, ship, and confirm. No longer are you limited to products sold in nearby stores; shipping makes it possible to obtain items internationally. Even social media platforms have shopping features for users, such as Instagram Shopping, Facebook Marketplace, and WhatsApp. Despite its convenience, online shopping has also created an illegal marketplace for wildlife species and products.

Most Trafficked Animal: the Pangolin

Wildlife trafficking is the illegal trading or sale of wildlife species and their products. Elephant ivory, rhinoceros horns, turtle shells, pangolin scales, tiger furs, and shark fins are a few examples of highly sought after wildlife animal products. As social media platforms expand, so does wildlife trafficking.

Wildlife Trafficking Exists on Social Media?

Social media platforms make it easier for people to connect with others internationally. These platforms are great for staying in contact with distant aunts and uncles, but they also create another avenue for criminals and traffickers to communicate. They provide a way to remain anonymous without having to meet in person, which makes it harder for law enforcement to identify a user’s true identity. Even so, can social media platforms be held responsible for making it easier for criminals to commit wildlife trafficking crimes?

Thanks to Section 230 of the Communications Decency Act, the answer is most likely: no.

Section 230 provides broad immunity to websites for content that third-party users post on the website. Even when a user posts illegal content, the website generally cannot be held liable for it. However, there are certain exceptions where websites have no immunity, including for human and sex trafficking. Although these carve-outs are fairly new, it is clear that there is an interest in protecting people vulnerable to abuse.

So why don’t we apply the same logic to animals? Animals are also a vulnerable population. Many species are no match for guns, weapons, traps, and human encroachment on their natural habitats. Like children, animals may not have the ability to understand what trafficking is or the physical strength to fight back. Social media platforms, like Facebook, attempt to combat the online wildlife trade, but their efforts continue to fall short.

How is Social Media Fighting Back?

 

In 2018, the World Wildlife Fund and 21 tech companies created the Coalition to End Wildlife Trafficking Online. The goal was to reduce illegal trade by 80% by 2020. While it is difficult to measure whether this goal is achievable, some social media platforms have created new policies to help meet this goal.

“We’re delighted to join the coalition to end wildlife trafficking online today. TikTok is a space for creative expression and content promoting wildlife trafficking is strictly prohibited. We look forward to partnering with the coalition and its members as we work together to share intelligence and best-practices to help protect endangered species.”

Luc Adenot, Global Policy Lead, Illegal Activities & Regulated Goods, TikTok

In 2019, Facebook banned the sale of animals altogether on its platform. But this did not stop users. A 2020 report showed a variety of illegal wildlife was for sale on Facebook. This clearly shows the new policies were ineffective. Furthermore, the report stated:

“29% of pages containing illegal wildlife for sale were found through the ‘Related Pages’ feature.”

This suggests that Facebook’s algorithm purposefully connects users to pages and similar content based on a user’s interests. The algorithm gives traffickers every reason to rely and depend on wildlife trafficking content. They will continue to use social media platforms because the platform does half of the work for them:

      • Facilitating communication
      • Connecting users to potential buyers
      • Connecting users to other sellers
      • Discovering online chat groups
      • Discovering online community pages

This fails to reduce the reach of wildlife trafficking. Instead, it accelerates the visibility of this type of content to other users. Do Facebook’s algorithms go beyond Section 230 immunity?

Under these circumstances, Facebook maintains immunity. In Gonzalez v. Google LLC, the court explains how websites are not liable for user content when the website employs content-neutral algorithms. This means that a website did nothing more than program an algorithm to present similar content to a user’s interest. The website did not offer direct encouragement to publish illegal content, nor did it treat the content differently from other user content.
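To make the court’s “content-neutral” idea concrete, here is a minimal, hypothetical sketch in Python of a “Related Pages”-style recommender. It is not Facebook’s algorithm; the data and function names are invented. The point is simply that the ranking rule scores every page by the same interest-overlap measure and never inspects whether a listing is lawful, which is the kind of neutrality the Gonzalez court described.

```python
from collections import Counter

def related_pages(user_history, candidate_pages, top_n=3):
    """Rank candidate pages purely by tag overlap with the user's history.

    'Content-neutral' here means every page is scored by the same
    interest-overlap rule; no category of content gets special treatment.
    """
    interests = Counter(tag for page in user_history for tag in page["tags"])
    scored = [
        (sum(interests[tag] for tag in page["tags"]), page["name"])
        for page in candidate_pages
    ]
    return [name for _, name in sorted(scored, reverse=True)[:top_n]]

# Hypothetical data: the recommender surfaces whichever pages best match
# prior interests, with no awareness of what is actually being sold.
history = [{"name": "Antique Carvings Club", "tags": ["carvings", "antiques"]}]
candidates = [
    {"name": "Rare Figurines Marketplace", "tags": ["antiques", "decor"]},
    {"name": "Reef Aquarium Hobbyists", "tags": ["coral", "fish"]},
]
print(related_pages(history, candidates))
```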

What about when a website profits from illegal posts? Facebook receives a 5% selling fee for each shipment sold by a user. Since illegal wildlife products are rare, these transactions are highly profitable. A pound of ivory can be worth up to $3,300. If a user sells five pounds of ivory from endangered elephants on Facebook, the platform would profit $825 from one transaction. The Facebook Marketplace algorithm is similar to the algorithm based on user interest and engagement. Here, Facebook’s algorithm can push illegal wildlife products to a user who has searched for similar products. Yet, if illegal products are constantly pushed and successful sales are made, Facebook then benefits and makes a profit off these transactions. Does this mean that Section 230 will continue to protect Facebook when it profits from illegal activity?
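For what it is worth, the arithmetic behind that $825 figure is easy to check; here is a quick sketch using the per-pound price and 5% fee cited above (the five-pound listing itself is hypothetical):

```python
PRICE_PER_POUND = 3_300   # upper-end dollar value cited for a pound of ivory
POUNDS_SOLD = 5           # hypothetical listing size from the example above
FEE_RATE = 0.05           # Facebook's 5% selling fee

sale_total = PRICE_PER_POUND * POUNDS_SOLD   # $16,500
platform_fee = sale_total * FEE_RATE         # $825
print(f"Sale total: ${sale_total:,}; platform fee: ${platform_fee:,.0f}")
```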

Evading Detection

Even with Facebook’s prohibited sales policy, users get creative to avoid detection. A simple search of “animals for sale” led me to a public Facebook group. Within 30 seconds of scrolling, I found a user selling live coral, and another user selling an aquarium system with live coral, and live fish. The former reads: Leather $50. However, the picture shows a live coral in a fish tank. Leather identifies the type of coral it is, without saying it’s coral. Even if this was fake coral, a simple Google search shows a piece of fake coral is worth less than $50. If Facebook is failing to prevent users from selling live coral and live fish, it is most likely failing to prevent online wildlife trafficking on its platform.

Another common method of evading detection is for users to post a vague description or a photo of an item along with the words “pm me” or “dm me,” abbreviations for “private message me” or “direct message me.” It is a quick way to direct interested users to reach out personally and discuss details in a private chat, outside of the prying public eye. Sometimes a user will offer alternative contact methods, such as a personal phone number or an email address, which moves the interaction off the platform or onto a new one.

Due to high profitability, the stakes are lower when transactions are conducted anonymously online. Social media platforms are great for concealing a user’s identity. Users can adopt fake names to maintain anonymity behind their computer and phone screens, and there are no real consequences for doing so when the user is unknown. Nor is there any type of identity verification to discover a user’s true identity. Even if a user is banned, the person can create a new account under a different alias. Some users are criminals tied to organized crime syndicates or terrorist groups. Many operate outside of the United States, which makes it difficult to locate them. Thus, social media platforms give criminals every incentive to hide behind various aliases with little to lose.

Why Are Wildlife Products Popular?

Wildlife products are in high demand for human benefit and use, and people value them for a variety of reasons.

Do We Go After the Traffickers or the Social Media Platform?

Taking down every single wildlife trafficker and every user who facilitates these transactions would be the perfect solution to end wildlife trafficking. Realistically, it’s too difficult to identify these users due to online anonymity and geographical limitations. On the other hand, social media platforms continue to tolerate these illegal activities.

Here, it is clear that Facebook is not doing enough to stop wildlife trafficking. With each sale made on Facebook, Facebook receives a percentage. Section 230 should not protect Facebook when it reaps the benefits of illegal transactions. That goes a step too far and should open Facebook up to a new market: Section 230 liability.

Should Facebook maintain Section 230 immunity when it receives proceeds from illegal wildlife trafficking transactions? Where do we draw the line?

Mental Health Advertisements on #TikTok

The stigma surrounding mental illness has persisted since the mid-twentieth century. This stigma is one of the many reasons why 60% of adults with a mental illness go untreated. The huge treatment disparity demonstrates a significant need to spread awareness and make treatment more readily available. Ironically, social media, which has been ridiculed for its negative impact on the mental health of its users, has become an important tool for spreading awareness about and de-stigmatizing mental health treatment.

The content shared on social media is a combination of users sharing their experiences with a mental health condition and companies that treat mental health conditions using advertisements to attract potential patients. At first glance, this appears to be a very powerful way to use social media to bridge treatment gaps. However, it raises concerns that vulnerable people will see this content, self-diagnose with a condition they might not have, and undergo unnecessary, and potentially dangerous, treatment. Additionally, they might fail to undergo needed treatment because the misinformation they were subjected to leads them to overlook the true cause of their symptoms.

Attention Deficit Hyperactivity Disorder (“ADHD”) is an example of a condition that social media has jumped on. #ADHD has 14.5 billion views on TikTok and 3 million posts on Instagram. Between 2007 and 2016, diagnoses of ADHD increased by 123%. Further, prescriptions for stimulants, which treat ADHD, have increased 16% since the pandemic. Many experts attribute this, in large part, to the use of social media in spreading awareness about ADHD and to the rise of telehealth companies that have emerged to treat ADHD during the pandemic. These companies have jumped on viral trends with targeted advertisements that oversimplify what ADHD actually looks like and then offer treatment to those who click on the advertisement.

The availability of and reliance on telemedicine grew rapidly during the COVID-19 pandemic, and many restrictions regarding telehealth were suspended. This created an opening in the healthcare industry for new companies. ‘Done’ and ‘Cerebral’ are two examples of companies that emerged during the pandemic to treat ADHD. These companies attract, accept, and treat patients through a very simple procedure: (1) social media advertisement, (2) short online questionnaire, (3) virtual visit, and (4) prescription.

Both Done and Cerebral have used social media platforms like Instagram and TikTok to lure potential patients to their services. The advertisements vary, but they all highlight how easy and affordable treatment is by emphasizing convenience, accessibility, and low cost. Accessing the care offered is as simple as swiping up on an advertisement that appears as users scroll through the platform. These targeted ads depict images of people seeking treatment, taking medication, and having their symptoms go away. Further, these companies use viral trends and memes to increase the effectiveness of the advertisements, which typically oversimplify complex ADHD symptoms and mislead consumers.

(Image: “ADHD content is popular on TikTok, as America faces an Adderall shortage,” Vox)

While these companies are increasing healthcare access for many patients due to their low cost and virtual platforms, this speedy version of healthcare blurs the line between offering treatment to patients and selling prescriptions to customers through social media. Further, medical professionals are concerned with how these companies market addictive stimulants to young users and yet remain largely unregulated due to outdated guidelines on advertisements for medical services.

The advertising model used by these telemedicine companies emphasizes a need to modify existing laws to ensure that these advertisements are subjected to the FDA’s unique oversight to protect consumers. These companies target young consumers and other vulnerable people, encouraging them to self-diagnose based on misleading information about the criteria for a diagnosis. There are eighteen symptoms of ADHD, and the average person meets at least one or two of them, which is exactly what these ads emphasize.

Advertisements in the medical sphere are regulated by either the FDA or the FTC. The FDA has unique oversight to regulate the marketing of prescription drugs by manufacturers and drug distributors in what is known as direct-to-consumer (“DTC”) drug advertising. Critics of prescription drug advertisements highlight the negative impact that DTC advertising has on the patient-provider relationship because patients go to providers expecting or requesting particular prescription treatment. To minimize these risks, the FDA requires that a prescription drug advertisement be truthful, present a fair balance of the risks and benefits associated with the medication, and state an approved use of the medication. However, if the advertisement does not mention a particular drug or treatment, it eludes the FDA’s oversight.

Thus, the marketing of medical services, which does not market prescription drugs, is regulated only by the Federal Trade Commission (“FTC”) in the same manner as any other consumer good, which just means that the advertisement must not be false or misleading.

The advertisements these telehealth companies are putting forward demonstrate that it is time for the FDA to step in, because the companies are combining medical services and prescription drug treatment. They use predatory tactics to lure consumers into believing they have ADHD and then provide direct treatment on a monthly subscription basis.

The potential for consumer harm is clear, and many experts point to the similarities between the opioid epidemic and stimulant drugs. However, the FDA has not yet made any changes to how it regulates advertising in light of social media. The laws regarding DTC drug advertising were prompted in part by consumers’ practice of self-diagnosis and self-medication and by the false therapeutic claims made by manufacturers. The telemedicine model these companies use raises these exact concerns by targeting consumers, convincing them they have a specific condition, and then offering the medication to treat it after a quick virtual visit. Instead of patients going to their doctors requesting a specific prescription that may be inappropriate for their medical needs, patients are going to telehealth providers that only prescribe a particular prescription that may be just as inappropriate for their medical needs.

Through the use of social media, diagnosis and treatment with addictive prescription drugs can be initiated by an interactive advertisement in a manner that was not possible when the FDA drew the distinction that these types of advertisements would not be subject to its oversight. Thus, to protect consumers, it is vital that telemedicine advertisements be subjected to more intrusive monitoring than ordinary consumer goods. This will require the companies making these advertisements to properly address the complex symptoms associated with conditions like ADHD and to give fair balance to the harms of treatment.

According to the Pew Research Center, 69% of adults and 81% of teens in the United States use social media. Further, about 48% of Americans get their information regularly from social media. We often talk about misinformation in politics and news stories, but it’s permeating every corner of the internet. As these numbers continue to grow, it’s crucial to develop new methods to protect consumers, and regulating these advertisements is only the first step.

Social Media Got Me Fired!

Have you ever wondered whether, in the age of social media, employers look you up and whether that affects your chances of getting the job? Well, continue reading to find out!

Employers look at your social media profiles to find out anything they can about you before interviewing or hiring you. A 2022 Harris Poll found that 70% of employers surveyed would screen potential employees’ social media profiles before offering them a position. A CareerBuilder poll found that 54% of employers have ruled out a candidate after discovering something they disagreed with on the candidate’s social media profile.

Pre-employment background checks now go beyond criminal and public records and employment history. If hiring managers can’t find you online, there is an increased chance they will not move forward with your application. In fact, 21% of employers polled said they are not likely to consider a candidate who does not have a social media presence.

However, don’t fret; social media can also be the reason you get your next job. The Aberdeen Group found that 73% of job seekers between 18 and 34 obtained their last job through social media. People seeking employment have nearly unlimited places to find jobs, on platforms such as LinkedIn, Stack Overflow, GitHub, Facebook, TikTok, and other websites. A CareerBuilder survey found that 44% of hiring managers and employers have discovered content on a candidate’s social media profile that caused them to hire the candidate. Due to this change in hiring and recruitment, employers have to engage the newer generation entering the job force through competitive social media advertisements. Job seekers and employers alike use their social media profiles for networking, sourcing, and building recognition.

Is social media a double-edged sword? If you have social media, it can lessen your chances of being employed. At the same time, many jobs are posted on social media, which can be the reason you get hired. You can use your social media, e.g., LinkedIn, to promote yourself by staying active on the platform at least once a week. Employers are interested in how you use your social media.

Regarding Facebook, Instagram, and TikTok, keep them neutral and clean. Keep your accounts private, and before you post a picture, ask yourself if you are comfortable with the CEO or your boss seeing it. If the answer is yes, go ahead and post; if you are unsure, the best bet is not to post it.

We walk a fine line in the great age of social media; the many dos and don’ts vary depending on your job and field. Someone working for Google would have a different social media presence and posts than someone working for the Prosecutor’s Office. You can always turn to your employee handbook once you are hired, or ask HR, to be on the safe side.

Harry Kazakian stated in an article for Forbes Magazine that he screens potential employees’ social media to eliminate potential risks. He does this to ensure employee harmony in the workplace. Specifically, Kazakian is looking to avoid candidates who post: constant negative content, patterns of overt anger, suggestions of violence, associations with questionable characters, signs of crass behavior, or even too many political posts. 

Legally, employers may use social media to recruit candidates by advertising job openings or performing background checks to confirm that a job candidate or applicant is qualified. This allows employers to monitor your website activity, e-mail account, and instant messages. This right, however, cannot be used as a means of discrimination. 

Half the states in the US have enacted laws that bar employers from demanding access to employees’ social media accounts. California prohibits employers from asking for the social media passwords of their current or prospective employees. Maryland, Virginia, and Illinois offer similar protections to job seekers, so they do not have to divulge their social media passwords or provide account access. California, Illinois, New Jersey, and New York, among other states, have also enacted laws prohibiting employers from discriminating based on an employee’s lawful off-duty conduct.

Federal law prohibits employers from discriminating against a prospective or current employee based on information on the employee’s social media relating to race, color, national origin, gender, age, disability, and immigration or citizenship status. Employees should be conscious of what information they display on social media websites. Note, however, that these federal protections apply only to employers of a specified size. Title VII, the ADA, and GINA apply to private employers, educational institutions, and state and local governments with 15 or more employees. The ADEA applies to employers with 20 or more employees.

California, Colorado, Connecticut, Illinois, Minnesota, Nevada, New York, North Dakota, and Tennessee all have laws that prohibit employers from firing an employee for engaging in lawful activity off the employer’s premises during nonworking hours, even if that activity is unwelcome, objectionable, or unacceptable to the employer, so long as it does not directly conflict with the employer’s essential business-related interests. However, courts in these states will weigh the employee protections against an employer’s business interests. If a court rules that the employer’s interests outweigh the employee’s privacy concerns, the employer is exempt from the law. Be aware that some laws provide explicit exemptions for employers.

Legal risks under Title VII of the Civil Rights Act, the Age Discrimination in Employment Act, the Equal Pay Act, Title II of the Genetic Information Nondiscrimination Act, and the Equal Employment Opportunity Commission’s enforcement authority can arise when employers use social media to screen a job applicant. Section 8(a)(3) of the National Labor Relations Act prohibits discrimination against applicants based on union affiliation or support; therefore, using social media to screen out applicants on this basis may lead to an unfair labor practice charge against the company.

A critical case decided in 2016, Hardin v. Dadlani, involved a hiring manager who had previously preferred white female employees. The hiring manager instructed an employee to look up an applicant on Facebook and invite her for an interview “if she looks good.” The court ruled that this statement could reasonably be construed to refer to her race, which can establish discriminatory animus. An employee may prove discriminatory animus by either direct or circumstantial evidence. Direct evidence is evidence that, if true, proves the fact of discriminatory animus without inference or presumption. A single remark can show discriminatory animus.

Be an intelligent job candidate by knowing your rights. Companies that use third-party social media background checks must comply with disclosure and authorization requirements, since the third-party screener is considered a consumer reporting agency under the Fair Credit Reporting Act. Therefore, an employer must notify the prospective or current employee that it wants to acquire a consumer report for employment purposes and must obtain the employee’s written consent.

Happy job hunting, and think before you post!

Is it HIGH TIME we allow Cannabis Content on Social Media?

 


The Cannabis Industry is Growing like a Weed

Social media provides a relationship between consumers and their favorite brands. Just about every company has a social media presence to advertise its products and grow its brand. Large companies command the advertising market, but smaller companies and one-person startups have their place too. The opportunity to expand your brand using social media is available to just about everyone. Except the cannabis industry. With the developing struggle between social media companies and the politics of cannabis comes an onslaught of problems facing the modern cannabis market. With recreational marijuana use legal in 21 states and Washington, D.C., and medical marijuana legal in 38 states, it may be time for this community to join the social media metaverse.

We now know that algorithms determine how many followers on a platform see a business’s content, whether or not the content is permitted, and whether the post or the user should be deleted. The legal cannabis industry has found itself in a struggle, much like legislators’, with social media giants (like Facebook, Twitter, and Instagram) for increased transparency about their internal processes for filtering information, banning users, and moderating their platforms. Mainstream cannabis businesses have been prevented from making their presence known on social media in the past, and legitimate businesses are still being placed in a box with illicit drug users and prevented from advertising on public social media sites. The legal cannabis industry is expected to be worth over $60 billion by 2024, and support for federal legalization is at an all-time high (68%). Now more than ever, brands are fighting for higher visibility amongst cannabis consumers.

Recent Legislation Could Open the Door for Cannabis

The question remains whether legal cannabis businesses have a place in the ever-changing landscape of the social media metaverse. Marijuana is currently a Schedule I drug under the Controlled Substances Act of 1970. This categorization means that it has no currently accepted medical use and has a high potential for abuse. While that definition may have been acceptable when cannabis was placed on the DEA’s list back in 1971, evidence has since been presented in opposition to that decision. Historians note that overt racism, combined with New Deal reforms and bureaucratic self-interest, is often blamed for the first round of federal cannabis prohibition under the Marihuana Tax Act of 1937, which restricted possession to those who paid a steep tax for a limited set of medical and industrial applications. The legitimacy that cannabis businesses have gained over the past few decades through individual state legalization (both medical and recreational) is at the center of the debate over whether they should have the same opportunity to market themselves as any other business. Legislation like the MORE Act (Marijuana Opportunity Reinvestment and Expungement Act), which was passed by the House of Representatives, gives companies some hope that they can one day be seen as legitimate businesses. If passed into law, marijuana would be lowered on or removed from the schedule, which would blow the hinges off the cannabis industry; legitimate businesses in states that have legalized its use are patiently waiting in the wings for this moment.

States like New York have made great strides in passing legislation to legalize marijuana the “right” way and legitimize business, while simultaneously separating themselves from the illegal and dangerous drug trade that has parasitically attached itself to this movement. The Marihuana Regulation and Taxation Act (MRTA) establishes a new framework for the production and sale of cannabis, creates a new adult-use cannabis program, and expands the existing medical cannabis and cannabinoid (CBD) hemp programs. MRTA also established the Office of Cannabis Management (OCM), the governing body for cannabis reform and regulation, particularly for emerging businesses that wish to establish a presence in New York. The OCM oversees the licensure, cultivation, production, distribution, sale, and taxation of medical, adult-use, and cannabinoid hemp within New York State. This sort of regulatory body and structure is becoming commonplace in a world once deemed a “wild west” of regulatory abandonment and lawlessness.

 

But, What of the Children?

In light of the regulation slowly surrounding cannabis businesses, will the rapidly growing social media landscape have to concede to the industry's demands and recognize its presence? Even with regulations in place, cannabis exposure remains a concern for many when it comes to the more impressionable members of the user pool. Children and young adults are spending more time than ever online and on social media. On average, daily screen use among tweens (ages 8 to 12) rose from four hours and 44 minutes to five hours and 33 minutes, and among teens (ages 13 to 18) from seven hours and 22 minutes to eight hours and 39 minutes. This group of social media consumers is of particular concern to both legislators and the social media companies themselves.

MRTA offers protection against companies advertising in a way that mimics common brands marketed to children. Companies are restricted to using their name and logo, with explicit language that the item inside the wrapper contains cannabis or tetrahydrocannabinol (THC). Between MRTA's restrictions, strict community guidelines on several social media platforms, and government regulations around the promotion of marijuana products, many brands are having a hard time building a community presence on social media.

Some cannabis companies have resorted to creating their own platforms to promote the content they are prevented from posting elsewhere. Big-name rapper and cannabis enthusiast Berner, who created the popular edible brand "Cookies," has been approached to partner with the creators of such platforms to bolster their brands and raise awareness. Unfortunately, these sites became exactly what mainstream social media companies feared when drafting their guidelines: an unsavory haven for illicit drug use and other illegal behavior. One of the pioneer apps in this field, Social Club, was removed from the app store after multiple reports of illegal behavior. The apps have since been more tightly regulated internally but have not taken off as their creators intended. Legitimate cannabis businesses are still being blocked from advertising on mainstream apps.

These Companies Won’t go Down Without a Fight

While cannabis companies generally aren't allowed on social media sites, special rules apply if a legal cannabis business does maintain a presence there. Social media is the fastest and most efficient way to advertise to a desired audience. With appropriate regulatory oversight and within the confines of the changing law, social media sites may start to feel pressure to allow more advertising from cannabis brands.

A petition has been launched to bring META, the company that owns Facebook and Instagram among other sites, to the table to discuss the growing frustrations with the strict restrictions on its social media platforms. The petition on Change.org has amassed 13,000 signatures. Arden Richard, the founder of WeedTube, has been outspoken about the issue, saying, "This systematic change won't come without a fight. Instagram has already begun deleting posts and accounts just for sharing the petition." He also stated, "The cannabis industry and community need to come together now for these changes and solutions to happen." If not, he fears, "we will be delivering this industry into the hands of mainstream corporations when federal legalization happens."

Social media companies recognize the magnitude of the legal cannabis community; they have been banning its content nonstop since its inception. However, the changing landscape of the cannabis industry has made the decision to ban that content more difficult. Until federal regulation changes, businesses operating in states that have legalized cannabis will remain effectively banned from the largest advertising platforms in the world.

 

Artificial Intelligence: Putting the AI in “brAIn”

What thinks like a human, acts like a human, and now even speaks like a human…but isn’t actually human? The answer is: Artificial Intelligence.

Yes, that's right: the futuristic self-driving smart cars, talking robots, and video calling we once saw on The Jetsons are now more or less a reality in 2022. Much of this is thanks to the development of Artificial Intelligence.

What is Artificial Intelligence?

Artificial Intelligence (AI) is an umbrella term with many sub-definitions. Scientists have not yet agreed on a single definition, but the term traces back to a phrase coined by Stanford Professor John McCarthy all the way back in 1955. McCarthy defined artificial intelligence as "the science and engineering of making intelligent machines." He went on to invent the list-processing language LISP, which is still used by industry leaders including Boeing (the Boeing Simplified English Checker assists aerospace technical writers) and Grammarly (a grammar-checking add-on that many of us use, and that, coincidentally, I am using as I write this piece). McCarthy is regarded as one of the founders of AI and is recognized for his contributions to the field.

Sub Categories and Technologies

Within the overarching category of AI are smaller subcategories such as Narrow AI and General AI. Beneath these subcategories are technologies like machine learning and algorithms that help them function and meet their objectives.

Narrow AI: Also known as "weak AI," this is task-focused intelligence. These systems handle specific jobs, like internet searches or autonomous driving, rather than exhibiting complete human intelligence. Examples include Apple's Siri, Amazon's Alexa, and autonomous vehicles.
General AI: Also known as "strong AI," this is AI whose combined capabilities rival a human's ability to think for itself. Think of the robots in your favorite science-fiction novel. Science still appears to be far from reaching General AI, which is proving much more difficult to develop than Narrow AI.

Technologies within AI Subcategories.

Machine Learning requires human involvement to learn. Humans create hierarchies and pathways for both data inputs and outputs. These pathways allow the machine to learn with human intervention, but they require more structured data for the computer.

Deep Learning allows the machine to make the pathway decisions by itself, without human intervention. Between the simple input and output layers are multiple hidden layers referred to as a "neural network." This network can receive unstructured raw data, such as images and text, and automatically learn to distinguish them and determine how they should be processed.
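To make the "hidden layers" idea more concrete, here is a minimal sketch in Python (using NumPy) of how data flows through such a network: raw inputs pass through two hidden layers to an output layer that scores a few categories. The layer sizes, random weights, and category count are purely illustrative assumptions; a real deep-learning system would learn its weights from data rather than generating them at random.

```python
# Minimal illustrative sketch of a forward pass through a small neural network.
# All sizes, weights, and inputs are invented for demonstration only.
import numpy as np

def relu(x):
    # Simple non-linearity applied between layers.
    return np.maximum(0, x)

rng = np.random.default_rng(0)

# Input layer: one example with 4 raw feature values (e.g., pixel intensities).
x = rng.random((1, 4))

# Two hidden layers sit between the input and output layers.
w1, b1 = rng.random((4, 8)), np.zeros(8)
w2, b2 = rng.random((8, 8)), np.zeros(8)
# Output layer: scores for 3 hypothetical categories.
w3, b3 = rng.random((8, 3)), np.zeros(3)

h1 = relu(x @ w1 + b1)   # first hidden layer
h2 = relu(h1 @ w2 + b2)  # second hidden layer
scores = h2 @ w3 + b3    # output layer

print("predicted category:", int(np.argmax(scores)))
```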

Both machine and deep learning have allowed businesses, healthcare, and other industries to flourish through the increased efficiency and time saved by minimizing human decisions. It is possible that, because this technology is so new and unregulated, we have been able to see just how fast innovation can grow when uninhibited. Regulators have been hesitant to tread in the murky waters of this new and unknown technology sector.

Regulations.
Currently, there is no federal law regulating the use of AI. States appear to be in a trial-and-error phase, attempting to pass a range of laws. Many of these laws deploy AI-specific task forces to monitor and evaluate AI use in the state, or prohibit the use of algorithms in ways that unfairly discriminate on the basis of ethnicity, race, sex, disability, or religion. A live list of pending, failed, and enacted AI legislation in each state can be found on the National Conference of State Legislatures' website.

But what goes up must come down. While AI increases efficiency and convenience, it also poses a variety of ethical concerns, making it a double-edged sword. We explore the ups and downs of AI below and pose ethical questions that might make you stop and think twice about letting robots control our world.

Employment

With AI emerging in the workforce, many are finding that administrative and mundane tasks can now be automated. Smart contract systems allow for Optical Character Recognition (OCR), which can scan documents and recognize text from an uploaded image. The AI can then pull out standard clauses or noncompliant language and flag it for human review. This, however, still ultimately requires human intervention.
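As a rough illustration of the flagging step described above (assuming the OCR stage has already turned a document into plain text), the short Python sketch below scans clauses for keyword patterns and queues anything suspicious for human review. The pattern list and the sample contract language are hypothetical, not drawn from any real compliance tool.

```python
# Hypothetical sketch: flag potentially noncompliant contract language for human
# review, assuming OCR has already produced plain text. Patterns are invented.
import re

RISK_PATTERNS = {
    "auto-renewal": r"automatically\s+renew",
    "unlimited liability": r"unlimited\s+liability",
    "unilateral amendment": r"may\s+amend\s+.*\s+at\s+any\s+time",
}

def flag_clauses(contract_text: str) -> list[tuple[str, str]]:
    """Return (issue, clause) pairs that a human reviewer should examine."""
    findings = []
    for clause in contract_text.split("."):
        for issue, pattern in RISK_PATTERNS.items():
            if re.search(pattern, clause, flags=re.IGNORECASE):
                findings.append((issue, clause.strip()))
    return findings

sample = ("This agreement shall automatically renew each year. "
          "The vendor may amend pricing at any time.")
for issue, clause in flag_clauses(sample):
    print(f"FLAG [{issue}]: {clause}")
```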

One growing concern with AI and employment lies in the possibility that AI may take over certain jobs completely. An example of this comes with the innovation of self-driving cars and truck drivers. If autonomous vehicles become mainstream for the large-scale transportation of goods, what will happen to those who once held this job? Does the argument that there may be “fewer accidents” outweigh the unemployment that accompanies this switch? And what if the AI fails? Could there be more accidents?

Chatbots

Chatbots are computer programs designed to simulate human conversation. We often encounter them in online customer-service settings, where the AI lets customers hold a conversation with the chatbot, ask questions about a specific product, and receive instant feedback. This cuts down on waiting times and improves service levels for the company.
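The sketch below shows, in the simplest possible terms, how such a bot can map a customer's message to a canned answer. The keyword rules and responses are invented for illustration; modern customer-service chatbots rely on far more sophisticated language models, but the basic "question in, instant answer out" loop is the same.

```python
# Toy rule-based customer-service chatbot. Keyword rules and canned answers are
# invented for illustration; real systems use far more sophisticated models.
RULES = [
    (("refund", "return"), "You can return any item within 30 days for a full refund."),
    (("shipping", "deliver"), "Standard shipping takes 3-5 business days."),
    (("hours", "open"), "Our support team is available 9am-5pm, Monday to Friday."),
]

def reply(message: str) -> str:
    text = message.lower()
    for keywords, answer in RULES:
        if any(word in text for word in keywords):
            return answer
    return "I'm not sure about that - let me connect you with a human agent."

if __name__ == "__main__":
    print(reply("How long does shipping take?"))
    print(reply("Can I get a refund?"))
```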

While customer-service chatbots may not spark much concern for the average consumer, the fact that these bots can hold conversations almost indistinguishable from a human's may pose a threat elsewhere. Forget catfishing: individuals now have to worry about whether the "person" on the other side of their chatroom is a person at all, or someone who has designed a bot to elicit emotional responses from victims and eventually scam them out of their money.

Privacy

AI now gives consumers the ability to unlock their devices with facial recognition. It can also use those faces to recognize people in photos and tag them on social media sites. Beyond our faces, AI follows our behavior and slowly learns our likes and dislikes, building a profile on each of us. The Netflix documentary "The Social Dilemma" recently explored the controversy surrounding AI and social media use. In the film, the algorithm appears as three small men "inside the phone" who build a profile on one of the main characters, sending notifications during periods of inactivity from apps that are likely to generate a response. With AI, the line between what information stays private and what is collected is very fine. We must be diligently aware of what we are opting into (or out of) to protect our personally identifiable information. While this may not be a major concern for those in the United States, it raises serious concerns for civilians in countries under dictatorships that may use facial recognition as a tool to retain control.

Spread of Disinformation and Bias

AI is only as smart as the data it learns from. If it is fed data with a discriminatory bias or any bias at all (be it political, musical, or even your favorite movie genre) it will begin to make decisions based on that information.

We see the good in this (new movie suggestions in your favorite genre, an ad for a sweater you didn't know you needed) but we have also seen the spread of false information across social media sites. Often, algorithms only show us news from sources that align with our political affiliation, because those are the sources we tend to follow and engage with. This leaves us with a one-sided view of the world and widens the gap between parties even further.
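The toy sketch below illustrates the mechanism: if a feed simply ranks posts by how often the user has engaged with each source in the past, whatever the user already favors rises to the top and the one-sided view compounds. The posts, sources, and click counts are invented solely to show the effect.

```python
# Deliberately simplified engagement-based ranking. Because the score depends on
# past clicks per source, an existing skew keeps reinforcing itself. All data
# here is invented for illustration.
posts = [
    {"source": "outlet_A", "topic": "politics"},
    {"source": "outlet_B", "topic": "politics"},
    {"source": "outlet_A", "topic": "sports"},
    {"source": "outlet_C", "topic": "politics"},
]

# Engagement history: how often this user clicked each source in the past.
past_clicks = {"outlet_A": 40, "outlet_B": 2, "outlet_C": 1}

def engagement_score(post: dict) -> int:
    # Rank purely by predicted engagement, proxied by past clicks on the source.
    return past_clicks.get(post["source"], 0)

feed = sorted(posts, key=engagement_score, reverse=True)
for post in feed:
    print(post["source"], "-", post["topic"])
# outlet_A dominates the top of the feed, reinforcing the existing skew.
```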

As AI develops, we will be faced with new ethical questions every day. How do we prevent bias when it is almost human nature to begin with? How do we protect individuals’ privacy while still letting them enjoy the convenience of AI technology?

Can we have our cake and eat it too? Stay tuned in the next few years to find out…

 

Memes, Tweets, and Stocks . . . Oh, My!

 

Pop-Culture’s Got A Chokehold on Your Stocks

In just three short weeks, early in January 2021, Reddit meme-stock traders bought up enough of GameStop's stock to drive its price from a mere $17.25 per share to $325 a pop, an increase of almost 1,800%. In light of this, hedge funds like New York's Melvin Capital Management were left devastated, and some smaller hedge funds even went out of business.

Because Melvin was holding GameStop in a short position (a trading technique in which a trader sells a borrowed security with the plan to buy it back later, at a lower price, during an anticipated short-term drop), it lost over 50% of its value (nearly $7 billion) in just under a month.
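The percentages above follow from straightforward arithmetic. The short Python sketch below reproduces the roughly 1,800% rise from the share prices cited and shows why a short seller loses money when the price climbs; the 100-share position is an invented illustration, not Melvin's actual exposure.

```python
# Worked arithmetic behind the figures above. Share prices come from the text;
# the 100-share short position is an invented illustration.
buy_in = 17.25   # GameStop price before the rally ($ per share)
peak = 325.00    # price near the peak of the rally

pct_rise = (peak - buy_in) / buy_in * 100
print(f"Price increase: {pct_rise:.0f}%")  # ~1784%, i.e. "almost 1,800%"

# A short seller sells borrowed shares up front and must buy them back later.
shares_shorted = 100
proceeds = shares_shorted * buy_in       # cash received when the shares were sold
cost_to_cover = shares_shorted * peak    # cost to buy the shares back at the peak
print(f"Loss on the short: ${cost_to_cover - proceeds:,.2f}")  # $30,775.00
```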

Around 2015, Robinhood emerged as a new, free online trading platform geared toward a younger generation. Its mission was simple: "democratize" finance. By putting the capacity to understand and participate in trading into users' hands, without the need for an expensive broker, Robinhood made investing accessible to the masses. However, the very thing that made Robinhood popular, putting power back in the hands of the people, also brought GameStop's meteoric rise to a halt. After three weeks, Robinhood had to restrict trading in GameStop shares and options because the sheer volume of trading had exceeded the cash on hand, or collateral, that regulators require it to hold to operate as a legal trading venue.

But what exactly is a meme-stock? For starters, a meme is an idea or element of pop culture that spreads and intensifies across people's minds. As social media has grown in popularity, so have viral pop-culture references and trends. Memes allow people to instantaneously spread videos, tweets, pictures, or posts that are humorous, interesting, or sarcastic, and this, in turn, goes viral. Meme-stocks therefore originate on the internet, usually in sub-Reddit threads, where users work together to identify a target stock and then promote it. Promoting a meme stock typically involves coordinated buying and holding to drive the price up, squeezing investors who hold the stock in a short position (as explained above) and letting promoters sell at a profit as prices fluctuate.

GameStop is not the first, and certainly not the last, stock to be traded in this fashion, but it represents an important shift in the power of social media and its ability to affect the stock market. Another example of the power meme culture can have on real-world finances and the economy is Dogecoin.

Dogecoin was created as a satirical new currency, in a way mocking the hype around existing cryptocurrencies. But the positive reaction and bolstered interest it received on social media turned the joke crypto into a practical reality. This "fun" version of Bitcoin was celebrated, listed on the crypto exchange Binance, and even cryptically endorsed by Elon Musk. More recently, in 2021, cinema chain AMC announced it would accept Dogecoin in exchange for digital gift card purchases, further bolstering the credibility of this meme-originated cryptocurrency.

Tricks of the Trade, Play at Your Own Risk

Stock trading is governed by the Securities Act of 1933, which boils down to two basic objectives: (1) to require that investors receive financial and other material information concerning securities being offered for public sale; and (2) to prohibit deceit, misrepresentation, and other fraud in the sale of securities. Before most securities can be bought, sold, or traded, they must first be registered with the SEC; the primary goal of registration is to facilitate information disclosures so investors are informed before engaging. Additionally, the Securities Exchange Act of 1934 provides the SEC with broad authority over the securities industry to regulate, register, and oversee brokerage firms, agents, and self-regulatory organizations (SROs). Other regulations at play include the Investment Company Act of 1940 and the Investment Advisers Act of 1940, which regulate investment companies and investment advisers, respectively. These Acts require that firms and agents who receive compensation for their advising practices be registered with the SEC and adhere to qualifications and strict guidelines designed to promote fair, informed investment decisions.

Cryptocurrency has over the years grown from a speculative investment into a new class of assets, and regulation is imminent. The Biden Administration recently added some clarity on crypto use and its regulation through a new directive assigning authority to the SEC and the Commodity Futures Trading Commission (CFTC), which were already the prominent market regulators. In the recent Ripple Labs lawsuit, the SEC made strides toward regulating cryptocurrency by working to classify it as a security, which would bring crypto into its regulatory domain.

Consequently, the SEC's Office of Investor Education and Advocacy has adapted with the times and now cautions against making any investment decision based solely on information seen on social media platforms. Because social media has become integral to our daily lives, investors increasingly turn to it for information when deciding when, where, and in what to invest. This has increased the likelihood of scams, fraud, and other misinformation-driven harms, which can arise when fraudsters disseminate false information anonymously or impersonate someone else.

 

However, there is also increasing concern about celebrity endorsements and testimonials regarding investment advice. The most common social media scam schemes are impersonation and fake crypto-investment advertisements.

 

With this rise in social media use, the laws governing investment advertisements and information are continuously developing. Regulation FD (Fair Disclosure) governs the selective disclosure of information by publicly traded companies. Reg FD prescribes that when an issuer discloses material, nonpublic information to certain individuals or entities, it must also make a public disclosure of that information. In 2008, the SEC issued guidance allowing information to be distributed on company websites so long as shareholders, investors, and the market in general were aware it was the company's "recognized channel of distribution." In 2013, this was extended to allow publishing earnings and other material information on social media, provided that investors knew to expect it there.

This clarification came in light of a controversial boast by Netflix co-founder and CEO Reed Hastings on Facebook that Netflix viewers had consumed 1 billion hours of watch time per month. Hastings's Facebook page had never previously disclosed performance statistics, so investors were not on notice that this type of potentially material information, relevant to their investment decisions, would be located there. Hastings also failed to immediately remedy the situation with a public disclosure of the same information via a press release or Form 8-K filing.

In the same vein, a company's employees may also face consequences if they like or share a post, publish a third-party link, or friend certain people without permission, where any of those actions could be viewed as an official endorsement or a means of information dissemination.

The SEC requires that certain company information be accompanied by a disclosure or cautionary disclaimer statement. Section 17(b) of the 1933 Act, more commonly known as the Anti-Touting provision, requires any securities endorsement be accompanied by a disclosure of the “nature, source, and amount of any compensation paid, directly or indirectly, by the company in exchange for such endorsement.”

To Trade, or Not to Trade? Let Your Social Media Feed Decide

With the emergence of non-professional trading platforms like Robinhood, low-cost financial technology has put investing into the hands of younger users. Likewise, the rise of Bitcoin and blockchain technologies in the early-to-mid 2010s has changed the way financial firms must think about and approach new investors. The discussion of investments and information sharing that happens on these online forums creates a cesspool ripe for breeding misinformation. Social media sites are vulnerable to information problems for several reasons. For starters, which posts gain attention cannot always be predicted in advance; if the wrong post goes viral, hundreds, thousands, or even millions of users may read improper recommendations. Algorithmic rabbit holes also risk pushing users toward extremist views, with strategically placed ads accelerating the downward spiral.

Additionally, the presence of fake or spam accounts and internet trolls poses an ever more difficult problem to contain. Lastly, influencers can sway large groups of followers by mindlessly promoting or interacting with bad information, or by failing to properly disclose required information. There are many other obvious risks, but "herding" remains one of the largest. Jeff Kreisler, Head of Behavioral Science at J.P. Morgan Chase, explains that:

“Herding has been a common investment trap forever. Social media just makes it worse because it provides an even more distorted perception of reality. We only see what our limited network is talking about or promoting, or what news is ‘trending’ – a status that has nothing to do with value and everything to do with hype, publicity, coolness, selective presentation and other things that should have nothing to do with our investment decisions.”

This shift to a digital lifestyle and reliance on social media for information has played a key role in information dissemination for investor decision-making. Nearly 80% of institutional investors now use social media as part of their daily workflow. Of those, about 30% admit that information gathered on social media has in some way influenced an investment recommendation or decision, and another third maintain that they made at least one change to their investments as a direct result of announcements they saw on social media. In 2013, the SEC began allowing publicly traded companies to report news and earnings via their social media platforms, which has increased the flow of information to investors there. Social media also now plays a large role in financial literacy for younger generations.

The Tweet Heard Around the Market

A notable and recent example of how powerful social media warriors and internet trolls can be in relation to a company's stock came just days after Elon Musk's acquisition of Twitter and only hours after the launch of his pay-for-verification Twitter Blue debacle. Insulin manufacturer Eli Lilly saw a stark drop in its stock value after a fake parody account was created under the guise of its name and tweeted that "insulin is now free."

The account, operating under the Twitter handle @EliLillyandCo, bought a blue check mark and adopted the real company's name and logo, making it almost indistinguishable from the real thing. Consequently, the actual Eli Lilly corporate account had to tweet an apology "to those who have been served a misleading message from a fake Lilly account," clarifying that "Our official Twitter account is @Lillypad."

This is a perfect illustration, for Elon Musk and for other major companies and CEOs, of just how powerful pop culture, meme culture, and internet trolls can be: armed with $8 and a single tweet, a parody account casually dropped the stock of a multi-billion-dollar pharmaceutical company by almost 5% in a matter of hours.

So, what does all this mean for the future of digital finance? It's difficult to say exactly where we are headed, but social media's growing tether on all facets of our lives leaves much open to new regulation. Consumers should be cautious when scrolling through investment-related material, and providers should be transparent about their relationships and goals in promoting any such material. Social media is here to stay, but its regulation and use are still up for grabs.

The Rise of E-personation

Social media allows millions of users to communicate with one another on a daily basis, but do you really know who is behind the computer screen?

As social media continues to expand into the enormous entity we know today, users become ever more susceptible to abuse online. Impersonation through electronic means, often referred to as e-personation, is a rapidly growing trend on social media. E-personation is extremely troublesome because it requires far less information than other typical forms of identity theft. To create a fake social media page, all an e-personator needs is the victim's name and perhaps a profile picture. While creating a fake account is relatively easy for the e-personator, the impact on the victim's life can be detrimental.

E-personation Under State Law

It wasn't until 2008 that New York became the first state to recognize e-personation as a criminally punishable form of identity theft. Under New York law, "a person is guilty of criminal impersonation in the second degree when he … impersonates another by communication by internet website or electronic means with intent to obtain a benefit or injure or defraud another, or by such communication pretends to be a public servant in order to induce another to submit to such authority or act in reliance on such pretense."

Since 2008, other states, such as California, New Jersey, and Texas, have also amended their identity theft statutes to include online impersonation as a criminal offense. New Jersey amended its impersonation and identity theft statute in 2014, after an e-personation case revealed that the existing statute lacked any mention of "electronic communication" as a means of unlawful impersonation. In 2011, New Jersey Superior Court Judge David Ironson in Morris County declined to dismiss an indictment for identity theft against Dana Thornton. Ms. Thornton allegedly created a fictitious Facebook page that portrayed her ex-boyfriend, a narcotics detective, unfavorably. On the Facebook page, Thornton, pretending to be her ex, posted admissions to hiring prostitutes, using drugs, and even contracting a sexually transmitted disease. Thornton's defense counsel argued that New Jersey's impersonation statute did not apply because online impersonation was not explicitly mentioned in the statute, and therefore Thornton's actions did not fall within the scope of activity the statute proscribes. Judge Ironson disagreed, noting that the New Jersey statute is "clear and unambiguous" in forbidding impersonation activities that cause injury and does not need to specify the means by which the injury occurs.

Currently under New Jersey law, a person is guilty of impersonation or theft of identity if … “the person engages in one or more of the following actions by any means, but not limited to, the use of electronic communications or an internet website:”

    1. Impersonates another or assumes a false identity … for the purpose of obtaining a benefit for himself or another or to injure or defraud another;
    2. Pretends to be a representative of some person or organization … for the purpose of obtaining a benefit for himself or another or to injure or defraud another;
    3. Impersonates another, assumes a false identity or makes a false or misleading statement regarding the identity of any person, in an oral or written application for services, for the purpose of obtaining services;
    4. Obtains any personal identifying information pertaining to another person and uses that information, or assists another person in using the information … without that person’s authorization and with the purpose to fraudulently obtain or attempt to obtain a benefit or services, or avoid the payment of debt … or avoid prosecution for a crime by using the name of the other person; or
    5. Impersonates another, assumes a false identity or makes a false or misleading statement, in the course of making an oral or written application for services, with the purpose of avoiding payment for prior services.

As social media continues to grow, it is likely that more state legislatures will amend their impersonation and identity theft statutes to incorporate e-personation.

E-personators' Twitter Takeover

Over the last week, e-personation has erupted into chaos on Twitter. Elon Musk bought Twitter on October 27, 2022, for $44 billion. He immediately began firing top Twitter executives, including the chief executive and chief financial officer. With Twitter on the verge of bankruptcy, Musk needed a plan to generate more subscription revenue, and so the problematic Twitter Blue subscription was created. Under the Twitter Blue policy, users could purchase a subscription for $8 a month and receive the blue verification check mark next to their Twitter handle.

The unregulated distribution of the blue verification check mark has led to chaos on Twitter by allowing e-personators to run amok. Traditionally, the blue check mark has been a symbol of authentication for celebrities, politicians, news outlets, and other companies; it was created to protect those most susceptible to e-personation. When the rollout of Twitter Blue began on November 9, 2022, the policy did not specify any requirements to verify a user's authenticity beyond payment of the monthly fee.

Shortly after the rollout, e-personators began taking advantage of their newly purchased verification by impersonating celebrities, pharmaceutical companies, politicians, and even Twitter's new CEO, Elon Musk. For example, comedian Kathy Griffin was one of the first Twitter users suspended after Twitter Blue's launch for changing her Twitter name and profile photo to Elon Musk and impersonating the new CEO. Griffin was not the only user to impersonate Musk, and in response Musk tweeted, "Going forward, any Twitter handles engaging in impersonation without clearly specifying 'parody' will be permanently suspended."

Elon's threats of permanent suspension did not stop e-personators from trolling on Twitter. One e-personator used their blue check verification to masquerade as Eli Lilly and Company, an American pharmaceutical company; the fake Eli Lilly account tweeted that the company would be providing free insulin to its customers, and the real Eli Lilly account tweeted an apology shortly thereafter. Another e-personator used their verification to impersonate former United States President George W. Bush; the fake Bush account tweeted "I miss killing Iraqis" along with a sad face emoji. The e-personators did not stop there: many more professional athletes, politicians, and companies were impersonated under the new Twitter Blue subscription policy. An internal Twitter log seen by the New York Times indicated that 140,000 accounts had signed up for the new Twitter Blue subscription. It is unlikely that Musk will be able to discover every e-personator account and remedy this spread of misinformation.

Twitter's Terms and Conditions

Before the rollout of Twitter Blue, Twitter's guidelines included a policy on misleading and deceptive identities. Under Twitter's policy, "you may not impersonate individuals, groups, or organizations to mislead, confuse, or deceive others, nor use a fake identity in a manner that disrupts the experience of others on Twitter." The guidelines further explain that impersonation is prohibited, specifically that "you can't pose as an existing person, group, or organization in a confusing or deceptive manner." Based on these guidelines, the recent e-personators are in direct violation of Twitter's policy, but are these users also criminally liable?

Careful, You Could Get a Criminal Record

Social media networks such as Facebook, Instagram, and Twitter have little incentive to protect the interests of individual users because they cannot be held liable for anything their users post. Under Section 230 of the Communications Decency Act, "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." Because of this lack of responsibility placed on social media platforms, victims of e-personation often have a hard time getting a fake online presence removed. Ironically, to gain control of an e-personator's fake account, the victim must provide the social media platform with confidential identifying information, while the e-personator effectively remains anonymous.

By now you're probably asking yourself: what about the e-personators' criminal liability? Under some state statutes, like those mentioned above, e-personators can be found criminally liable. However, several barriers limit the effectiveness of these prosecutions. E-personators enjoy great anonymity, so finding the actual person behind a fake account can be difficult. Furthermore, many of the state statutes that criminalize e-personation require proving the perpetrator's intent, which can also complicate prosecution. Lastly, social media is a global phenomenon, which means jurisdictional issues will arise when bringing these cases to court. Unfortunately, only a minority of states have amended their impersonation statutes to include e-personation. Hopefully, as social media continues to grow, more states will follow suit and e-personation will be prosecuted more efficiently and effectively. Remember: not everyone on social media is who they claim to be, so be cautious.
