The Dark Side of TikTok

In Bethany, Oklahoma, a 12-year-old child was found dead with strangulation marks on his neck. According to police, this was not a murder or a suicide, but a TikTok challenge that had gone horribly wrong. The challenge goes by a variety of names, including the Blackout Challenge, the Pass Out Challenge, Speed Dreaming, and the Fainting Game. It involves kids asphyxiating themselves, either by choking themselves out by hand or with a rope or a belt, to experience a brief euphoria when they regain consciousness.

Even when the challenge does not result in death, medical professionals warn that it is extremely dangerous. Every moment without oxygen or blood flow risks irreversible damage to a portion of the brain.

Unfortunately, the main goal on social media is to gain as many views as possible, regardless of the danger or expense.

Because of the pandemic, kids have been spending a lot of time alone and bored, which has led more preteens to participate in social media challenges.

Some social media challenges are harmless, including the 2014 Ice Bucket Challenge, which raised millions of dollars for ALS research.

However, there has also been the Benadryl Challenge, which began in 2020 and urged people to overdose on the drug in an effort to hallucinate. People were also urged to lick surfaces in public as part of the coronavirus challenge.

One of the latest “challenges” on the social media app TikTok could have embarrassing consequences users never imagined possible. The idea of the Silhouette Challenge is to shoot a video of yourself dancing as a silhouette with a red filter covering up the details of your body. It started out as a way to empower people but has turned into a trend that could come back to haunt you. Participants generally start the video in front of the camera fully clothed. When the music changes, the user appears in less clothing, or nude, as a silhouette obscured by a red filter. But the challenge has been hijacked by people using software to remove that filter and reveal the original footage.

“If these filters are removed, that can certainly create an environment where kids’ faces are being put out in the public domain, and their bodies are being shown in ways they didn’t anticipate,” said Mekel Harris, a licensed pediatric and family psychologist. Young people who participate in these types of challenges aren’t thinking about the long-term consequences.

These challenges reveal a darker aspect to the app, which promotes itself as a teen-friendly destination for viral memes and dancing.

TikTok said it would remove such content from its platform. In an updated post to its newsroom, TikTok said:

“We do not allow content that encourages or replicates dangerous challenges that might lead to injury. In fact, it’s a violation of our community guidelines and we will continue to remove this type of content from our platform. Nobody wants their friends or family to get hurt filming a video or trying a stunt. It’s not funny – and since we remove that sort of content, it certainly won’t make you TikTok famous.”

TikTok urged users to report videos containing the challenge. And it told BBC News there was now text reminding users to not imitate or encourage public participation in dangerous stunts and risky behavior that could lead to serious injury or death.

While these challenges may seem funny or rack up views on social media platforms, they can have long-lasting health consequences.

Because the First Amendment gives strong protection to freedom of speech, liability for content shared online generally falls on the publishers and authors of that content. Section 230(c)(1) of the Communications Decency Act of 1996 states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This provision gives social media companies immunity for content published by other authors on their platforms, as long as intellectual property rights are not infringed. Although the law does not require social media sites to regulate their content, they can still decide to remove content at their discretion. Guidelines on the laws regarding discretionary content censorship are sparse. Because the government is not regulating speech, this power has fallen into the hands of social media giants like TikTok. Inevitably, the personal agendas of these companies are shaping conversations, highlighting the necessity of debating the place of social media platforms in the national media landscape.

THE ROLE OF SOCIAL MEDIA:

Social media is unique in that it offers a huge public platform, instant access to peers, and measurable feedback in the form of likes, views, and comments. This creates strong incentives to get as much favorable peer evaluation and approval as possible. Social media challenges are particularly appealing to adolescents, who look to their peers for cues about what’s cool, crave positive reinforcement from their friends and social networks, and are more prone to risk-taking behaviors, particularly when they’re aware that those whose approval they covet are watching them.

Teens won’t necessarily stop to consider that laundry detergent is a poison that can burn their throats and damage their airways, or that misusing medications like diphenhydramine (Benadryl) can cause serious heart problems, seizures, and coma. What they will focus on is that a popular kid in class did this and got hundreds of likes and comments.

WHY ARE TEENS SUSCEPTIBLE:

Children are biologically primed to become much more susceptible to peer influence during puberty, and social media has magnified those peer influence processes, making them more dangerous than ever before. Teens may find these activities entertaining and even thrilling, especially if no one is hurt, which increases their likelihood of participating. Teens are already less capable of evaluating danger than adults, so when friends reward them for taking risks – through likes and comments – it can act as a disinhibitor. These pressures affect youngsters on an unconscious level, and the online trends that are so prevalent today are nearly impossible for them to avoid on their own without parental engagement.

WHAT WE CAN DO TO CONTROL THE SITUATION:

Because they did not grow up with these platforms themselves, parents today struggle to address the risks of social media use with their children.

Even so, parents should address viral trends with their children. Parents should check their children’s social media history and communicate with them about their online activities, as well as block certain social media sites and educate themselves on what may be lurking behind their child’s screen.

When it comes to viral trends, determine your child’s level of familiarity with any trends you may have heard about before soliciting their opinion. You might ask why they think others follow the trend and what they believe are some of the risks associated with doing so. Use this opportunity to explain why you are concerned about a certain trend.

HOW TO COPE WITH SOCIAL MEDIA USAGE:

It’s important to keep in mind that taking a break is completely appropriate. You are not required to join in every discussion, and disabling your notifications may provide some breathing space. You may set regular reminders to keep track of how long you’ve been using a certain app.

If you’re seeing a lot of unpleasant content in your feed, consider muting or blocking particular accounts or reporting it to the social media company.

If anything you read online makes you feel anxious or frightened, communicate your feelings to someone you trust. Assistance may come from a friend, a family member, a teacher, a therapist, or a helpline. You are not alone, and seeking help is completely OK.

Social media is a natural part of life for young people, and although it may have a number of advantages, it is essential that platforms like TikTok take responsibility for harmful content on their sites.

I welcome the government’s plan to create a regulator to guarantee that social media companies handle cyberbullying and posts encouraging self-harm and suicide.

Additionally, we must ensure that schools teach children what to do if they come across upsetting content online, as well as how to use the internet in a way that benefits their mental health.

To reduce the likelihood of misuse, protections must be implemented.

MY QUESTION TO YOU ALL:

How can social media companies improve their moderation so that children are not left to fend for themselves online? What can they do to improve their in-app security?

If I were to sue “Gossip Girl.”

If you grew up in New York and were a teenager in the early 2000s, you probably know the top-rated show “Gossip Girl.” “Gossip Girl” is the alias of an anonymous blogger who creates chaos by making public the very intimate and personal lives of upper-class high school students. The show is very scandalous because of the nature of these teenagers’ activities, but what stands out is the influence Gossip Girl had on these young teenagers. And it makes one think: what could I do if Gossip Girl came after me?

 

Anonymity

When bringing a claim for internet defamation against an anonymous blogger, the trickiest part is getting past the anonymity. In Cohen v. Google, Inc., 887 N.Y.S.2d 424 (N.Y. Sup. Ct. 2009), a New York state trial court granted the plaintiff, model Liskula Cohen, pre-suit discovery from Google to reveal the identity of the anonymous publisher of the “Skanks in NYC” blog. Cohen alleged that the blog author defamed her by calling her a “skank” and a “ho” and posting photographs of her in provocative positions with sexually suggestive captions, all creating the false impression that she is sexually promiscuous. The court analyzed the discovery request under New York CPLR § 3102(c), which allows for discovery “to aid in bringing an action.” The court ruled that, under CPLR § 3102(c), a party seeking pre-action discovery must make a prima facie showing of a meritorious cause of action before obtaining the identity of an anonymous defendant. The court acknowledged the First Amendment issues at stake and, citing Dendrite, opined that New York law’s requirement of a prima facie showing appears to address the constitutional concerns raised in the context of this case. The court held that Cohen adequately made this prima facie showing of defamation, finding that the “skank” and “ho” statements, along with the sexually suggestive photographs and captions, conveyed a factual assertion that Cohen was sexually promiscuous rather than an expression of protected opinion.

In Cohen, the court decided that Liskula Cohen was entitled to pre-suit discovery under CPLR § 3102(c). To legally obtain “Gossip Girl’s” true identity under this statute, we would have to show that the statement posted on the blog about us is on its face defamatory and not simply an expression of protected opinion.

 

Defamation

Now that we may have uncovered our anonymous blogger, “Gossip Girl,” a.k.a. Dan Humphrey, we may dive into the defamation issue. There are two types of defamation: 1) libel, the written form, and 2) slander, the oral form. Because Gossip Girl’s chosen medium is a written blog, our case would fall under libel. But does our claim meet the legal elements of defamation?

In New York, there are four elements that the alleged defamation must meet:

  1. A false statement;
  2. Published to a third-party without privilege or authorization;
  3. With fault amounting to at least negligence;
  4. That caused special harm or ‘defamation per se.’

Dillon v. City of New York, 261 AD2d 34, 38, 704 NYS2d 1 (1st Dept 1999).

Furthermore, our defamation claim must “set forth the particular words allegedly constituting defamation and it must also allege the time when, place where, and the manner in which the false statement was made, and specify to whom it was made.” Epifani v. Johnson, 65 A.D.3d 224, 233, 882 N.Y.S.2d 234 (2d Dept. 2009). In short, we must plead the details. What specific words were used? Was the plaintiff labeled a “ho” or a “skank,” as in Cohen, or simply called “ugly”? When? The time the words were spoken, written, or published. Where? The place or platform where they were spoken, written, or published. How? The manner in which they were spoken, written, or published. Lastly, to whom? The party or audience to whom the statement was made.

The plaintiff’s status determines the burden of proof in defamation lawsuits in New York. Is the plaintiff considered a “public” figure or a “private” citizen? To determine this status, New York courts use the “vortex notion.” This term simply means that a person who would generally qualify as a “private” citizen is treated as a “public” figure if they draw public attention to themselves, like jumping right into a tornado vortex. A “public” figure faces a higher burden of proof in a defamation lawsuit: the plaintiff must prove that the defendant acted with actual malice (knowledge of falsity or reckless disregard for the truth or falsity of the statement). For defamation of a “private” citizen, New York courts apply a negligence standard of fault for the defendant, unless the statements relate to a matter of legitimate public concern.

When the plaintiff is a private figure and the allegedly defamatory statements relate to a matter of legitimate public concern, the plaintiff must prove that the defendant acted “in a grossly irresponsible manner without due consideration for the standards of information gathering and dissemination ordinarily followed by responsible parties.” Chapadeau v. Utica Observer-Dispatch, 38 N.Y.2d 196, 199 (1975). This standard focuses on an objective evaluation of the defendant’s actions rather than on the defendant’s state of mind at the time of publication.

If the defamatory nature of the statements Gossip Girl published is inherently apparent, we may explore defamation per se. There are four categories of defamation per se in New York:

  1. Statements charging a plaintiff with a serious crime;
  2. Statements that tend to injure another in his or her trade, business, or profession;
  3. Statements imputing a loathsome disease to a plaintiff; and
  4. Statements imputing unchastity to a woman.

Liberman v. Gelstein, 80 NY2d 429, 435, 605 NE2d 344, 590 NYS2d 857 (1992). If the statements fall into one of these categories, the court may find them so inherently injurious that damages to the plaintiff are presumed. Another option to consider is defamation per quod, which requires the plaintiff to provide extrinsic, supporting evidence to prove the defamatory nature of the alleged statement(s) where it is not inherently apparent.

 

Privileges and Defenses

After concluding that Gossip Girl defamed the plaintiff, we must ensure that the defamatory statement is not protected by any privilege. New York courts recognize several privileges and defenses in the context of defamation actions, including the fair report privilege (a defamation lawsuit cannot be sustained against any person making a “fair and true report of any judicial proceeding, legislative proceeding or other official proceeding,” N.Y. Civ. Rights § 74), the opinion and fair comment privileges, substantial truth (the maker cannot be held liable for saying things that are actually true), and the wire service defense. There is also Section 230 of the Communications Decency Act, which may protect media platforms or publishers if a third party, not acting under their direction, posts something defamatory on their blog or website. If a statement is privileged or a defense applies, the maker of that statement may be immune from any lawsuit arising from it.

 

Statute of Limitations

A New York plaintiff must start an action within one (1) year of the date the defamatory material was published or communicated to a third party. CPLR § 215(3). New York has also adopted a rule directed explicitly at internet posts. Under the “single publication” rule, a party that causes the mass publication of defamatory content may only be sued once for its initial publication of that content. For example, suppose a blog publishes a defamatory article that is circulated to thousands of people. In that case, the blog may only be sued once, and the statute of limitations begins to run at the time of first publication. “Republication” of the allegedly defamatory content will restart the statute of limitations. A republication occurs when there is “a separate aggregate publication from the original, on a different occasion, which is not merely a ‘delayed circulation of the original edition.’” Firth v. State, 775 N.E.2d 463, 466 (N.Y. 2002). Courts examine whether the republication was intended to and actually did reach a new audience. Altering the allegedly defamatory content or moving web content to a different web address may trigger republication.

 

Damages

Damages in defamation claims are proportionate to the harm suffered by the plaintiff. If a plaintiff is awarded damages, they may take the form of compensatory, nominal, or punitive damages. There are two types of compensatory damages: 1) special damages and 2) general damages. Special damages are based on economic harm and must be identified in a specific amount. General damages are more challenging to assess; the jury has discretion to determine the award amount after weighing all the facts. Nominal damages are small monetary sums awarded to vindicate the plaintiff’s name. Punitive damages are intended to punish the defendant and to deter the defendant from repeating the defamatory conduct.

 

When Gossip Girl first aired, the idea of a blog holding cultural relevance was not yet mainstream. Gossip Girl’s unchecked power kept many characters from living their lives freely and without scrutiny. After Gossip Girl aired, an anonymous blog, “Socialite Rank,” emerged. It damaged the reputation of its targeted victim, Olivia Palermo, who eventually dropped the suit she had started against the blog. The blog “Skanks in NYC” painted a false image of who Liskula Cohen was and caused her to lose potential jobs. In the series finale, after the identity of Gossip Girl is revealed, the characters laugh. Still, one of the characters exclaims, “Why do you all think that this is funny? Gossip Girl ruined our lives!” Defamation can ruin lives. As technology advances, the law should as well. New York has adapted its defamation laws to ensure that a person cannot hide behind anonymity to ruin another person’s life.

 

Do you feel protected against online defamation?

XOXO

Who Pays When Your Amazon Purchase Catches Fire?

As technology develops, one of the most debated issues remains: how much responsibility should internet service providers bear with respect to third-party content published through their websites?  Is Section 230 of the Communications Decency Act a relic of primitive internet usage?  When products are sold through the internet, does responsibility shift to the marketplace provider?
To shed light on the issue, a parallel arises between how consumer law and internet usage is developing.
Take the Texas case where a third-party sold a remote control through Amazon.com.  The remote was purchased and delivered to a customer with no issues.  However, the customer’s nineteen-month-old child later ingested the remote’s battery which resulted in permanent esophagus damage.  Who is responsible for the damages – Amazon or the third-party seller?  (Amazon.com, Inc. v. McMillan, No. 20-0979, 2021 WL 2605885 (Tex. June 25, 2021))
The customer, Ms. McMillan, brought a lawsuit against both.  Ultimately, the Supreme Court of Texas found that legal liability for the personal injury did not lie with Amazon but remained with the third-party seller.  The decision turned on who qualified as the “seller” under Tex. Civ. Prac. & Rem. Code Ann. § 82.001 and the legal framework governing the placement of items into the stream of commerce.  The dispositive factor was whether, at any point during the “chain of distribution,” title to the remote had been transferred to Amazon.  In other words, who owned the remote?
The court found that unless Amazon held and relinquished title, or the “legal right to control and dispose of property” (TITLE, Black’s Law Dictionary (11th ed. 2019)), they could not be considered an actual “Seller” under the law and therefore were not liable for injury.  Even though throughout use of the marketplace Amazon “controlled the process of the transaction and the delivery of the product,” the third-party seller retained title and was thus the liable “Seller.”
These ideas run parallel to those behind Section 230, under which internet service providers are not liable for content published through their services.  Under this section, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” (47 U.S.C. § 230, subd. (c)(1)).  In the McMillan case, the court noted the “significant potential consequences of holding online marketplaces responsible for third-party sellers’ faulty products.”
In a case originating in Arizona, the deciding factor in a strict liability consumer law case again came down to ownership.  There, the personal injury was caused by a defective battery in a hoverboard, sold through Amazon.com, that caught fire.  The court ruled in favor of Amazon after applying state law in a “contextual analysis [that] balanced multiple factors to determine whether a company ‘participate[d] significantly in a stream of commerce.’”  (State Farm Fire & Cas. Co. v. Amazon.com, Inc., 835 F. App’x 213 (9th Cir. 2020))
Under Arizona law, the “realities of the marketplace” bear on the outcome and are evaluated using a seven-part test to determine whether strict liability can be imposed on a party.  Liability attaches when parties:
“(1) provide a warranty for the product’s quality; (2) are responsible for the product during transit; (3) exercise enough control over the product to inspect or examine it; (4) take title or ownership over the product; (5) derive an economic benefit from the transaction; (6) have the capacity to influence a product’s design and manufacture; or (7) foster consumer reliance through their involvement.”
In these factors we again see the idea of ownership being used to help draw the line.  Although Amazon facilitated the sale of defective products through its website, it was not involved enough with the products to actually be liable for their deficiencies.
This trend is not limited to Amazon.com.  Take the example of the consumer in Indiana who purchased a 3D printer through Walmart.com that later caught fire and damaged property.  The consumers, the Terrys, brought a lawsuit for merchantability issues under Indiana Code § 26-1-2-314, which provides that the “warranty that the goods shall be merchantable is implied in a contract for their sale if the seller is a merchant with respect to goods of that kind.”  (Indiana Farm Bureau Ins. v. Shenzhen Anet Tech. Co., No. 419CV00168TWPDML, 2020 WL 7711346 (S.D. Ind. Dec. 29, 2020))
Here, again, the court determined that Walmart.com was not a “Seller” or “Manufacturer” under the Indiana law and could not be held liable for the damages caused by defective third-party products.
However, law developing out of California reaches a contradictory conclusion.  There, in a case where an Amazon.com-sold laptop battery caught fire, the court ruled that, with regard to strict liability, the Communications Decency Act did not offer immunity to internet marketplaces.  The court supported the finding by determining that Amazon played an “‘integral part of the overall producing and marketing enterprise.’”  Here, Amazon’s role in providing speech, which is immunized, is differentiated from its “role in the chain of production and distribution of an allegedly defective product.”  Bolger v. Amazon.com, LLC, 53 Cal. App. 5th 431, 267 Cal. Rptr. 3d 601 (2020), review denied (Nov. 18, 2020)
The convergence between consumer law and Section 230 helps develop an understanding of how we think about the responsibility of internet service providers when the content or products they facilitate cause damage.  Ultimately, the emerging trend is that a party must in some way own the content or product in question before liability attaches.

How Defamation and Minor Protection Laws Ultimately Shaped the Internet


The Communications Decency Act (CDA) was originally enacted with the intention of shielding minors from indecent and obscene online material. Despite its origins, Section 230 of the Communications Decency Act is now commonly used as a broad legal safeguard that social media platforms invoke to shield themselves from liability for content posted on their sites by third parties. Interestingly, the reasoning behind this safeguard arises both from defamation common law and from constitutional free speech law. As the internet has grown, however, this legal safeguard has drawn increasing criticism. But is this legislation actually undesirable? Many would say no, as Section 230 contains “the 26 words that created the internet.”

 

Origin of the Communications Decency Act

The CDA was introduced and enacted as an attempt to shield minors from obscene or indecent content online. Although parts of the Act were later struck down for First Amendment free speech violations, the Court left Section 230 intact. The creation of Section 230 was influenced by two landmark court decisions in defamation lawsuits.

The first case, decided in 1991, involved an internet service that hosted around 150 online forums (Cubby, Inc. v. CompuServe Inc.). A claim was brought against the internet provider when a columnist on one of the online forums posted a defamatory comment about his competitor. The competitor sued the online distributor for the published defamation. The court categorized the internet service provider as a distributor because it did not review any content of the forums before the content was posted to the site. As a distributor, there was no legal liability, and the case was dismissed.

 

Distributor Liability

Distributor liability refers to the limited legal consequences that a distributor is exposed to for defamation. A common example of a distributor is a bookstore or library. The theory behind distributor liability is that it would be impossible for distributors to moderate and censor every piece of content they disperse, because of the sheer volume and the impossibility of knowing whether something is false.

The second case that influenced the creation of Section 230 was Stratton Oakmont, Inc. v. Prodigy Servs. Co., in which the court used publisher liability theory to find the internet provider liable for third-party defamatory postings published on its site.  The court deemed the website a publisher because it moderated and deleted certain posts, even though there were far too many postings a day to review each one.

 

Publisher Liability

Under common law principles, a person who publishes a third party’s defamatory statement bears the same legal responsibility as the creator of that statement. This liability is often referred to as “publisher liability,” and it is based on the theory that a publisher has the knowledge, opportunity, and ability to exercise control over the publication. For example, a newspaper publisher can face legal consequences for the content printed in its pages. The Stratton Oakmont decision was significant because it meant that if a website attempted to moderate certain posts, it would be held liable for all posts.

 

Section 230’s Creation

In response to the Stratton Oakmont case, and the ambiguous court decisions regarding internet service providers’ liability, members of Congress introduced an amendment to the CDA that later became Section 230. The amendment was specifically introduced and passed with the goal of encouraging the development of unregulated, free speech online by relieving internet providers of liability for content posted by third parties.

 

Text of the Act- Subsection (c)(1) 

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

 Section 230 further provides that…

“No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.”

The language above removes legal consequences for platforms arising from content posted on their forums by third parties. Courts have interpreted this subsection as providing broad immunity to online platforms from suits over third-party content. Because of this, Section 230 has become the principal legal safeguard against lawsuits over site content.

 

The Good

  •  Section 230 can be viewed as being one of the most important pieces of legislation that protects free speech online. One of the unique aspects of this legislation is that it essentially extends free speech protection, applying it to private, non-governmental companies.
  • Without CDA 230, the internet would be a very different place. This section influenced some of the internet’s most distinctive characteristics. The internet promotes free speech and offers the ability for worldwide connectivity.
  • The CDA 230 does not fully eliminate liability or court remedies for victims of online defamation. Rather, it makes only the creator of the content liable for their speech, instead of both the creator and the platform that published it.

 

 

The Bad

  •  Because of the legal protections section 230 provides, social media networks have less of an incentive to regulate false or deceptive posts. Deceptive online posts can have an enormous impact on society. False posts have the ability to alter election results, or lead to dangerous misinformation campaigns, like the QAnon conspiracy theory, and the anti-vaccination movement.
  • Section 230 is twenty-five years old, and has not been updated to match the internet’s extensive growth.
  • Big Tech companies have been left largely unregulated regarding their online marketplaces.

 

 The Future of 230

While Section 230 is still successfully invoked by social media platforms, concerns over the archaic legislation have mounted. Just recently, Justice Thomas, who is known for being a quiet Justice, wrote a concurring opinion articulating his view that the government should regulate content providers as common carriers, like utility companies. What implications could that have for the internet? With the growing level of criticism surrounding Section 230, will Congress finally attempt to fix this legislation? If not, will the Supreme Court be left to tackle the problem itself?

Dear loyal followers: will you endorse this product for me?

Hi, lambs. It’s me, your Gram celeb. Who wouldn’t like to start their day with a bowl of oatmeal with these freshly picked berries? Remember that I’m always with you, lambs. #yummy #healthybreakfast #mylifestyle #protein #Thanks XYZ.

Up to this point, you may think that your Instagram celebrity wants to share a healthy breakfast with you. You may even feel pleased to see that your celebrity took the time to post such a personal picture, until you read further:

Thank you, XYZ Company, for your healthy breakfast delivery. Love, XOXO.

While some people feel nonchalant about promoted products, some followers may feel betrayed to know that their celebrities are only using their accounts to earn money. You feel tired of seeing these sponsored posts. Are there any legal actions against the Instagram celebrity to melt your deceived heart?

What’s so disturbing and tricky about Instagram influencer marketing is that users cannot always detect whether they are being exposed to digital advertising.

According to Instagram’s internal data, approximately 130 million accounts tap on shopping posts every month. There are a plethora of guides on digital marketing for rising influencers, and one of the most frequently noted tips is to advertise a product through storytelling. The main theme is to advertise as naturally as possible to make consumers feel engaged—and subsequently, have them make a purchase.

In Jianming Jyu v. Ruhn Holdings Ltd., the court held that social media has become so influential that being a social media influencer is now recognized as a profession. The court defined social media influencers as “individuals who create content on social media platforms such as Facebook, YouTube, Tik Tok, and Instagram with the hope of garnering a large public following [and] who are paid to promote, market and advertise products and services to their fans and followers.” Id.

Take this as another example: your cherished, ever-so-harmless Instagram mega-celebrity wore a beautiful Gucci belt. The celebrity mentioned that the same belt was available on Amazon, which was on sale for less than a quarter of the actual price at Gucci. You immediately purchased the belt, thanking your celebrity and yourself for following the celebrity. Upon the belt’s arrival, you realized that the belt was conspicuously fake with the brand named Pucci. On November 12, 2020, Amazon sued 13 individuals and businesses (collectively, the “defendants”) for advertising, promoting, and facilitating the sale of counterfeit luxury goods on Amazon. The defendants used their Instagram and other social media accounts to promote their knockoff goods being sold on Amazon. Amazon stated that they are seeking damages and an injunction against the influencers to bar them from using Amazon. As of July 4, 2021, the case is still pending.

Okay, we get it. But that’s something Instagram celebrities have to resolve. What about us, the innocent lambs?

Are there any legal actions on digital marketing? Yes!

A digital advertising claim may be brought in state or federal court, or in an action brought by a federal administrative agency such as the Federal Trade Commission (FTC). Generally, Instagram advertising is considered online advertising, which the FTC regulates. The FTC Act prohibits deceptive advertising in any medium. That is, “advertising must tell the truth and not mislead consumers.” A claim can be misleading if relevant information is left out or if the claim implies something that is not true. So, if an influencer promotes a protein bar that claims to have 20 grams of protein when it actually has 10 grams, the claim is misleading.

Furthermore, the FTC Act states that all advertising claims must be “substantiated,” primarily when they concern “health, safety, or performance.” If the influencer quoted the protein bar company, which stated that there was research that their protein bar lowered blood pressure, the FTC Act requires a certain level of support for that claim. Thus, online influencers are liable for the products they endorse on social media platforms.

Wait, there is one more. Due to the growing number of fraudulent activities on Instagram, the FTC has released guidance targeted at Instagram influencers. Under 16 CFR § 255.5, the FTC requires that an influencer “clearly and conspicuously disclose either the payment or promise of compensation prior to and in exchange for the endorsement or the fact that the endorser knew or had reason to know or to believe that if the endorsement favored the advertised product some benefit . . . would be extended to the endorser.” In sum, an influencer must disclose that the post is sponsored. The FTC has noted that hashtags like “partner,” “sp,” and “thanks [Brand]” are not considered adequate disclosures. Otherwise, it is a violation subject to penalty.

Simply putting a hashtag “ad” is not enough

The marketing firm Mediakix issued a report on top celebrities’ Federal Trade Commission compliance for sponsored posts and found that 93% of the top Instagram endorsements did not meet the FTC’s guidelines.

Going back to the oatmeal example, using the hashtag “#Thanks XYZ” is not sufficient to show that the post is sponsored, and the celebrity is subject to penalty.

As a rule of thumb, all Instagram sponsorships must be disclosed no matter what, and the disclosures must be clear about the sponsorship. Playing hide-and-seek with hashtags is never an option.

What is your opinion on digital marketing? If you were a legislator, what should the regulation on digital marketing be?

Off Campus Does Still Exist: The Supreme Court Decision That Shaped Students’ Free Speech

We currently live in a world centered around social media. I grew up in a generation where social media apps like Facebook, Snapchat, and Instagram were just becoming popular. I remember a time when Facebook was limited to college students, and we did not communicate back and forth with pictures that simply disappear. Currently, many students across the country use social media sites as a way to express themselves, but when does that expression go too far? Is it legal to bash other students on social media? What about teachers, after receiving a bad test score? Does it matter who sees the post or where it was written? What if the post disappears after a few seconds? These are all questions that in the past we had no answer to. Thankfully, in the past few weeks the Supreme Court has guided us on how to answer these important questions. In Mahanoy Area School District v. B.L., the Supreme Court decided how far a student’s right to free speech extends and how much control a school district has in restricting a student’s off-campus speech.

The question presented in Mahanoy Area School District v. B.L. was whether a public school has the authority to discipline a student over something they posted on social media while off campus. The student in this case was a girl named Levy. Levy was a sophomore in the Mahanoy Area School District. Levy was hoping to make the varsity cheerleading team that year, but unfortunately, she did not. She was very upset when she found out a freshman got the position instead and decided to express her anger about the decision on social media. Levy was in town with her friend at a local convenience store when she sent “F- School, F- Softball, F- Cheerleading, F Everything” to her list of friends on Snapchat, in addition to posting it to her Snapchat story. One of those friends screenshotted the post and sent it to the cheerleading coach. The school district investigated the post, and Levy was suspended from cheerleading for one year. Levy and her parents were extremely upset with this decision, and the result was a lawsuit that would shape students’ right to free speech for a long time.

In the lawsuit, Levy and her parents claimed that Levy’s cheerleading suspension violated her First Amendment right to free speech. They sued the Mahanoy Area School District under 42 U.S.C. § 1983, claiming that (1) her suspension from the team violated the First Amendment; (2) the school and team rules were overbroad and viewpoint discriminatory; and (3) those rules were unconstitutionally vague. The district court granted summary judgment in favor of Levy, stating that the school had violated her First Amendment rights. The U.S. Court of Appeals for the Third Circuit affirmed the district court’s decision. The Mahanoy School District petitioned for a writ of certiorari, and the case was finally heard by the Supreme Court.

The Mahanoy School District argued that the Court’s earlier ruling in Tinker v. Des Moines Independent Community School District acknowledges that public schools do not possess absolute authority over students and that students possess First Amendment speech protections at school so long as the students’ expression does not become substantially disruptive to the proper functioning of the school. Mahanoy emphasized that the Court intended for Tinker to extend beyond the schoolhouse gates and cover not just on-campus speech, but any type of speech likely to result in on-campus harm. Levy countered by arguing that the ruling in Tinker only applies to speech on school grounds.

In an 8-1 decision, the Court ruled against Mahanoy. The Supreme Court held that the Mahanoy School District violated Levy’s First Amendment rights by punishing her for posting a vulgar story on her Snapchat while off campus. The Court reasoned that the speech did not amount to severe bullying, nor was it substantially disruptive to the school itself. The Court also noted that the post was visible only to her friends list on Snapchat and would disappear within 24 hours. It is not the school’s job to act as a parent, but it is its job to make sure actions off campus will not result in danger to the school. The Supreme Court also stated that although the student’s expression was unfavorable, failing to protect the student’s opinions would limit students’ ability to think for themselves.

It is remarkably interesting to think about how the minor facts of this case determined the ruling. What if the message had been posted on Facebook? One of the factors that helped the Court make its decision was that the story was only visible to about 200 of her friends on Snapchat and would disappear within a day. One can assume that if Levy had made this a Facebook status visible to all, with no posting time frame, the Court could have ruled very differently. Another major factor was where the Snapchat post was uploaded. Based on the Tinker ruling, if Levy had posted this on school grounds, the Mahanoy School District could have had the authority to discipline her for the post.

Technology is advancing each day, and I am sure that in the future, as more social media platforms emerge, the Court will have to set new precedent. I believe that the Supreme Court made the right decision in this case. I feel that speech which is detrimental to another individual should be monitored, whether it is off-campus or on-campus speech, regardless of the platform on which it is posted. In Levy’s case, no names were listed; she was expressing frustration over not making a team. I do believe that this speech was vulgar, but I do not believe that the school, or any other students, suffered severe detriment from this post.

If you were serving as a Justice on the Supreme Court, would you rule against the Mahanoy School District? Do you believe it matters which platform the speech is posted on? What about the location from which it was posted?

Advertising in the Cloud

Thanks to social media, advertising to a broad range of people across physical and man-made borders has never been easier. Social media has transformed how people and businesses interact throughout the world. In just a few moments, a marketer can create a post advertising their product halfway across the world and almost everywhere in between. Not only that, but Susan, a charming cat lady in west London, can send her friend Linda, who’s visiting her son in Costa Rica, an advertisement she saw for sunglasses she thinks Linda might like. The data collected by social media sites allows marketers to target specific groups of people with their advertisements. For example, if Susan were part of a few Facebook cat groups, she would undoubtedly receive more cat tower or cat toy advertisements than the average person.

 

Advertising on social media also allows local stores or venues to advertise to their local communities, targeting groups of people in the local area. New jobs in this area are being created: young entrepreneurs are selling their social media skills to help small business owners create an online presence. Social media has also transformed the way stores advertise to people; no longer must stores rely solely on a posterboard or a scripted advertisement. Individuals with a large enough following on social media are sought out by companies to “review” or test their products for free.

Social media has transformed and expanded the marketplace exponentially. Who we can reach in the world, who we can market to and sell to has expanded beyond physical barriers. With these changes, and newfound capabilities through technology, comes a new legal frontier.

 Today, most major brands and companies have their own social media account. Building a store’s “online presence” and promoting brand awareness has now become a priority for many marketing departments. According to Internet Advertising Revenue Report: Full Year 2019 Results & Q1 2020 Revenues, “The Interactive Advertising bureau, an industry trade association, and the research firm eMarketer estimate that U.S. social media advertising revenue was roughly $36 billion in 2019, making up approximately 30% of all digital advertising revenue,” they expect that it will increase to $43 billion in 2020.

The Pew Research Center estimated, “that in 2019, 72% of U.S. adults, or about 184 million U.S. adults, used at least one social media site, based on the results of a series of surveys.”

Companies and people are increasingly utilizing these tools. What are the legal implications?

This area of law is growing quickly. Advertisers can now reach their consumers directly and in an instant, marketing their products at comparable prices. Regulators such as the Federal Trade Commission (FTC) have expanded their enforcement actions in this area. Some examples of the rules in play:

  • The Securities and Exchange Commission’s Regulation Fair Disclosure addresses “the selective disclosure of information by publicly traded companies and other issuers, and the SEC has clarified that disseminating information through social media outlets like Facebook and Twitter is allowed so long as investors have been alerted about which social media will be used to disseminate such information.”
  • The National Labor Relations Act: “While crafting an effective social media policy regarding who can post for a company or what is acceptable content to post relating to the company is important, companies need to ensure that the policy is not overly broad or can be interpreted as limiting employees’ rights related to protected concerted activity.”
  • The FDA: “Even on social media platforms, businesses running promotions or advertising online have to be careful not to run afoul of FDA disclosure requirements.”

According to the ABA there are two basic principles in advertising law which apply to any media: 

  1. Advertisers must have a reasonable basis to substantiate claims made; and
  2.  If disclosure is required to prevent an ad from being misleading, such disclosure must appear in a clear and conspicuous manner.

Advertisements directed at children may be subject to more specific regulations under the Children’s Online Privacy Protection Act (COPPA), which gives parents control over the information collected online from their children and requires operators to obtain verifiable parental consent.

The Future Legality of Our Data

Data brokers are companies that collect information about you and sell that data to other companies or individuals. This information can include everything from family birthdays, addresses, contacts, jobs, and education to hobbies, interests, life events, and health conditions. Currently, data brokers are legal in most states. California and Vermont have enacted laws that require data brokers to register their operations in the state. Who owns your data? Should you? Should the sites you are creating the data on? Should companies be free to sell it? Will states take this issue in different directions? If so, what would the implications be for companies and sites trying to keep up?

Facebook’s market capitalization stands at $450 billion.

While there is uncertainty regarding this area of law, it is certain that it is new, expanding and will require much debate. 

According to Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media, “Collecting user data allows operators to offer different advertisements based on its potential relevance to different users.” The data collected by social media companies enables them to build complex strategies and sell advertising “space” targeting specific user groups to companies, organizations, and political campaigns (How Does Facebook Make Money). The capabilities here seem endless: “Social media operators place ad spaces in a marketplace that runs an instantaneous auction with advertisers that can place automated bids.” With the ever-expanding possibilities of social media comes a growing legal frontier.
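To make that auction mechanic more concrete, here is a minimal sketch of how one ad slot might be filled by an instantaneous auction over automated bids. This is only an illustration: the advertiser names, bid amounts, and the simple bid-times-relevance ranking are invented for the example, and real platforms use far more elaborate ranking, pricing, and quality systems than this.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str   # hypothetical advertiser name (not a real campaign)
    max_bid: float    # the most this advertiser will pay for the impression
    relevance: float  # 0..1 score for how well the ad matches this user

def run_auction(bids):
    """Fill one ad slot with a simplified second-price-style auction.

    Bids are ranked by max_bid * relevance (a stand-in for the
    quality-adjusted ranking real ad systems use); the winner pays
    just enough to stay ahead of the runner-up.
    """
    if not bids:
        return None
    ranked = sorted(bids, key=lambda b: b.max_bid * b.relevance, reverse=True)
    winner = ranked[0]
    if len(ranked) == 1:
        price = winner.max_bid
    else:
        runner_up = ranked[1]
        # Pay the minimum needed to outrank the runner-up, capped at the winner's max bid.
        price = min(winner.max_bid,
                    runner_up.max_bid * runner_up.relevance / winner.relevance + 0.01)
    return winner.advertiser, round(price, 2)

# Example: a user profiled as a cat lover sees bids weighted by relevance,
# so a highly relevant (if lower) bid can beat a bigger generic one.
bids = [
    Bid("CatTowerCo", max_bid=2.00, relevance=0.9),
    Bid("GenericShoes", max_bid=3.00, relevance=0.2),
    Bid("PetToyShop", max_bid=1.50, relevance=0.8),
]
print(run_auction(bids))  # -> ('CatTowerCo', 1.34)
```

The point of the sketch is the targeting described above: because profile data raises the relevance score for cat-related advertisers, their bids effectively outweigh larger but less relevant ones, which is why Susan’s feed fills with cat towers.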

Removing Content 

 Section 230, a provision of the 1996 Communications Decency Act, states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230). This act shields social media companies from liability for content posted by their users and allows them to remove lawful but objectionable posts.

One legal issue that has been arising here is that advertisements are being taken down by content-monitoring algorithms. According to a Congressional Research Service report, during the COVID-19 pandemic social media companies relied more heavily on automated systems to monitor content. These systems could review large volumes of content at a time; however, they mistakenly removed some content. “Facebook’s automated systems have reportedly removed ads from small businesses, mistakenly identifying them as content that violates its policies and causing the business to lose money during the appeals process” (Facebook’s AI Mistakenly Bans Ads for Struggling Businesses). This has affected a wide range of small businesses, according to Facebook’s community standards transparency enforcement report. According to that same report, “In 2019, Facebook restored 23% of the 76 million appeals it received, and restored an additional 284 million pieces of content without an appeal—about 2% of the content that it took action on for violating its policies.”

 

Cancel Culture… The Biggest Misconception of the 21st Century

Cancel Culture refers to the popular practice of withdrawing support for (canceling) public figures and companies after they have done or said something considered objectionable or offensive.

Being held accountable isn’t new.

If a public figure has done or said something offensive to me, why can’t I express my displeasure or discontinue my support for them? Cancel culture is just accountability culture. Words have consequences, and accountability is one of them. However, this is nothing new. We are judged by what we say in our professional and personal lives. For example, whether we like it or not, when we’re on a job hunt we are held accountable for what we say or may have said in the past. According to Sandeep Rathore’s article “90% of Employers Consider an Applicant’s Social Media Activity During Hiring Process” (May 5, 2020), employers believe that social media is important for assessing job candidates. The article explains that these employers search your social media for certain red flags: anything that can be considered hate speech, illegal or illicit content, negative comments about previous jobs or clients, threats to people or past employers, and confidential or sensitive information about people or previous employers. It seems a prospective employer can cancel you for a job over things you may have done or said in the past. Sound familiar?

Have you ever been on a first date? Has your date ever said something so objectionable or offensive that you just cancelled them after that first date? I’m sure it has happened to some people. This is just another example of people being held accountable for what they say.

Most public figures who are offended by cancel culture have a feeling of entitlement. They feel they have the right to say anything, even if it’s offensive and hurtful, and bear no accountability. In “Cancel Culture Is Not Real, at Least Not in the Way People Believe It Is” (November 19, 2019), Sarah Hagi explained that cancel culture has “turned into a catch-all for when people in power face consequences for their actions or receive any type of criticism, something that they’re not used to.”

What harm is Cancel Culture causing?

Many cancel culture critics say cancel culture is limiting free speech. This I don’t get. The very essence of cancel culture is free speech. Public figures have the right to say what they want, and the public has the right to express disapproval and displeasure with what they said. Sometimes this comes in the form of boycotting, blogging, social media posting, etc. Public figures who feel that they have been cancelled might have bruised egos, be embarrassed, or see their careers take a small hit, but that comes as a consequence of free speech. A public figure losing fans, customers, or approval in the public eye is not an infringement of their rights. It’s just the opposite: it’s members of the public exercising their free speech. They have the right to be a fan of whom they want, a customer of whom they want, and to show approval for whom they want. Lastly, cancel culture can lead to open dialogue, but rarely do we see the person on the receiving end of a call-out wanting to engage in open dialogue with the people who are calling them out.

No public figures are actually getting cancelled.

According to AJ Willingham’s “It’s Time to Cancel This Talk of Cancel Culture” (March 7, 2021), “people who are allegedly cancelled still prevail in the end.” The article gives the example of Dr. Seuss, who was supposedly cancelled due to racist depictions in his books, yet his book sales actually went up. Hip-hop rapper Tory Lanez was supposedly cancelled for allegedly shooting female rapper Megan Thee Stallion in the foot. Instead of being cancelled, he dropped an album describing what happened the night of the shooting, and its sales skyrocketed. There are numerous examples showing that people are not really being cancelled, but are instead simply being called out for their objectionable or offensive behavior.

Who are the real victims here?

In “It’s Time to Cancel This Talk of Cancel Culture” (March 7, 2021), AJ Willingham states, “there are real problems that exist…. to know the difference look at the people who actually suffer when these cancel culture wars play out. There are men and women who allege wrong doing at the risk of their own career. Those are the real victims.” This is a problem that needs to be identified in the cancel culture debate. Too many people are prioritizing the feelings of the person being called out rather than the person being oppressed. In “You Need to Calm Down: You’re Getting Called Out, Not Cancelled” (September 3, 2020), Jacqui Higgins-Dailey explains, “When someone of a marginalized group says they are being harmed, we (the dominant group) say the harm wasn’t our intent. But impact and intent are not the same. When a person doesn’t consider the impact their beliefs, thoughts, words and actions have on a marginalized group, they continue to perpetuate the silencing of that group. Call-out culture is a tool. Ending call-out culture silences marginalized groups who have been censored far too long. The danger of cancel culture is refusing to take criticism. That is stifling debate. That is digging into a narrow world view.”
