I Knew I Smelled a Rat! How Derivative Works on Social Media can “Cook Up” Infringement Lawsuits

 

If you have spent more than 60 seconds scrolling on social media, you have undoubtedly been exposed to short clips or “reels” that reference pop culture elements that may be protected intellectual property. While seemingly harmless, it is possible that the clips you see on various platforms are infringing on another’s copyrighted work. Oh Rats!

What Does Copyright Law Tell Us?

Copyright protection, which is codified in 17 U.S.C. §102, extends to “original works of authorship fixed in any tangible medium of expression.” It refers to your right, as the original creator, to make copies of, control, and reproduce your own original content. This applies to any created work that is reduced to a tangible medium. Examples of copyrightable material include, but are not limited to, literary works, musical works, dramatic works, motion pictures, and sound recordings.

Additionally, one of the rights associated with a copyright holder is the right to make derivative works from your original work. Codified in 17 U.S.C. §101, a derivative work is “a work based upon one or more preexisting works, such as a translation, musical arrangement, dramatization, fictionalization, motion picture version, sound recording, art reproduction, abridgment, condensation, or any other form in which a work may be recast, transformed, or adapted. A work consisting of editorial revisions, annotations, elaborations, or other modifications which, as a whole, represent an original work of authorship, is a ‘derivative work’.” This means that the copyright owner of the original work also reserves the right to make derivative works. Therefore, the owner of the copyright to the original work may bring a lawsuit against someone who creates a derivative work without permission.

Derivative Works: A Recipe for Disaster!

The issue of regulating derivative works has only intensified with the growth of cyberspace and “fandoms.” A fandom is a community or subculture of fans built up around one specific piece of pop culture, whose members share a mutual bond over their enthusiasm for the source material. Fandoms are also composed of fans who actively participate and engage with the source material through creative works, which social media makes easier than ever. Historically, fan works have been deemed legal under the fair use doctrine, which provides that copyrighted material may be used without permission for purposes such as scholarship, education, parody, or news reporting, so long as the copyrighted work is used only to the extent necessary. Fair use can also apply to a derivative work that significantly transforms the original copyrighted work, adding a new expression, meaning, or message. So, that means that “anyone can cook,” right? …Well, not exactly! The new, derivative work cannot have an economic impact on the original copyright holder; that is, profits cannot be “diverted to the person making the derivative work” when the revenue could or should have gone to the original copyright holder.

With the growth of “sharing” platforms such as TikTok, Instagram, and YouTube, it has become far easier to share or distribute intellectual property via monetized accounts. Specifically, due to the large amount of content consumed daily on TikTok, its users are incentivized with the ability to go “viral” instantaneously, if not overnight, as well as the ability to earn money through the platform’s “Creator Fund.” The Creator Fund is paid for by TikTok’s ads program, and it allows creators to get paid based on the number of views they receive. This creates a problem: now that users are getting paid for their posts, the line between fair use and copyright violation is blurred. The Copyright Act fails to address the monetization of social media accounts and how it fits into a fair use analysis.

Ratatouille the Musical: Anyone Can Cook?

Back in 2020, TikTok users Blake Rouse and Emily Jacobson were the first of many to release songs based on Disney-Pixar’s 2007 film, Ratatouille. What started out as a fun trend for users to participate in turned into a full-fledged viral project and, eventually, a tangible creation. Big-name Broadway stars including André De Shields, Wayne Brady, Adam Lambert, Mary Testa, Kevin Chamberlin, Priscilla Lopez, and Tituss Burgess all participated in the trend, and on December 9, 2020, it was announced that Ratatouille was coming to Broadway via a virtual benefit concert.

The musical premiered as a one-night livestream event on January 1, 2021, with all profits donated to the Entertainment Community Fund (formerly the Actors Fund), a non-profit organization that supports performers and workers in the arts and entertainment industry. The stream reached over 138 countries and raised over $1.5 million for the charity. Due to its success, an encore production was streamed on TikTok ten days later, raising an additional $500,000 for the fund ($2 million in total). While this is inarguably a derivative work, the question of fair use was never addressed because Disney’s lawyers were smart enough not to sue. In fact, Disney embraced the Ratatouille musical, releasing a statement to The Verge:

Although we do not have development plans for the title, we love when our fans engage with Disney stories. We applaud and thank all of the online theatre makers for helping to benefit The Actors Fund in this unprecedented time of need.

Normally, Disney is EXTREMELY strict and protective of its intellectual property. However, this small change of heart has opened a door for other TikTok creators and fandom members to create unauthorized derivative works based on others’ copyrighted material.

Too Many Cooks in the Kitchen!

Take the “Unofficial Bridgerton Musical,” for example. In July of 2022, Netflix sued content creators Abigail Barlow and Emily Bear for their unauthorized use of Netflix’s original series Bridgerton, which is based on the Bridgerton book series by Julia Quinn. Back in 2020, Barlow and Bear began writing and uploading songs based on the series to TikTok for fun. Needless to say, the videos went viral, prompting Barlow and Bear to release an entire musical soundtrack based on Bridgerton. They even won the 2022 Grammy Award for Best Musical Theater Album.

On July 26, 2022, Barlow and Bear staged a sold-out performance at the Kennedy Center in Washington, D.C., with tickets ranging from $29 to $149, and also sold merchandise bearing the “Bridgerton” trademark. Netflix then sued, demanding an end to these for-profit performances. Interestingly enough, Netflix was allegedly initially on board with Barlow and Bear’s project. However, although Barlow and Bear’s conduct began on social media, the complaint alleges they “stretched fanfiction way past its breaking point.” According to the complaint, Netflix “offered Barlow & Bear a license that would allow them to proceed with their scheduled live performances at the Kennedy Center and Royal Albert Hall, continue distributing their album, and perform their Bridgerton-inspired songs live as part of larger programs going forward,” which Barlow and Bear refused. Netflix also alleged that the musical interfered with its own derivative work, the “Bridgerton Experience,” an in-person pop-up event that has been offered in several cities.

Unlike Ratatouille: The Musical, which was created to raise money for a non-profit organization benefiting actors during the COVID-19 pandemic, the Unofficial Bridgerton Musical lined the pockets of its creators, Barlow and Bear, in an effort to build an international brand for themselves. Netflix ended up privately settling the lawsuit in September of 2022.

Has the Aftermath Left a Bad Taste in IP Holders’ Mouths?

The stage has been set, and courts have yet to determine exactly how fan-made derivative works fare in a fair use analysis. New technologies only exacerbate this issue through the monetization of social media accounts and “viral” trends. At a certain point, no matter how much you want to root for the “little guy,” you have to admit when they’ve gone too far. Average “fan art” does not derive significant profits from the original work, and it is very rare for a large company to take legal action against a small content creator unless the infringement is so blatant and explicit that there is no other choice. IP law exists to protect and enforce the rights of the creators and owners who have worked hard to secure those rights. Allowing content creators to infringe in the name of “fair use” poses a dangerous threat to intellectual property law and those it serves to protect.

 

Update Required: An Analysis of the Conflict Between Copyright Holders and Social Media Users

Opening

For anyone who is as chronically online as yours truly, we have all, in one way or another, seen our favorite social media influencers, artists, commentators, and content creators complain about their problems with the current US intellectual property (IP) system. Whether their posts are deleted without explanation or portions of their video files are muted, the combination of factors leading to copyright issues on social media is endless. This, in turn, has a markedly negative impact on free and fair expression on the internet, especially within the context of our contemporary online culture. For better or worse, interaction in society today is intertwined with the services of social media sites. Conflict arises when the interests of copyright holders clash with this reality. Those holders are empowered by byzantine and unrealistic laws that hamper our ability to exist online as freely as we do in real life. While they do have legitimate and fundamental rights that need to be protected, those rights must be balanced against desperately needed reform. People’s interaction with society and culture must not be hampered, for that is one of the foundations of a healthy and thriving society. To understand this, I venture to analyze the current legal infrastructure we find ourselves in.

Current Relevant Law

The current controlling laws for copyright issues on social media are the Copyright Act of 1976 and the Digital Millennium Copyright Act (DMCA). The DMCA is most relevant to our analysis; it gives copyright holders relatively unrestrained power to demand removal of their property from the internet and to punish those using illegal methods to get ahold of it. This broad law, of course, impacted social media sites. Title II of the law added 17 U.S. Code § 512 to the Copyright Act of 1976, creating several safe harbor provisions for online service providers (OSPs), such as social media sites, when hosting content posted by third parties. The most relevant of these safe harbors is 17 U.S. Code § 512(c), which states that an OSP cannot be liable for monetary damages if it meets several requirements and provides copyright holders a quick and easy way to claim their property. The mechanism, known as a “notice and takedown” procedure, varies by social media service and is outlined in each service’s terms and conditions (YouTube, Twitter, Instagram, TikTok, Facebook/Meta). Regardless, they all have a complaint form or application that follows the rules of the DMCA and usually will rapidly strike objectionable social media posts by users. 17 U.S. Code § 512(g) does provide the user some leeway with an appeal process, and § 512(f) imposes liability on those who send unjustifiable takedowns. Nevertheless, a perfect balance of rights is not achieved.

The doctrine of fair use, codified as 17 U.S. Code § 107 via the Copyright Act of 1976, also plays a massive role here. It established a legal pathway for the use of copyrighted material for “purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research” without having to acquire rights to said IP from the owner. This legal safety valve has been a blessing for social media users, especially with recent victories like Hosseinzadeh v. Klein, which protected reaction content from DMCA takedowns. Cases like Lenz v. Universal Music Corp. further established that fair use must be considered by copyright holders when preparing takedowns. Nevertheless, copyright holders still fail to consider those rights, as sites are quick to react to DMCA complaints. Furthermore, the flawed reporting systems of social media sites invite abuse by unscrupulous actors faking ownership. On top of that, such legal actions can be psychologically and financially intimidating, especially when facing off against a major IP holder, adding to the unbalanced power dynamic between the holder and the poster.

The Telecommunications Act of 1996, which focuses primarily on cellular and landline carriers, is also particularly relevant to social media companies in this conflict. At the time of its passing, the internet was still in its infancy; thus, it does not incorporate an understanding of the current cultural paradigm we find ourselves in. Specifically, the contentious Section 230 of the Communications Decency Act (Title V of the 1996 Act) works against social media companies here, incorporating a broad and draconian rule on copyright infringement. 47 U.S. Code § 230(e)(2) states in no uncertain terms that “nothing in this section shall be construed to limit or expand any law pertaining to intellectual property.” This has been interpreted and restated in Perfect 10, Inc. v. CCBill LLC to mean that Section 230 does not shield such companies from intellectual property claims, leaving them exposed to liability for user copyright infringement. This gap in the protective armor of Section 230 is a great concern to such companies; therefore, they react strongly to such issues.

What is To Be Done?

Arguably, fixing the issues around copyright on social media is far beyond the capacity of current legal mechanisms. With billions of posts each day across various sites, policing by copyright holders and sites alike is beyond reason. It will take serious reform in the socio-cultural, technological, and legal arenas before a true balance of liberty and justice can be established. Perhaps we can start with an understanding by copyright holders not to overreact when their property is posted online. Popularity is key to success in business, so shouldn’t they value the free marketing that comes with their copyrighted property being shared honestly within the cultural sphere of social media? Social media sites can also expand their DMCA case management teams or create tools for users who are influencers or content creators to credit, and even share revenue with, the copyright holder. Finally, congressional action is desperately needed, as we have entered a new era that requires new laws. That being said, achieving a balance between the free exchange of ideas and creations and the rights of copyright holders must be the cornerstone of the government’s approach to socio-cultural expression on social media. That is the only way we can progress as an ever more online society.

 

Image: Freepik.com

Image by pikisuperstar: https://www.freepik.com/free-vector/flat-design-intellectual-property-concept-with-woman-laptop_10491685.htm

DANCE DANCE LITIGATION

When the tune of “Y.M.C.A.” by the Village People starts to play, no matter the time or place, the urge to raise your arms and dance is impossible to ignore. A wave of nostalgia and childlike happiness quickly fills the atmosphere, and as the chorus begins, you and (almost) everyone around you begin to dance the only way you know how: throwing your arms up in the air and forming the letters, duh! But what’s not so obvious is that the “Y.M.C.A.” dance, irrespective of its wild popularity and incorporation into major television and film productions since its release in 1978, is not copyrighted. The songwriters, artists, and producers have received and continue to receive the recognition, compensation, and title they deserve for their contributions to the song itself, but the inherent choreography remains unprotected. According to the Copyright Office (“the Office”), a dance “whereby a group of people spell out letters with their arms” is simply too basic to deserve copyright recognition; no matter how distinctive it may be, it is nonetheless a commonplace movement or gesture.

CONGRESS ‘GETS DOWN’

Choreographers, since the beginning of the entertainment industry, have never received the legal protections that producers, songwriters, and artists have. Although the Copyright Act of 1976 (the “Act”) officially recognizes choreography as a protected form of creative expression, to qualify as copyrightable, a choreographic work must meet the following elements: (1) it is an original work of authorship, (2) it is an expression as opposed to an idea, and (3) it is “fixed in any tangible medium of expression.” In addition, the Supreme Court has held that an individual may not bring a copyright infringement suit under the Act until the work has been registered with the Office. Although choreographic works were finally recognized as worthy of copyright protection, the application of copyright law to choreography since then has revealed a significant grey area in intellectual property law.

BUT IS IT JUST A SHIMMY OR A ‘CHOREOGRAPHIC WORK’?

When assessing what qualifies as a copyrightable choreographic work, the Office acknowledges that the dividing line between a simple routine and copyrightable choreography is a continuum rather than a bright line. The Office has also indicated certain types of works that, at the outset, may not be copyrighted: commonplace movements, individual dance moves or gestures, social dances, ordinary and athletic movements, and short dance routines.

Whether a particular dance qualifies as a choreographic work, or not, ultimately rests on the Office’s assessment of the following elements collectively:

(1) rhythmic movement in a defined space

(2) compositional arrangement

(3) musical or textual accompaniment

(4) dramatic content

(5) presentation before an audience

(6) execution by skilled performers

DANCING OUR WAY TO THE COURTHOUSE

Litigation surrounding the video game Fortnite, released by Epic Games Inc., reveals just how large that grey area has grown. Although free to play, Fortnite derives its revenue from in-game purchases, including dance “emotes” that have a player’s avatar perform a dance routine.

In 2019, Alfonso Ribeiro, who played the character ‘Carlton Banks’ on the TV show The Fresh Prince of Bel-Air, sought justice for Epic Games’ improper use of the Carlton as a dance emote in Fortnite, but his claims were rejected by both the court and the Office. Following the direction of the Supreme Court, the court dismissed Mr. Ribeiro’s claim for failing to register and receive final registration of his work from the Copyright Office. Registration is deemed “made” only when “the Register has registered a copyright after examining a properly filed application.” In an attempt to salvage his claim, Mr. Ribeiro proceeded to the Office but nonetheless left empty-handed. In reviewing the application, the Office refused to grant Mr. Ribeiro a copyright because the Carlton did not rise to the level of choreography; it was a simple routine made up of just three dance steps. Likewise, cases brought against Epic Games by rapper 2 Milly and the Backpack Kid, alleging copyright infringement for the use of their choreographic works the “Milly Rock” and “the Floss” as emotes in Fortnite, were also dismissed for failure to register with the Office.

So, since the cases were all dismissed for lack of a valid registration with the Office, having a valid registration must be the golden ticket to defending your infringement claim, right? Not quite.

Earlier this year, in March, professional dance choreographer Kyle Hanagami (“Hanagami”) filed suit against Epic Games for using dance movements from his copyrighted routine to Charlie Puth’s song “How Long.” Hanagami, unlike his predecessors above, had secured a copyright registration for his choreographic work. Holding that golden ticket, Hanagami argued that Epic Games did not credit him or seek his consent to use, display, reproduce, sell, or create derivative works based on his registered choreography.

Even though Hanagami did secure his copyright before bringing a claim under the Act, the court yet again dismissed the case and sided with Epic Games. The court stated that Hanagami’s steps are potentially protected only when combined with the other elements that make up his copyrighted work; because the specific dance steps on their own were not entitled to copyright protection, Epic Games technically did not infringe. When the works were evaluated as a whole, the court decided they were not substantially similar: “[w]hereas Hanagami’s video features human performers in a dance studio in the physical world performing for a YouTube audience, Epic Games’ work features animated characters performing for an in-game audience in a virtual world.”

And as if the grey area couldn’t get any greyer… it indeed does.

DANCING IN CIRCLES, YET AGAIN

The outcome of all this dance litigation alludes to the central need for choreography, on its own, to be recognized and protected as a separate work. Although securing a copyright in a choreographic work will get you in the door of the courthouse, there is no guarantee that what you’ve copyrighted will actually be protected. Thus, it is crucial that the plight of choreographers be truly recognized. Inconsistent outcomes and unclear guidelines continue to aggravate the underlying issue of allowing choreographers to pursue the copyright protection they deserve for their works. Copyrighting successful dance routines would help ensure dancers’ ability to monetize and profit from their work, but the murky waters that complicate registration and the unpredictability of outcomes in court will remain barriers until we clear the grey area.

All’s Fair in Love and Romance Scams

In 2014, 81-year-old Glenda thought she had met the love of her life. The problem? Their entire relationship was virtual. The individual on the other end of Glenda’s computer sold her a fictional narrative: he was a United States citizen working in Nigeria. Glenda and this man developed their virtual “relationship” without ever meeting in person. After some time, the man began asking Glenda for money to help his business and to get back to the United States. Glenda, wanting to help her love, immediately sent the money. The requests became more frequent. When the small money transfers weren’t enough, he asked her to open personal and business bank accounts to transfer funds between the United States and overseas.

Despite numerous warnings from the FBI, local police, and banks to stop, Glenda still believed the man she met online loved her and needed help. She continued illegally transferring money overseas for the next 5 years and would eventually plead guilty to two federal felonies. Glenda was a victim of a Romance Scam and paid the ultimate price.

Unfortunately, Glenda’s situation, while extreme, is far from a rare occurrence today. In 2021 alone, the Federal Trade Commission (FTC) saw consumers report $547 million in losses due to romance scams, a concerning 80% more than those reported in 2020. In total, the FTC has seen an astronomical $1.3 billion in cumulative romance scam losses reported in the last five years. And these are just the scams that were reported to the FTC. Many victims go without reporting due to the shame and stigma that comes with falling prey to an online scam.

Romance scams, often referred to as “sweetheart scams,” occur when an individual (or group of individuals) fabricates an online persona and targets vulnerable persons for money.

These scammers build a fake relationship with the victim through messages, establishing empathy and trust over a short period of time. After the relationship is built, the scammer suddenly succumbs to financial and/or medical hardships. The initial request for money is typically small, and the victim may even be repaid the first time to negate any doubts that this is a scam; after the second, third, and fourth requests, the victim is likely to never see their funds (or their “love”) again.

The elderly population is especially vulnerable to online scams. Seniors tend to be more trusting than younger generations and usually have significant financial assets (a home, retirement savings, government benefits). Cognitive decline and unfamiliarity with technology also leave this group at a disadvantage in defending themselves or recognizing when someone is feigning friendship rather than offering a genuine connection. COVID-19 has made the elderly even more vulnerable in recent years: many were forced into isolation and could only stay in contact with family and loved ones through internet devices, opening up a whole new world. Unmonitored access to the internet, coupled with increased loneliness, made elders the perfect target for romance scams.

Are dating sites liable for promoting fraudsters to unsuspecting victims? The short answer is no.

Under 47 USC Section 230, interactive computer service providers (a.k.a. social media and dating sites) are immune from liability for claims arising out of the content that third parties publish to their sites.

In 2022, the Federal Trade Commission’s claims against Match Group Inc. (owner and operator of Match.com, Tinder, PlentyofFish, OkCupid, Hinge, and several other dating sites) asserted that:

  1. Match.com misrepresented to consumers that profiles were interested in “establishing a dating relationship”, but on numerous instances, these profiles were set up by individuals with the intent to defraud; and
  2. Match “exposed consumers to the risk of fraud” by allowing accounts that had been reported or flagged for fraud, and were under review, to continue exchanging communications with other subscribers.

The Texas Northern District Court dismissed both counts, holding that under Section 230, Match was entitled to immunity from a third party’s fraudulent content and actions. It seems that if a victim is looking for recovery, they won’t find it in the courts or through the dating sites themselves.

This looks like a job for the FBI…

Or maybe not.

The Federal Bureau of Investigation engages its Internet Crime Complaint Center (IC3), Recovery Asset Team (RAT), and Financial Crimes Enforcement Network (FinCEN) to recover monetary losses from internet scams. Unfortunately, the FBI typically takes on international cases involving single transfers over $50,000 that fall within a 72-hour reporting window. Most romance scammers request money from elderly victims in smaller amounts over an extended period (the median loss for romance fraud victims in their 70s is $6,450). Due to this high threshold and short reporting window, the majority of romance scam victims never report their losses or see their money again.

In reality…YOU Are Your Best Defense.

Prevent

Do not send money to someone you have never met in person.

Advocate

Check in on your loved ones who are living alone. They may be less inclined to turn to virtual relationships and send money if they have real-life connections.

Check with banks and financial institutions about regular check-in schedules for elderly clients, or talk with your loved ones about helping to monitor their accounts if you notice they are in cognitive decline.

Report

If you or a loved one has been the victim of a romance scam: 1) contact your financial institution immediately; 2) report the fraud to the dating site to try to shut down the fraudster’s account; and 3) report the fraud to the Federal Trade Commission.

Miracles Can Be Misleading

Want to lose 20 pounds in 4 days? Try this *insert any miracle weight-loss product* and you’ll be skinny in no time!

Miracle weight-loss products (MWLP) are dietary supplements that either suppress appetite or forcefully induce weight loss. These products are not approved or indicated by regulators as weight-loss treatments. Social media users are continuously bombarded with the newest weight-loss products via targeted advertisements and endorsements from their favorite influencers. Users are force-fed false promises of achieving the picture-perfect body while companies profit off their delusions. Influencer marketing has grown significantly as social media becomes more and more prevalent: 86 percent of women use social media for purchasing advice, and 70 percent of teens trust influencers more than traditional celebrities. If you’re on social media, then you’ve seen your favorite influencer endorsing some form of MWLP, and you probably thought to yourself, “well, if Kylie Jenner is using it, it must be legit.”

Advertisements for MWLP promote an unrealistic and oversexualized body image. This trend of selling skinny has detrimental consequences, often leading to body image issues such as body dysmorphia and various eating disorders. In 2011, the Florida House Experience conducted a study of 1,000 men and women. The study revealed that 87 percent of the women and 65 percent of the men compared their bodies to those they saw on social media. Of the 1,000 subjects, 50 percent of the women and 37 percent of the men viewed their bodies unfavorably in comparison. In 2019, Project Know, a nonprofit organization that studies addictive behaviors, conducted a study suggesting that social media can worsen genetic and psychological predispositions to eating disorders.

Who Is In Charge?

The collateral damage that MWLP advertisements inflict on social media users’ body image is a societal concern. As the world becomes more digital, ever more creators of MWLP will rely on influencers to generate revenue for their products, but who is in charge of monitoring the truthfulness of these advertisements?

In the United States, the Federal Trade Commission (FTC) and the Food and Drug Administration (FDA) are the two federal regulators responsible for promulgating regulations relating to dietary supplements and other MWLP. While the FDA is responsible for the labeling of supplements, it lacks jurisdiction over advertising. Therefore, the FTC is primarily responsible for advertisements that promote supplements and over-the-counter drugs.

The FTC regulates MWLP advertising through the Federal Trade Commission Act of 1914 (the “Act”). Sections 5 and 12 of the Act collectively prohibit “false advertising” and “deceptive acts or practices” in the marketing and sale of consumer products, and grant the FTC authority to take action against offending companies. An advertisement violates the Act when it is false, misleading, or unsubstantiated. An advertisement is false or misleading when it contains an “objective, material representation that is likely to deceive consumers acting reasonably under the circumstances.” An advertisement is unsubstantiated when it lacks “a reasonable basis for its contained representation.” With the rise of influencer marketing, the Act also requires influencers to clearly disclose when they have a financial or other relationship with the product they are promoting.

Under the Act, the FTC has taken action against companies that falsely advertise MWLP. The FTC typically brings enforcement claims against companies by alleging that the advertiser’s claims lack substantiation. To determine the specific level and type of substantiation required, the FTC considers what are known as the “Pfizer factors,” established in In re Pfizer. These factors include:

    • The type and specificity of the claim made.
    • The type of product.
    • The possible consequences of a false claim.
    • The degree of reliance by consumers on the claims.
    • The type, and accessibility, of evidence adequate to form a reasonable basis for making the particular claims.

In 2014, the FTC applied the Pfizer factors when it brought an enforcement action seeking a permanent injunction against Sensa Products, LLC. Since 2008, Sensa had sold a powder weight loss product that could allegedly make an individual lose 30 pounds in six months without dieting or exercise. The company advertised the product via print, radio, endorsements, and online ads. The FTC claimed that Sensa’s marketing was false and deceptive because the company lacked evidence to support its health claims, i.e., losing 30 pounds in six months. The FTC also claimed that Sensa violated the Act by failing to disclose that its endorsers were given financial incentives for their customer testimonials. Ultimately, Sensa settled, and the FTC was granted the permanent injunction.

What Else Can We Do?

Currently, the FTC, acting under its authority under the Act, is the main legal recourse for removing these deceitful advertisements from social media. Unfortunately, social media platforms such as Facebook, Twitter, and Instagram cannot be held liable for the posts of other users. Under Section 230 of the Communications Decency Act, “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” That means social media platforms cannot be held responsible for misleading advertisements of MWLP, regardless of whether the advertisement comes through an influencer or the company’s own social media page, and regardless of the collateral consequences that these advertisements create.

However, there are other courses of action that social media users and platforms have taken to prevent these advertisements from poisoning the body images of users. Many social media influencers and celebrities have risen to the occasion to have MWLP advertisements removed. In 2018, Jameela Jamil, an actress starring on The Good Place, launched an Instagram account called I Weigh, which “encourages women to feel and look beyond the flesh on their bones.” Influencer activism has led Instagram and Facebook to block users under the age of 18 from viewing posts advertising certain weight loss products or other cosmetic procedures. While these are small steps in the right direction, more work certainly needs to be done.

The #Trademarkability of #Hashtags

The #hashtag is an important marketing tool that has revolutionized how companies conduct business. Essentially, hashtags serve to identify or facilitate a search for a keyword or topic of interest by typing a pound sign (#) along with a word or phrase (e.g., #OOTD or #Kony2012). Placing a hashtag at the beginning of a word or phrase on Twitter, Instagram, Facebook, TikTok, etc., turns the word or phrase into a hyperlink attaching it to other related posts, thus driving traffic to users’ sites. This is a great way to promote a product, service or campaign while simultaneously reducing marketing costs and increasing brand loyalty, customer engagement, and, of course, sales. But with the rise of this digital “sharing” tool comes a new wave of intellectual property challenges. Over the years, there has been increasing interest in including the hashtag in trademark applications.
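The indexing mechanic described above (a tag becomes a link that groups related posts) can be sketched in a few lines of Python. This is a hypothetical illustration, not any platform’s actual code; the function names and data structures are invented for clarity.

```python
import re
from collections import defaultdict

# Hypothetical sketch: a platform-style hashtag index, so that tapping a
# tag surfaces every post that used it (case-insensitively).
HASHTAG = re.compile(r"#(\w+)")

index = defaultdict(list)  # lowercased tag -> list of post ids

def publish(post_id, text):
    """Record each hashtag appearing in a new post."""
    for tag in HASHTAG.findall(text):
        index[tag.lower()].append(post_id)

def posts_for(tag):
    """Return the ids of all posts that used this tag, any capitalization."""
    return index[tag.lstrip("#").lower()]

publish(1, "Check my #OOTD!")
publish(2, "Another #ootd post with #sale")
print(posts_for("#OOTD"))  # [1, 2] — both posts share the tag
```

The point of the sketch is only that a hashtag is a search key, not a source identifier by itself, which is exactly the distinction the trademark analysis below turns on.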

#ToRegisterOrNotToRegister

According to the United States Patent and Trademark Office (USPTO), a term containing the hash symbol or the word “hashtag” MAY be registered as a trademark. The USPTO recognizes hashtags as registrable trademarks “only if [the mark] functions as an identifier of the source of the applicant’s goods or services.” Additionally, Section 1202.18 of the Trademark Manual of Examining Procedure (TMEP) further explains that “when examining a proposed mark containing the hash symbol, careful consideration should be given to the overall context of the mark, the placement of the hash symbol in the mark, the identified goods and services, and the specimen of use, if available. If the hash symbol immediately precedes numbers in a mark, or is used merely as the pound or number symbol in a mark, such marks should not necessarily be construed as hashtag marks. This determination should be made on a case-by-case basis.”

Like other forms of trademarks, one would seek registration of a hashtag in order to exclude others from using the mark when selling or offering the goods or services listed in the registration. More importantly, the existence of the trademark would serve to protect against consumer confusion. This is the same standard that is applied to other words, phrases, or symbols seeking trademark registration. The threshold question when considering whether to file a trademark application for a hashtag is whether the hashtag is a source identifier for goods or services, or whether it merely describes a particular topic, movement, or idea.

#BarsToRegistration

Merely affixing a hashtag to a mark does not automatically make it registrable. For example, in 2019, the Trademark Trial and Appeal Board (TTAB) denied trademark registration for #MAGICNUMBER108 because it did not function as a trademark for shirts and was therefore not a source identifier. Rather, the TTAB found that the social media evidence suggested the public sees the hashtag as a “widely used message to convey information about the Chicago Cubs baseball team,” namely, their 2016 World Series win after a 108-year drought. The TTAB went on to say that just because a mark is unique does not mean the public would perceive it as an indication of source. This further demonstrates the importance of a goods-source association for the mark.

Hashtags that would not function as trademarks are those simply relating to certain topics that are not associated with any goods or services. So, for example, cooking: #dinnersfortwo, #mealprep, or #healthylunches. These hashtags would likely be searched by users to find information relating to cooking or recipe ideas. When encountering these hashtags on social media, users would probably not link them to a specific brand or product. On the contrary, hashtags like #TheSaladLab or #ChefCuso would likely be linked to specific social media influencers who use that mark in connection with their goods and services and as such, could function as a trademark. Other examples of hashtags that would likely function as trademarks are brands themselves (#sephora, #prada, or #nike). Even slogans for popular brands would suffice (#justdoit, #americarunsondunkin, or #snapcracklepop).

#Infringement

What makes trademarked hashtags unique among trademarked material is that hashtags serve a purpose beyond identifying the source of goods: they are used to index key words on social media so that users can follow topics they are interested in. So, does using a trademarked hashtag in your social media post create a cause of action for trademark infringement? The answer is every lawyer’s favorite response: it depends. Sticking with the example above, and assuming #TheSaladLab is a registered trademark, referencing the tag in this blog post alone would likely not warrant a trademark infringement claim, but if I were to sell kitchen tools or recipe books with the tag #TheSaladLab, that might rise to the level of infringement. However, courts remain unclear about the enforceability of hashtagged marks. In 2013, a Mississippi district court stated in an order that “hashtagging a competitor’s name or product in social media posts could, in certain circumstances, deceive consumers.” The court never actually ruled on whether the use of the hashtag infringed the registered mark.

This is problematic because on one hand, regardless of whether there is a hashtag in front of the mark, the owner of a registered trademark is entitled to bring a cause of action for trademark infringement when someone else uses their mark in commerce without their permission in the same industry. On the other hand, when one uses a trademark with the “#” symbol in front of it for the purposes of sharing information on social media, they are simply complying with the norms of the internet. The goal is to strike a balance between protecting the rights of IP owners and also protecting the rights of users’ freedom of expression on social media.

While the courts are somewhat behind in dealing with infringement relating to hashtagged trademark material, for the time being, various social media platforms (Instagram, Facebook, Twitter, YouTube) have procedures in place that allow users to report misuse of trademark-protected material or other intellectual property-related concerns.

States are ready to challenge Section 230

On January 8, 2021, Twitter permanently suspended @realDonaldTrump. The decision followed an initial warning to the then-president and conformed to Twitter’s published standards as defined in its public interest framework. The day before, Facebook (now Meta) had restricted President Trump’s ability to post content on Facebook or Instagram. Both companies cited President Trump’s posts praising those who violently stormed the U.S. Capitol on January 6, 2021 in support of their decisions.

Members of the Texas and Florida legislatures, together with their governors, were seemingly enraged that these sites would silence President Trump’s voice. In response, each state quickly passed laws aiming to limit the power of social media sites. Although substantively different, the Texas and Florida laws share the same theory: both seek to punish social media sites for moderating conservative content that, the states argue, the platforms disproportionately silence, regardless of whether the posted content violates a site’s published standards.

Shortly after each law’s adoption, two tech advocacy groups, NetChoice and the Computer and Communications Industry Association, filed suits in federal district courts challenging the laws as violative of the First Amendment. Each case has made its way through the federal courts on procedural grounds: the Eleventh Circuit upheld a lower court’s preliminary injunction prohibiting Florida from enforcing its statute until the case is decided on the merits. In contrast, the Fifth Circuit overruled a lower court’s preliminary injunction. Texas appealed the Fifth Circuit’s ruling to the Supreme Court of the United States, which, by a vote of 5-4, reinstated the injunction. The Supreme Court’s order made clear that these cases are headed back to the Court on the merits.

Criminals Beware the Internet Is Here!

Social media has become a mainstay in our culture. We use it to communicate and interact socially with family and friends. Social media and the Internet allow us to share our whereabouts and latest experiences with just about everyone on the planet instantly, with the click of a button. Police departments understand this shift in culture and are using social media and advances in technology to their benefit. “Police are recognizing that a lot of present-day crimes are attached to social media. Even if the minuscule possibility existed that none of the persons involved were on social media, the crime would likely be discussed on social media by people who have become aware of it or the media organizations reporting it”.

Why Is Social Media the New Police Investigative Tool?

Why are police so successful fighting crime with social media? Because many of us are addicted to social media, and it has become our primary form of communication. That addiction has made it easier for police to catch criminals. Criminals tend to tell on themselves these days by simply not being able to stay off social media. We share confidential information with friends over social media under the false belief that what we say cannot be traced back to us. We even think that because we set our pages to private, our information cannot be retrieved. That is far from the truth, as Bronx criminal Melvin Colon found out the hard way. Police suspected Colon of crimes but lacked probable cause for a search. “Their solution: finding an informant who was friends with him on Facebook. There they gathered the bulk of the evidence needed for an indictment. Colon’s lawyers sought to use the Fourth Amendment to suppress that evidence, but the judge ruled that Colon’s legitimate expectation of privacy ended when he disseminated post to his friends. The court explained that Colons ‘friends’ were free to use the information however they wanted—including sharing it with the Government.” This illustrates that even information we think is private can still be accessed by police.

How Do Police Use Social Media as an Investigative Tool?

“Most commonly, an officer views publicly available posts by searching for an individual, group, hashtag, or another search vector. Depending on the platform and the search, it may yield all the content responsive to the query or only a portion. When seeking access to more than is publicly available, police may use an informant (such as a friend of the target) or create an undercover account by posing as a fellow activist or alluring stranger”. This allows officers to communicate directly with the target and see content posted by both the target and their contacts that might otherwise be inaccessible to the public. Police also use social media to catch criminals through sting operations. “A sting operation is designed to catch a person in the act of committing a crime. Stings usually include a law enforcement officer playing the part as accessory to a crime, perhaps as a drug dealer or a potential customer of prostitution. After the crime is committed, the suspect is quickly arrested”. Another way social media is used as an investigative tool is through location tracking. “Location tracking links text, pictures and video to an exact geographical location and is a great tool for law enforcement to find suspects”. Due to location tagging, police can search for hot spots of crime and even gain instant photographic evidence from a crime. Social media is also used as an investigative public outreach tool. It helps the police connect with the public. It allows for police to communicate important announcements to the community and solicit tips on criminal investigations.

What does the law say about Police using social media?

There are few laws that specifically constrain law enforcement’s ability to engage in social media monitoring. “In the absence of legislation, the strongest controls over this surveillance tactic are often police departments’ individual social media policies and platform restrictions, such as Facebook’s real name policy and Twitter’s prohibition against using its API for surveillance”. Many people invoke the Fourth Amendment as protection against police intrusion into their social media privacy. The Fourth Amendment guarantees the right of the people to be free from unreasonable searches and seizures. The inquiry is whether a person has a “reasonable expectation of privacy” and whether society recognizes that expectation as reasonable. Courts have held that individuals do not have a recognized expectation of privacy in data publicly shared online. Law enforcement can also seek account information directly from social media companies. Under the Stored Communications Act, law enforcement can serve a warrant or subpoena on a social media company to get access to information about a person’s social media profile. The Act also permits service providers to voluntarily share user data without any legal process if delays in providing the information may lead to death or serious injury. “Courts have upheld warrants looking for IP logs to establish a suspect’s location, for evidence of communications between suspects, and to establish a connection between co-conspirators”.

 

AI Avatars: Seeing is Believing

Have you ever heard of deepfake? The term deepfake comes from “deep learning,” a set of intelligent algorithms that can learn and make decisions on their own. By applying deep learning, deepfake technology replaces faces from the original images or videos with another person’s likeness.

What does deep learning have to do with switching faces?

Basically, deepfake allows AI to learn automatically from its data collection, which means the more people try deepfake, the faster AI learns, thereby making its content more real.

Deepfake enables anyone to create “fake” media.

How does Deepfake work?

First, an AI algorithm called an encoder collects endless face shots of two people. The encoder then detects similarities between the two faces and compresses the images so they can be delivered. A second AI algorithm called a decoder receives the package and recovers it to reconstruct the images to perform a face swap.
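The shared-encoder idea above can be made concrete with a toy sketch: one encoder compresses any face into a compact code, each person gets a decoder trained to rebuild their own face from such codes, and routing person A’s code through person B’s decoder performs the swap. The linear maps below are purely illustrative stand-ins for trained neural networks; the dimensions and names are invented.

```python
import numpy as np

# Illustrative sketch of the shared-encoder / per-person-decoder face swap.
# Real deepfake pipelines use deep convolutional networks; here, random
# linear maps stand in so the data flow is visible.
rng = np.random.default_rng(0)
D, LATENT = 64, 8                      # flattened image size, latent size

encoder = rng.standard_normal((LATENT, D)) * 0.1   # shared across people
decoder_a = rng.standard_normal((D, LATENT)) * 0.1 # "trained" on person A
decoder_b = rng.standard_normal((D, LATENT)) * 0.1 # "trained" on person B

face_a = rng.standard_normal(D)        # a frame of person A

latent = encoder @ face_a              # compress: pose/expression code
swapped = decoder_b @ latent           # decode with B's decoder -> "B" face

print(swapped.shape)                   # (64,): an image-sized output
```

The swap works because the latent code captures pose and expression, while each decoder supplies the identity it was trained on.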

Another way deepfakes swap faces is with a GAN, or generative adversarial network. A GAN pits two AI algorithms against each other, unlike the first method, where the encoder and decoder work hand in hand.
The first algorithm, the generator, is given random noise and converts it into an image. This synthetic image is then added to a stream of real photos, such as images of celebrities. The combined stream is fed to the second algorithm, the discriminator. After repeating this process countless times, both the generator and the discriminator improve. As a result, the generator creates completely lifelike faces.
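The adversarial loop just described can be demonstrated on a toy problem. In this illustrative sketch (not a real image GAN), the “images” are single numbers drawn from a bell curve: the generator learns to shift random noise toward the real data distribution N(4, 1), while the discriminator tries to tell real samples from fakes. All parameter values are invented for the demo.

```python
import numpy as np

# Toy 1-D GAN: generator g(z) = a*z + b, discriminator d(x) = sigmoid(w*x + c).
# Each step, the discriminator ascends its classification objective and the
# generator ascends the non-saturating log d(fake) objective.
rng = np.random.default_rng(42)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0            # generator parameters (starts producing N(0, 1))
w, c = 0.0, 0.0            # discriminator parameters
lr_d, lr_g, batch = 0.02, 0.02, 64

for _ in range(3000):
    real = rng.normal(4.0, 1.0, batch)   # "real photos": samples near 4
    z = rng.normal(0.0, 1.0, batch)      # random noise fed to the generator
    fake = a * z + b

    # Discriminator ascent on log d(real) + log(1 - d(fake))
    s_r, s_f = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr_d * np.mean((1 - s_r) * real - s_f * fake)
    c += lr_d * np.mean((1 - s_r) - s_f)

    # Generator ascent on log d(fake): push fakes toward "looks real"
    s_f = sigmoid(w * fake + c)
    a += lr_g * np.mean((1 - s_f) * w * z)
    b += lr_g * np.mean((1 - s_f) * w)

print(round(b, 1))  # b has drifted from 0 toward the real mean of 4
```

The same tug-of-war, run over millions of face images instead of single numbers, is what lets a GAN generator produce lifelike faces.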

For instance, artist Bill Posters used deepfake technology to create a fake video of Mark Zuckerberg saying that Facebook’s mission is to manipulate its users.

Real enough?

How about this. Consider having Paris Hilton’s famous quote, “If you don’t even know what to say, just be like, ‘That’s hot,’” replaced by Vladimir Putin, President of Russia. Those who don’t know either will believe that Putin is a Playboy editor-in-chief.

Yes, we can all laugh at these fake jokes. But when something becomes overly popular, it has to come with a price.

Originally, deepfake was developed by an online user of the same name for the purpose of entertainment, as the user had put it.

Yes, Deepfake meant pornography.

The biggest problem with deepfakes is that it is challenging to detect the difference and figure out which version is the original. It has become more than just superimposing one face onto another.

Researchers found that more than 95% of deepfake videos were pornographic, and 99% of those videos had faces replaced with female celebrities. Experts explained that these fake videos lead to the weaponization of artificial intelligence used against women, perpetuating a cycle of humiliation, harassment, and abuse.

How do you spot the difference?

As mentioned earlier, the algorithms are fast learners, so with every breath we take, deepfake media becomes more real. Luckily, research showed that deepfake faces do not blink normally, or even blink at all. That sounds like one easy method to remember. Well, let’s not get ahead of ourselves just yet. When it comes to machine learning, nearly every problem gets corrected as soon as it gets revealed; that is how algorithms learn. So, unfortunately, the famous blink issue has already been solved.
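The blink heuristic researchers described amounts to counting dips in per-frame eye-openness and flagging videos with too few of them. In practice, the openness scores would come from a facial-landmark detector; the numbers below are made up, and, as just noted, newer generators have learned to blink, so this is a historical illustration rather than a reliable test.

```python
# Count blinks as runs of frames where the (hypothetical) eye-openness
# score dips below a threshold; a long video with almost no blinks is
# the red flag the early research relied on.
def count_blinks(openness, threshold=0.2):
    blinks, closed = 0, False
    for o in openness:
        if o < threshold and not closed:
            blinks += 1           # eye just closed: one new blink
            closed = True
        elif o >= threshold:
            closed = False        # eye reopened
    return blinks

def looks_suspicious(openness, fps=30, min_blinks_per_min=2):
    """Flag clips blinking far less often than a real face would."""
    minutes = len(openness) / fps / 60
    return count_blinks(openness) < min_blinks_per_min * minutes

real = [1.0] * 88 + [0.1, 0.05] + [1.0] * 90   # one brief blink dip
print(count_blinks(real))  # 1
```

A two-minute clip with zero dips would be flagged, which is exactly why generators were quickly retrained to blink.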

But not so fast. We humans may not learn as quickly as machines, but we can be attentive and creative, which are some qualities that tin cans cannot possess, at least for now.
It only takes extra attention to detect Deepfake. Ask these questions to figure out the magic:

Does the skin look airbrushed?
Does the voice synchronize with the mouth movements?
Is the lighting natural, or does it make sense to have that lighting on that person’s face?

For example, the background may be dark, but the person may be wearing a pair of shiny glasses reflecting the sun’s rays.

Oftentimes, deepfake content is labeled as deepfake because creators want to display themselves as artists and show off their work.
In 2018, software named Deeptrace was developed to detect deepfake content. A Deeptrace Labs report found that deepfake videos are proliferating online and that their rapid growth is “supported by the growing commodification of tools and services that lower the barrier for non-experts—from well-maintained open source libraries to cheap deepfakes-as-a-service websites.”

The pros and cons of deepfake

It may be self-explanatory to name the cons, but here are some other risks deepfakes impose:

  • Destabilization: the misuse of deepfake can destabilize politics and international relations by falsely implicating political figures in scandals.
  • Cybersecurity: the technology can also negatively influence cybersecurity by having fake political figures incite aggression.
  • Fraud: audio deepfake can clone voices to convince people to believe that they are talking to actual people and induce them into giving away private information.

Well then, are there any pros to deepfake technology other than having entertainment values? Surprisingly, a few:

  • Accessibility: deepfake creates various vocal personas that can turn text into speech, which can help with speech impediments.
  • Education: deepfake can deliver innovative lessons that are more engaging and interactive than traditional lessons. For example, deepfake can bring famous historical figures back to life and explain what happened during their time. Deepfake technology, when used responsibly, can serve as a better learning tool.
  • Creativity: instead of hiring a professional narrator, artificial storytelling using audio deepfake can tell a captivating story at a fraction of the cost.

If people use deepfake technology while holding themselves to high ethical and moral standards, it can create opportunities for everyone.

Case

In a recent custody dispute in the UK, the mother presented an audio file to prove that the father had no right to take their child. In the audio, the father was heard making a series of violent threats toward his wife.

The audio file was compelling evidence, and it seemed the mother would be the one to walk out with a smile on her face. But the father’s attorney suspected something was not right. The attorney challenged the evidence, and forensic analysis revealed that the audio had been doctored using deepfake technology.

This lawsuit is still pending. But do you see the broader problem? We are living in an era where evidence tampering is available to anyone with Internet access, so determining whether evidence has been altered will require far more scrutiny.

Current legislation on deepfakes

The National Defense Authorization Act for Fiscal Year 2021 (“NDAA”), which became law after Congress voted to override former President Trump’s veto, requires the Department of Homeland Security (“DHS”) to issue an annual report for the next five years on manipulated media and deepfakes.

So far, only three states have taken action against deepfake technology.
On September 1, 2019, Texas became the first state to prohibit the creation and distribution of deepfake content intended to harm candidates for public office or influence elections.
Similarly, California bans the creation of “videos, images, or audio of politicians doctored to resemble real footage within 60 days of an election.”
Also in 2019, Virginia banned deepfake pornography.

What else does the law say?

Deepfakes are not illegal per se. But depending on the content, a deepfake can breach data protection law, infringe copyright, or constitute defamation. Additionally, sharing non-consensual content or committing a revenge porn crime is punishable under state law. For example, in New York City, the penalties for a revenge porn offense are up to one year in jail and a fine of up to $1,000 in criminal court.

Henry Ajder, head of threat intelligence at Deeptrace, raised another issue: “plausible deniability,” where deepfake can wrongfully provide an opportunity for anyone to dismiss actual events as fake or cover them up with fake events.

What about the First Amendment rights?

The First Amendment of the U.S. Constitution states:
“Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.”

There is no doubt that injunctions against deepfakes are likely to face First Amendment challenges; indeed, the First Amendment will be the biggest hurdle to overcome. Even if a lawsuit survives, the lack of jurisdiction over extraterritorial publishers would inhibit its effectiveness, and injunctions will not be granted except under particular circumstances, such as obscenity or copyright infringement.

How does defamation law apply to deepfake?

How about defamation laws? Will they apply to deepfakes?

Defamation is a statement that injures a third party’s reputation. To prove defamation, a plaintiff must show all four elements:

1) a false statement purporting to be fact;

2) publication or communication of that statement to a third person;

3) fault amounting to at least negligence; and

4) damages, or some harm caused to the person or entity who is the subject of the statement.

As you can see, deepfake claims are not likely to succeed under defamation law because it is difficult to prove that the content was intended as a statement of fact. All a defendant needs to protect themselves from a defamation claim is the word “fake” somewhere in the content. To make it less of a drag, they can simply point out that they used deep“fake” to publish the content.

Pursuing a defamation claim against nonconsensual deepfake pornography also poses a problem. The central harm is the lack of consent, and current defamation law fails to address whether the victim consented to the publication.

To reflect the transformative impact of artificial intelligence, I would suggest new legislation to regulate AI-backed technology like deepfakes. Perhaps this could lower the hurdles that plaintiffs face.
What are your suggestions regarding deepfakes? Share your thoughts!


Social Media: a pedophile’s digital playground.

A few years ago, as I got lost in the wormhole that is YouTube, I stumbled upon a family channel, “The ACE Family.” They had posted a cute video where the mom was playing a prank on her boyfriend by dressing their infant daughter in a tiny crochet bikini. I didn’t think much of it at the time, as it seemed innocent and cute, but then I pondered it. I stumbled on this video without any malintent, but how easy would it be for someone with far darker intent to seek out content like this?

When you Google “social media child pornography,” you get many articles from 2019. That year, a YouTuber using the name “MattsWhatItIs” posted a video titled “YouTube is Facilitating the Sexual Exploitation of Children, and it’s Being Monetized (2019)”; the video has 4,305,097 views to date and has not been removed from the platform. Its author discusses a potential child pornography ring on YouTube that was being facilitated by a glitch in the recommendation algorithm. He demonstrates how, with a brand-new account on a VPN, all it takes is two clicks to end up in this ring. The search started with a “bikini haul” query. After two clicks in the recommended-videos section, he stumbles upon an innocent-looking homemade video. The video itself looks innocent, but the comments expose the dark side: multiple random accounts post timestamps that link to moments in the video where the children are in compromising, sexually suggestive positions. The most disturbing part is that once you enter the wormhole, the algorithm keeps recommending these videos. Following the vast attention this video received, YouTube created an algorithm that is supposed to catch this predatory behavior, though when the video was posted it didn’t seem to be doing much.

YouTube has since implemented a “Child Safety Policy,” which details the content the platform aims to police. It also includes recommended steps for parents or agents posting content in which children are the focus. “To protect minors on YouTube, content that doesn’t violate our policies but features children may have some features disabled at both the channel and video level. These features may include:

  • Comments
  • Live chat
  • Live streaming
  • Video recommendations (how and when your video is recommended)
  • Community posts”

Today, when you look up news on this topic, you don’t find much. There are forums exposing the many methods these predators use to get around the algorithms platforms have set up to detect their activity. Many predators leave links to child pornography in the comments section of specific videos. Others use generic terms with the initials “C.P.,” a common abbreviation for “child pornography,” and codes like “caldo de pollo,” which means “chicken soup” in Spanish. Many dedicated and concerned parents have formed online communities that scour the Internet for this disgusting content and report it to social media platforms, but why haven’t the platforms created departments for this issue? Most technology companies use automated tools to detect images and videos that law enforcement has already categorized as child sexual exploitation material. Still, they struggle to identify new, previously unknown material and rely heavily on user reports.

The Child Rescue Coalition has created the Child Protection System software. This tool has a growing database of more than a million hashed images and videos, which it uses to find computers that have downloaded them. The software can track I.P. addresses, which are shared by people connected to the same Wi-Fi network, as well as individual devices. According to the Child Rescue Coalition, the system can follow devices even if the owners move or use virtual private networks (VPNs) to mask their I.P. addresses. Last year, the Coalition expressed interest in partnering with social media platforms to combine resources and crack down on child pornography. Unfortunately, some oppose this because it would give social media companies access to this unregulated database of suspicious I.P. addresses. Thankfully, many law enforcement departments have partnered with the Coalition and used this software; as the president of the Child Rescue Coalition said: “Our system is not open-and-shut evidence of a case. It’s for probable cause.”
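Hash-based matching of the kind described above can be sketched simply: known illegal files are stored only as hashes, and a new file is flagged when its hash appears in the database. This is a minimal illustration, not the Child Protection System’s actual design; it uses a plain SHA-256, which only catches exact copies, whereas real systems typically use perceptual hashes that survive resizing and re-encoding. All names and byte strings here are invented.

```python
import hashlib

# Sketch of hash-based matching: the database never stores the images
# themselves, only their hashes, and lookups are simple set membership.
known_hashes = set()

def register_known(data: bytes):
    """Add a known file's hash to the database (no file content kept)."""
    known_hashes.add(hashlib.sha256(data).hexdigest())

def is_flagged(data: bytes) -> bool:
    """True when this exact file was previously registered."""
    return hashlib.sha256(data).hexdigest() in known_hashes

register_known(b"example-known-file-bytes")
print(is_flagged(b"example-known-file-bytes"))   # True: exact match
print(is_flagged(b"slightly-different-bytes"))   # False: hash differs
```

Storing hashes rather than images is also why such databases can be shared with investigators without redistributing the material itself.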

The United States Department of Justice has published a “Citizen’s Guide to U.S. Federal Law on Child Pornography.” The first line on that page reads, “Images of child pornography are not protected under First Amendment rights, and are illegal contraband under federal law.” Federal jurisdiction commonly applies if the child pornography offense occurred in interstate or foreign commerce. In today’s digital era, federal law almost always applies when the Internet is used to commit child pornography offenses. The United States has enacted multiple laws defining child pornography and what constitutes a crime related to it.

Whose job is it to protect children from these predators? Should social media have to regulate this? Should parents be held responsible for contributing to the distribution of these media?

 

“Unfortunately, we’ve also seen a historic rise in the distribution of child pornography, in the number of images being shared online, and in the level of violence associated with child exploitation and sexual abuse crimes. Tragically, the only place we’ve seen a decrease is in the age of victims.

 This is – quite simply – unacceptable.”

-Attorney General Eric Holder Jr., speaking at the National Strategy Conference on Combating Child Exploitation in San Jose, California, May 19, 2011.
