Mental Health Advertisements on #TikTok

The stigma surrounding mental illness has persisted since the mid-twentieth century. This stigma is one of the many reasons why roughly 60% of adults with a mental illness go untreated. This enormous treatment disparity demonstrates a significant need to spread awareness and make treatment more readily available. Ironically, social media, long criticized for its negative impact on the mental health of its users, has become an important tool for spreading awareness about and de-stigmatizing mental health treatment.

The content shared on social media is a combination of users sharing their experiences with mental health conditions and companies that treat those conditions using advertisements to attract potential patients. At first glance, this appears to be a powerful way to use social media to bridge treatment gaps. However, it raises concerns that vulnerable people will see this content, self-diagnose with a condition they might not have, and undergo unnecessary, and potentially dangerous, treatment. Additionally, they might fail to undergo needed treatment because the misinformation they were exposed to led them to overlook the true cause of their symptoms.

Attention Deficit Hyperactivity Disorder (“ADHD”) is an example of a condition that social media has seized upon. #ADHD has 14.5 billion views on TikTok and 3 million posts on Instagram. Between 2007 and 2016, diagnoses of ADHD increased by 123%. Further, prescriptions for stimulants, which treat ADHD, have increased 16% since the pandemic. Many experts attribute this, in large part, to the use of social media in spreading awareness about ADHD and to the rise of telehealth companies that emerged during the pandemic to treat it. These companies have jumped on viral trends with targeted advertisements that oversimplify what ADHD actually looks like and then offer treatment to those who click on the advertisement.

The availability of and reliance on telemedicine grew rapidly during the COVID-19 pandemic, and many restrictions on telehealth were suspended. This created an opening in the healthcare industry for new companies. ‘Done’ and ‘Cerebral’ are two examples of companies that emerged during the pandemic to treat ADHD. These companies attract, accept, and treat patients through a very simple procedure: (1) social media advertisement, (2) short online questionnaire, (3) virtual visit, and (4) prescription.

Both Done and Cerebral have utilized social media platforms like Instagram and TikTok to lure potential patients to their services. The advertisements vary, but they all highlight how easy and affordable treatment is by emphasizing convenience, accessibility, and low cost. Accessing the care offered is as simple as swiping up on an advertisement that appears as users scroll through the platform. These targeted ads depict people seeking treatment, taking medication, and having their symptoms disappear. Further, these companies leverage viral trends and memes to increase the effectiveness of their advertisements, which typically oversimplify complex ADHD symptoms and mislead consumers.


While these companies are increasing healthcare access for many patients through their low cost and virtual platform, this speedy version of healthcare blurs the line between offering treatment to patients and selling prescriptions to customers through social media. Further, medical professionals are concerned by how these companies market addictive stimulants to young users, and yet the companies remain largely unregulated due to outdated guidelines on advertisements for medical services.

The advertising model utilized by these telemedicine companies emphasizes the need to modify existing laws so that these advertisements are subject to the FDA’s unique oversight to protect consumers. These companies target young consumers and other vulnerable people, encouraging them to self-diagnose based on misleading information about the criteria for a diagnosis. There are eighteen symptoms of ADHD, and the average person meets at least one or two of the criteria, which is precisely what these ads emphasize.

Advertisements in the medical sphere are regulated by either the FDA or the FTC. The FDA has unique oversight over the marketing of prescription drugs by manufacturers and drug distributors, known as direct-to-consumer (“DTC”) drug advertising. Critics of prescription drug advertisements highlight the negative impact that DTC advertising has on the patient-provider relationship, because patients go to providers expecting or requesting a particular prescription treatment. To minimize these risks, the FDA requires that a prescription drug advertisement be truthful, present a fair balance of the risks and benefits associated with the medication, and state an approved use of the medication. However, if an advertisement does not mention a particular drug or treatment, it eludes the FDA’s oversight.

Thus, the marketing of medical services, which does not involve marketing prescription drugs, is regulated only by the Federal Trade Commission (“FTC”) in the same manner as any other consumer good, meaning only that the advertisement must not be false or misleading.

The advertisements these telehealth companies are putting forward demonstrate that it is time for the FDA to step in, because the companies are combining medical services and prescription drug treatment. They use predatory tactics to lure consumers into believing they have ADHD and then provide direct treatment on a monthly subscription basis.

The potential for consumer harm is clear, and many experts point to the similarities between the opioid epidemic and stimulant drugs. However, the FDA has not yet made any changes to how it regulates advertising in light of social media. The laws regarding DTC drug advertising were prompted in part by consumers’ practice of self-diagnosis and self-medication and by the false therapeutic claims made by manufacturers. The telemedicine model these companies use raises these exact concerns: it targets consumers, convinces them they have a specific condition, and then offers the medication to treat it after a quick virtual visit. Instead of patients going to their doctors to request a specific prescription that may be inappropriate for their medical needs, patients are going to telehealth providers that only prescribe that particular medication, which may be equally inappropriate.

Through the use of social media, diagnosis and treatment with addictive prescription drugs can now be initiated by an interactive advertisement, in a manner that was not possible when the FDA determined that these types of advertisements would not be subject to its oversight. Thus, to protect consumers, it is vital that telemedicine advertisements be subjected to more intrusive monitoring than ordinary consumer goods. This would require the companies behind these advertisements to properly address the complex symptoms associated with conditions like ADHD and to give fair balance to the harms of treatment.

According to the Pew Research Center, 69% of adults and 81% of teens in the United States use social media. Further, about 48% of Americans regularly get their news from social media. We often talk about misinformation in politics and news stories, but it is permeating every corner of the internet. As these numbers continue to grow, it is crucial to develop new methods to protect consumers, and regulating these advertisements is only the first step.

Memes, Tweets, and Stocks . . . Oh, My!

 

Pop-Culture’s Got A Chokehold on Your Stocks

In just three short weeks in early January 2021, Reddit meme-stock traders bought up enough of GameStop’s stock to increase its value from a mere $17.25 per share to $325 a pop, an almost 1,800% increase in the stock’s value. In light of this, hedge funds like New York’s Melvin Capital Management were left devastated; some smaller hedge funds even went out of business.

Melvin was holding its GameStop stock in a short position (a trading technique in which an investor sells a borrowed security with the plan to buy it back later, at a lower cost, in an anticipated short-term drop). As a result, it lost over 50% of its value, nearly $7 billion, in just under a month.

Around 2015, a new, free online trading platform geared toward a younger generation emerged in Robinhood. Its mission was simple: “democratize” finance. By putting the capacity to understand and participate in trading in everyone’s hands, without the need for an expensive broker, Robinhood made investing accessible to the masses. However, the very essence of Robinhood, putting power back in the hands of the people, was also what halted GameStop’s meteoric rise. After three weeks, Robinhood had to restrict trading in GameStop’s shares and options because the sheer volume of trading had exceeded its cash on hand, the collateral required by regulators to function as a legal trade exchange.

But what exactly is a meme-stock? For starters, a meme is an idea or element of pop culture that spreads and intensifies across people’s minds. As social media has grown in popularity, so have viral pop-culture references and trends. Memes allow people to instantaneously spread videos, tweets, pictures, or posts that are humorous, interesting, or sarcastic; this content, in turn, goes viral. Meme-stocks therefore originate on the internet, usually in sub-Reddit threads, where users work together to identify a target stock and then promote it. Promoters aim to profit from the resulting price swings, buying, holding, selling, and rebuying as prices fluctuate, and often to squeeze short sellers like those described above.

GameStop is not the first, and certainly not the last, stock to be traded in this fashion. But it represents an important shift in the power of social media and its ability to affect the stock market. Another example of the power meme-culture can have on real-world finances and the economy is Dogecoin.

Dogecoin was created as a satirical new currency, in a way mocking the hype around existing cryptocurrencies. But the positive reaction and bolstered interest it received on social media turned the joke crypto into a practical reality. This “fun” version of Bitcoin was celebrated, listed on the crypto exchange Binance, and even cryptically endorsed by Elon Musk. More recently, in 2021, cinema chain AMC announced it would accept Dogecoin for digital gift card purchases, further bolstering the credibility of this meme-originated cryptocurrency.

Tricks of the Trade, Play at Your Own Risk

Stock trading is governed by the Securities Act of 1933, which boils down to two basic objectives: (1) to require that investors receive financial and other material information concerning securities being offered for public sale; and (2) to prohibit deceit, misrepresentations, and other fraud in the sale of securities. Before most securities can be bought, sold, or traded, they must first be registered with the SEC; the primary goal of registration is to facilitate information disclosures so that investors are informed before engaging. Additionally, the Securities Exchange Act of 1934 provides the SEC with broad authority over the securities industry to regulate, register, and oversee brokerage firms, agents, and self-regulatory organizations (SROs). Other regulations at play include the Investment Company Act of 1940 and the Investment Advisers Act of 1940, which regulate investment companies and investment advisers, respectively. These Acts require that firms and agents who receive compensation for their advising practices be registered with the SEC and adhere to qualifications and strict guidelines designed to promote fair, informed investment decisions.

Cryptocurrency has grown over the years from a speculative investment into a new class of assets, and regulation is imminent. The Biden Administration recently added some clarity on crypto use and its regulation through a new directive designating power to the SEC and the Commodity Futures Trading Commission (CFTC), which were already the prominent securities regulators. In the recent Ripple Labs lawsuit, the SEC began to make strides toward regulating cryptocurrency by working to classify it as a security, which would bring crypto into its domain of regulation.

Consequently, the SEC’s Office of Investor Education and Advocacy has adapted with the times and now cautions against making any investment decisions based solely on information seen on social media platforms. Because social media has become integral to our daily lives, investors increasingly rely on it for information when deciding when, where, and in what to invest. This has increased the likelihood of scams, fraud, and other consequences of misinformation. These problems can arise through fraudsters disseminating false information anonymously or impersonating someone else.

 

However, there is also increasing concern with celebrity endorsements and testimonials regarding investment advice. The most common types of social media investment scams are impersonation and fake crypto investment advertisements.

 

With this rise in social media use, the laws governing investment advertisements and information are continuously developing. Regulation FD (Fair Disclosure) governs the selective disclosure of information by publicly traded companies. Reg. FD prescribes that when an issuer discloses material, nonpublic information to certain individuals or entities, it must also make a public disclosure of that information. In 2008, the SEC issued guidance allowing information to be distributed on websites so long as shareholders, investors, and the market in general were aware the website was the company’s “recognized channel of distribution.” In 2013, this was amended again to allow publishing earnings and other material information on social media, provided that investors knew to expect it there.

This clarification came in light of a controversial boast by Netflix co-founder and CEO Reed Hastings on Facebook that Netflix viewers had consumed 1 billion hours of watch time per month. Hastings’s Facebook page had never previously disclosed performance statistics, so investors were not on notice that this type of potentially material information, relevant to their investment decisions, would be located there. Hastings also failed to immediately remedy the situation with a public disclosure of the same information via a press release or Form 8-K filing.

In the same vein, a company’s employees may also face consequences if they like or share a post, publish a third-party link, or friend certain people without permission, if any of those actions could be viewed as an official endorsement or means of information dissemination.

The SEC requires that certain company information be accompanied by a disclosure or cautionary disclaimer statement. Section 17(b) of the 1933 Act, more commonly known as the Anti-Touting provision, requires any securities endorsement be accompanied by a disclosure of the “nature, source, and amount of any compensation paid, directly or indirectly, by the company in exchange for such endorsement.”

To Trade, or Not to Trade? Let Your Social Media Feed Decide

With the emergence of non-professional trading platforms like Robinhood, low-cost financial technology has brought investing into the hands of younger users. Likewise, the rise of Bitcoin and blockchain technologies in the early-to-mid 2010s has changed the way financial firms must think about and approach new investors. The discussion of investments and information sharing that happens on these online forums creates a breeding ground for misinformation. Social media sites are vulnerable to information problems for several reasons. For starters, which posts gain attention cannot always be calculated in advance; if the wrong post goes viral, hundreds, thousands, or even millions of users may read improper recommendations. Algorithmic rabbit holes also risk amplifying extremist views, and strategically placed ads push users further down this downward spiral.

Additionally, the presence of fake or spam-based accounts and internet trolls poses an ever more difficult problem to contain. Lastly, influencers can sway large groups of followers by mindlessly promoting or interacting with bad information or by failing to properly disclose required information. There are many other risks, but “herding” remains one of the largest. Jeff Kreisler, Head of Behavioral Science at J.P. Morgan Chase, explains that:

“Herding has been a common investment trap forever. Social media just makes it worse because it provides an even more distorted perception of reality. We only see what our limited network is talking about or promoting, or what news is ‘trending’ – a status that has nothing to do with value and everything to do with hype, publicity, coolness, selective presentation and other things that should have nothing to do with our investment decisions.”

This shift to a digital lifestyle and reliance on social media for information has played a key role in information dissemination for investor decision-making. Nearly 80% of institutional investors now use social media as part of their daily workflow. Of those, about 30% admit that information gathered on social media has in some way influenced an investment recommendation or decision, and another third maintain that they made at least one change to their investments as a direct result of announcements they saw on social media. In 2013, the SEC began to allow publicly traded companies to report news and earnings via their social media platforms, which has increased the flow of information to investors on these platforms. Social media also now plays a large role in financial literacy for the younger generations.

The Tweet Heard Around the Market

A notable and recent example of how powerful social media warriors and internet trolls can be to the success of a company’s stock came just days after Elon Musk’s acquisition of Twitter, and only hours after he launched his pay-for-verification Twitter Blue debacle. Insulin manufacturer Eli Lilly saw a stark drop in its stock value after a fake parody account was created under the guise of its name and tweeted that “insulin is now free.”

The account, operating under the Twitter handle @EliLillyandCo, bought a blue check mark and appended the same logo as the real company to its profile, making it almost indistinguishable from the real thing. Consequently, the actual Eli Lilly corporate account had to tweet an apology “to those who have been served a misleading message from a fake Lilly account,” clarifying that “Our official Twitter account is @Lillypad.”

This is a perfect example, for Elon Musk and other major companies and CEOs alike, of just how powerful pop-culture, meme-culture, and internet trolls can be: weaponized with $8 and a single tweet, a parody account casually dropped the stock of a multi-billion-dollar pharmaceutical company almost 5% in a matter of hours.

So, what does all this mean for the future of digital finance? It’s difficult to say exactly where we might be headed, but social media’s growing tether on all facets of our lives leaves much open for new regulation. Consumers should be cautious when scrolling through investment-related material, and providers should be transparent about their relationships and goals in promoting any such materials. Social media is here to stay, but its regulation and use are still up for grabs.

I Knew I Smelled a Rat! How Derivative Works on Social Media can “Cook Up” Infringement Lawsuits

 

If you have spent more than 60 seconds scrolling on social media, you have undoubtedly been exposed to short clips or “reels” that often reference pop-culture elements that may be protected intellectual property. While seemingly harmless, it is possible that the clips you see on various platforms are infringing on another’s copyrighted work. Oh Rats!

What Does Copyright Law Tell Us?

Copyright protection, codified in 17 U.S.C. §102, extends to “original works of authorship fixed in any tangible medium of expression.” It refers to your right, as the original creator, to make copies of, control, and reproduce your own original content. This applies to any created work that is reduced to a tangible medium. Some examples of copyrightable material include, but are not limited to, literary works, musical works, dramatic works, motion pictures, and sound recordings.

Additionally, one of the rights associated with a copyright holder is the right to make derivative works from your original work. Codified in 17 U.S.C. §101, a derivative work is “a work based upon one or more preexisting works, such as a translation, musical arrangement, dramatization, fictionalization, motion picture version, sound recording, art reproduction, abridgment, condensation, or any other form in which a work may be recast, transformed, or adapted. A work consisting of editorial revisions, annotations, elaborations, or other modifications which, as a whole, represent an original work of authorship, is a ‘derivative work’.” This means that the copyright owner of the original work also reserves the right to make derivative works. Therefore, the owner of the copyright to the original work may bring a lawsuit against someone who creates a derivative work without permission.

Derivative Works: A Recipe for Disaster!

The issue of regulating derivative works has only intensified with the growth of cyberspace and “fandoms.” A fandom is a community or subculture of fans built up around one specific piece of pop culture, whose members share a mutual bond over their enthusiasm for the source material. Fandoms can also be composed of fans who actively participate in and engage with the source material through creative works, which social media makes easier. Historically, fan works have been deemed legal under the fair use doctrine, which provides that some copyrighted material can be used without legal permission for purposes such as scholarship, education, parody, or news reporting, so long as the copyrighted work is only used to the extent necessary. Fair use can also apply to a derivative work that significantly transforms the original copyrighted work, adding a new expression, meaning, or message. So, that means that “anyone can cook,” right? …Well, not exactly! The new, derivative work cannot have an economic impact on the original copyright holder. That is, profits cannot be “diverted to the person making the derivative work” when the revenue could or should have gone to the original copyright holder.

With the increased use of “sharing” platforms such as TikTok, Instagram, and YouTube, it has become increasingly easy to share or distribute intellectual property via monetized accounts. Specifically, because of the large amount of content consumed daily on TikTok, its users are incentivized with the ability to go “viral” instantly, if not overnight, as well as the ability to earn money through the platform’s “Creator Fund.” The Creator Fund is paid for by the TikTok ads program, and it allows creators to get paid based on the number of views they receive. This creates a problem: now that users are getting paid for their posts, the line is blurred between what is fair use and what is a violation of copyright law. The Copyright Act fails to address the monetization of social media accounts and how it fits into a fair use analysis.

Ratatouille the Musical: Anyone Can Cook?

Back in 2020, TikTok users Blake Rouse and Emily Jacobson were the first of many to release songs based on Disney-Pixar’s 2007 film, Ratatouille. What started out as a fun trend for users to participate in, turned into a full-fledged viral project and eventual tangible creation. Big name Broadway stars including André De Shields, Wayne Brady, Adam Lambert, Mary Testa, Kevin Chamberlin, Priscilla Lopez, and Tituss Burgess all participated in the trend, and on December 9, 2020, it was announced that Ratatouille was coming to Broadway via a virtual benefit concert.

The musical premiered as a one-night livestream event on January 1, 2021, and all profits generated from the event were donated to the Entertainment Community Fund (formerly the Actors Fund), a non-profit organization that supports performers and workers in the arts and entertainment industry. It initially streamed in over 138 countries and raised over $1.5 million for the charity. Due to its success, an encore production was streamed on TikTok ten days later, raising an additional $500,000 for the fund (totaling $2 million). While this is unarguably a derivative work, the question of fair use was never addressed because Disney’s lawyers were smart enough not to sue. In fact, Disney embraced the Ratatouille musical, releasing a statement to The Verge:

Although we do not have development plans for the title, we love when our fans engage with Disney stories. We applaud and thank all of the online theatre makers for helping to benefit The Actors Fund in this unprecedented time of need.

Normally, Disney is EXTREMELY strict and protective over their intellectual property. However, this small change of heart has now opened a door for other TikTok creators and fandom members to create unauthorized derivative works based on others’ copyrighted material.

Too Many Cooks in the Kitchen!

Take the “Unofficial Bridgerton Musical,” for example. In July of 2022, Netflix sued content creators Abigail Barlow and Emily Bear for their unauthorized use of Netflix’s original series Bridgerton, which is itself based on the Bridgerton book series by Julia Quinn. Back in 2020, Barlow and Bear began writing and uploading songs based on the series to TikTok for fun. Needless to say, the videos went viral, prompting Barlow and Bear to release an entire musical soundtrack based on Bridgerton. They even went on to win the 2022 Grammy Award for Best Musical Theater Album.

On July 26, Barlow and Bear staged a sold-out performance at the Kennedy Center, with tickets ranging from $29 to $149, and also sold merchandise bearing the “Bridgerton” trademark. Netflix then sued, demanding an end to these for-profit performances. Interestingly enough, Netflix was allegedly initially on board with Barlow and Bear’s project. However, although Barlow and Bear’s conduct began on social media, the complaint alleges they “stretched fanfiction way past its breaking point.” According to the complaint, Netflix “offered Barlow & Bear a license that would allow them to proceed with their scheduled live performances at the Kennedy Center and Royal Albert Hall, continue distributing their album, and perform their Bridgerton-inspired songs live as part of larger programs going forward,” which Barlow and Bear refused. Netflix also alleged that the musical interfered with its own derivative work, the “Bridgerton Experience,” an in-person pop-up event that has been offered in several cities.

Unlike Ratatouille: The Musical, which was created to raise money for a non-profit organization benefiting actors during the COVID-19 pandemic, the Unofficial Bridgerton Musical lined the pockets of its creators, Barlow and Bear, in an effort to build an international brand for themselves. Netflix ended up privately settling the lawsuit in September of 2022.

Has the Aftermath Left a Bad Taste in IP Holders’ Mouths?

The stage has been set, and courts have yet to determine exactly how fan-made derivative works play out in a fair use analysis. New technologies only exacerbate the issue through the monetization of social media accounts and “viral” trends. At a certain point, no matter how much you want to root for the “little guy,” you have to admit when they’ve gone too far. Average “fan art” does not derive significant profits from the original work, and it is very rare that a large company will take legal action against a small content creator unless the infringement is so blatant and explicit that there is no other choice. IP law exists to protect and enforce the rights of the creators and owners who have worked hard to secure them. Allowing content creators to infringe in the name of “fair use” poses a dangerous threat to intellectual property law and those it serves to protect.

 

#ad: The Rise of Social Media Influencer Marketing


When was the last time you bought something because of a billboard or a newspaper ad? Probably not recently. Instead, advertisers are now spending their money on digital marketing platforms. And at the pinnacle of these marketing platforms are influencers. Because millennial (Generation Y) and Generation Z consumers spend so much time consuming user-generated content, the creator begins to feel like an acquaintance, or even a friend. Once that happens, the influencer has more power to do what their name suggests: influence the user to purchase. This is where our current e-commerce market is headed.

Imagine this:

If a person you know and trust suggests you try a brand-new product, you would probably try it. Now, if that same person were to divulge that they were paid to tell you all about how wonderful this product is, you would probably have some questions about how genuine their love for the product really is, right?

Lucky for us consumers, the Federal Trade Commission (FTC) has established Endorsement Guides so we can all have that information when we are being advertised to by our favorite social media influencers.

 

The times have changed, quickly.

Over the past eight years, there has been a resounding shift in the way companies market their products, particularly to the younger generation. Unprecedented changes throughout the physical and digital marketplace have forced brands to think carefully about how to reach their desired consumers. Businesses now rely on digital and social media marketing more than they ever have before.

With the rise of social media and apps like Vine and TikTok came a new digital universe with almost untapped potential for marketing. This was how companies would reach this younger generation of consumers, you know, the ones with their heads craned over a phone and their thumbs constantly scrolling. These were the people advertisers had trouble reaching, until now.

 

What the heck is an “Influencer”?

The question “What is an influencer?” has become standard in conversations among social media users. We know who they are, but the term is loosely defined. Rachel David, a popular YouTube personality, defined it with the least ambiguity as “someone like you and me, except they chose to consistently post stuff online.” This definition seems harmless enough until you understand that the role is much more nuanced: these individuals are being paid huge sums of money to push products they most likely don’t use themselves, despite what their posts may say. The reign of celebrity-endorsed marketing is shifting to a new form of celebrity called the “influencer.” High-profile celebrities were too far removed from the average consumer. A new category emerged with the rise of social media use, and the only difference between a celebrity and a famous influencer is…relatability. Consumers could now see themselves in the influencer and would default to trusting them and their opinions.

One of the first places we saw influencers flexing their advertising muscle was the popular app Vine. Vine was a revolutionary app and frankly existed before its time. It introduced the user to a virtual experience that matched their dwindling attention span: clips were no more than six seconds long and would repeat indefinitely until the user swiped to the next one. This short clip captured the user’s attention and provided that much-needed dopamine hit. The platform rose in popularity, rivaling apps like YouTube, the powerhouse of user engagement. Unlike YouTube, however, Vine’s shorter videos required less work, so creators produced more of them. Because the videos were so short, consumers wanted more and more content, which opened the door for other users to blast out their own, creating an explosion of “Vine Famous” creators. Casual creators were now, almost overnight, amassing millions of followers, followers they could now influence. Because Vine failed to capitalize on its users and was unable to monetize its success, it ultimately went under in 2016. But what happened to all of those influencers? They made their way to alternate platforms like YouTube, Instagram, and Facebook, taking with them their followers and, subsequently, their influencer status. These popular influencers went from being complete strangers to people the users inherently trusted because of the perceived transparency into their daily lives.

 

Here come the #ads.

Digital marketing was not introduced by Vine, but putting a friendly influencer face behind the product has some genesis there. Consumerism changed as social media traffic increased. E-commerce rose dramatically when products were placed right in front of the consumer’s face, even embedded into the content they were viewing. Users were watching advertisements and didn’t even care. YouTube channels dedicated solely to reviewing different products and rating them became an incredibly popular genre of video. Advertisers saw content itself becoming promotion for a product, and the shift away from traditional marketing strategies took off. Digital, in-content advertising was the new way to reach this generation.

Now that influencer marketing is a mainstream form of marketing, the importance of the FTC Endorsement Guide has grown. Creators are required to be transparent about their intentions in marketing a product. The guide suggests ways influencers can effectively market the product they are endorsing while remaining transparent about their motivations, and it provides examples of how and when to disclose that a creator is sponsored by or endorsing a particular product, disclosures that must be made to avoid costly penalties. Most creators prefer to keep their content as “on brand” as possible and will resort to the most surreptitious option, disguising the “#ad” within a litany of other relevant hashtags.

The age of advertising has certainly changed right in front of our eyes, literally. As long as influencers remain transparent about their involvement with the products they show in their content, consumers will inherently trust them and their opinion on the product. So sit back, relax, and enjoy your scrolling. But, always be cognizant that your friendly neighborhood influencer may have monetary motivation behind their most recent post.

 


Corporate Use of Social Media: A Fine Line Between What Could-, Would-, and Should-be Posted

 

Introduction

In recent years, social media has taken hold of nearly every aspect of human interaction and turned the way we communicate on its head. Social media apps’ capability of disseminating information instantaneously has affected the way many sectors of business operate. From entertainment and social causes to environmental, educational, and financial matters, social media has bewildered the legal departments of in-house general counsel across all industries. Additionally, the generational gap between the person actually posting for the account and their supervisor has only exacerbated the potential for communications to miss their mark and cause controversy or adverse effects.

These days, most companies have social media accounts, but not all accounts are created equal, and they certainly are not all monitored the same. In most cases, these accounts are not regulated at all except by their own internal managers and #CancelCulture. Depending on the product or company, social media managers have done their best to stay abreast of changes in popular hashtags, trends and challenges, and the overall shift from a corporate tone of voice to one of relatability–more Gen-Z-esque, if you will. But with this shift, the rights and implications of corporate speech through social media have been put to the test.

Changes in Corporate Speech on Social Media 

In the last 20 years, corporate use of social media has become a battle for relevance. With the decline of print media, social media and its apps have emerged as a marketing necessity. Early social media use was predominantly geared toward social purposes. If we look at the origins of Facebook, Myspace, and Twitter, it is clear that these apps were intended for superficial uses—not corporate communications—but this all changed with the introduction of LinkedIn, which sparked a dynamic shift toward business and professional use of social media.

Today, social media is used to report on almost every aspect of our lives: disaster preparation and emergency response, political updates, dating and relationships, and customer service tasks. Social media truly covers it all. It is also more common nowadays to get backlash for not speaking out on social media after a major social or political movement occurs. Social media is also increasingly being used for research with geolocation technology, for organizing demonstrations and political unrest, and, in the business context, for development in sales, marketing, networking, and hiring or recruiting practices.

These changes are starting to spark significant conversations in the business world about company speech, regulated disclosures, and First Amendment rights. For example, so far there is minimal research on how financial firms disseminate communications to investor news outlets via social media and how those communications are received. And while some view social media as an opportunity to further this kind of investor outreach, others have expressed concerns that disseminating communications in this manner could result in a company losing control over those communications entirely.

The viral nature of social media allows not just investors to connect more easily with companies, but also individuals who may not directly follow a company and would therefore be far less likely to be informed about its prior financial communications and the importance of any changes. This creates risk for a company’s investor communications via social media: posts can spread to uninformed individuals, which could in turn produce adverse consequences for the company when it comes to concerns about reliance and misleading information.

Corporate Use, Regulations, and Topics of Interest on Social Media 

With the rise of social media coverage of various societal issues, these apps have become a platform for news coverage, political movements, and social concerns, and, for some generations, a platform that replaces traditional news media almost entirely. Specifically, when it comes to the growing interest in ESG-related matters and sustainable business practices, social media serves as a powerful tool for communicating information. For example, the latest Epsilon Icarus Analytics Panel on ESG Sustainability recently reported that the Spanish company Acciona has Spain’s highest-resonating ESG content across its social networks. Acciona demonstrates a company’s potential to lead and fundamentally shape digital communications on ESG-related topics. Its developing content strategy focuses on brand values, specifically, for Acciona, strong climate-change values, female leadership, diversity, and other cultural and societal changes, demonstrating this new age of social media as a business marketing necessity.

Consequently, this shift in the usage of social media and the way we treat corporate speech on these platforms has left room for emerging regulation. Commercial or corporate speech is generally permissible under constitutional free speech rights, so long as the corporation is not making false or misleading statements. Section 230 provides broad protection to internet content providers from accountability for information disseminated on their platforms; in most contexts, social media platforms will not be held accountable for the consequences of a bad user’s speech. For example, a recent lawsuit against TikTok and its parent company was dismissed after a young girl died participating in a trending challenge that went awry, because under § 230 the platform was immune from liability.

In essence, when it comes to ESG-related topics, the way a company handles its social media and the actual posts it puts out can greatly affect its success and reputation, as ESG-focused perspectives often touch many aspects of the operation of the business. The type of communication, and the coverage of various issues, can impact a company’s performance over both the short and long term, and that influence can effectuate change in corporate environmental practices, governance, labor and employment standards, human resource management, and more.

With ESG trending, investors, shareholders, and regulators now face serious risk-management concerns. Companies must now address news concerning their social responsibilities more publicly and more frequently as ESG concerns continue to rise. The SEC mandates disclosure of public company activities in annual 10-K filings, and ESG-related disclosures have been the subject of recent rulemaking. These disclosures are designed to hold companies accountable and improve their environmental, social, and economic performance relative to their stakeholders’ expectations.

Conclusion

In conclusion, social media platforms have created an entirely new mechanism for corporate speech to be implicated. Companies should proceed cautiously when covering social, political, environmental, and related concerns and their methods of information dissemination as well as the possible effects their posts may have on business performance and reputation overall.

Miracles Can Be Misleading

Want to lose 20 pounds in 4 days? Try this *insert any miracle weight-loss product* and you’ll be skinny in no time!

Miracle weight-loss products (MWLP) are dietary supplements that either work as appetite suppressants or forcefully induce weight loss. These products are not approved or indicated by pharmaceutical regulators for weight loss. Social media users are continuously bombarded with the newest weight-loss products via targeted advertisements and endorsements from their favorite influencers. Users are force-fed false promises of achieving the picture-perfect body while companies profit off their delusions. Influencer marketing has increased significantly as social media becomes more and more prevalent: 86 percent of women use social media for purchasing advice, and 70 percent of teens trust influencers more than traditional celebrities. If you’re on social media, then you’ve seen your favorite influencer endorsing some form of MWLP, and you probably thought to yourself, “well, if Kylie Jenner is using it, it must be legit.”

The advertisements of MWLP promote an unrealistic and oversexualized body image. This trend of selling skinny has detrimental consequences, often leading to body-image issues such as body dysmorphia and various eating disorders. In 2011, the Florida House Experience conducted a study among 1,000 men and women. The study revealed that 87 percent of the women and 65 percent of the men compared their bodies to those they saw on social media, and that 50 percent of the women and 37 percent of the men viewed their bodies unfavorably in that comparison. In 2019, Project Know, a nonprofit organization that studies addictive behaviors, conducted a study suggesting that social media can worsen genetic and psychological predispositions to eating disorders.

Who Is In Charge?

The collateral damage that MWLP advertisements inflict on social media users’ body image is a societal concern. As the world becomes more digital, even more creators of MWLP are going to rely on influencers to generate revenue for their products. But who is in charge of monitoring the truthfulness of these advertisements?

In the United States, the Federal Trade Commission (FTC) and the Food and Drug Administration (FDA) are the two federal regulators responsible for promulgating regulations relating to dietary supplements and other MWLP. While the FDA is responsible for the labeling of supplements, it lacks jurisdiction over advertising. The FTC is therefore primarily responsible for advertisements that promote supplements and over-the-counter drugs.

The FTC regulates MWLP advertising through the Federal Trade Commission Act of 1914 (the Act). Sections 5 and 12 of the Act collectively prohibit “false advertising” and “deceptive acts or practices” in the marketing and sale of consumer products, and grant the FTC authority to take action against offending companies. An advertisement violates the Act when it is false, misleading, or unsubstantiated. An advertisement is false or misleading when it contains an “objective, material representation that is likely to deceive consumers acting reasonably under the circumstances,” and unsubstantiated when it lacks “a reasonable basis for its contained representation.” With the rise of influencer marketing, the Act also requires influencers to clearly disclose when they have a financial or other relationship with the product they are promoting.

Under the Act, the FTC has taken action against companies that falsely advertise MWLP. The FTC typically brings enforcement claims against companies by alleging that the advertiser’s claims lack substantiation. To determine the specific level and type of substantiation required, the FTC considers what are known as the “Pfizer factors,” established in In re Pfizer. These factors include:

    • The type and specificity of the claim made.
    • The type of product.
    • The possible consequences of a false claim.
    • The degree of reliance by consumers on the claims.
    • The type, and accessibility, of evidence adequate to form a reasonable basis for making the particular claims.

In 2014, the FTC applied the Pfizer factors when it brought an enforcement action seeking a permanent injunction against Sensa Products, LLC. Since 2008, Sensa had sold a powdered weight-loss product that could allegedly make an individual lose 30 pounds in six months without dieting or exercise. The company advertised its product via print, radio, endorsements, and online ads. The FTC claimed that Sensa’s marketing was false and deceptive because the company lacked evidence to support its health claims, i.e., losing 30 pounds in six months. The FTC further claimed that Sensa violated the Act by failing to disclose that its endorsers were given financial incentives for their customer testimonials. Ultimately, Sensa settled, and the FTC was granted the permanent injunction.

What Else Can We Do?

Currently, the FTC, utilizing its authority under the Act, is the main legal recourse for removing these deceitful advertisements from social media. Unfortunately, social media platforms such as Facebook, Twitter, and Instagram cannot be held liable for the posts of other users. Under section 230 of the Communications Decency Act, “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” That means social media platforms cannot be held responsible for misleading MWLP advertisements, regardless of whether the advertisement comes through an influencer or the company’s own social media page, and regardless of the collateral consequences these advertisements create.

However, there are other courses of action that social media users and platforms have taken to prevent these advertisements from poisoning users’ body image. Many social media influencers and celebrities have risen to the occasion to have MWLP advertisements removed. In fact, in 2018, Jameela Jamil, an actress starring on The Good Place, launched an Instagram account called I Weigh, which “encourages women to feel and look beyond the flesh on their bones.” Influencer activism has led Instagram and Facebook to block users under the age of 18 from viewing posts advertising certain weight-loss products or other cosmetic procedures. While these are small steps in the right direction, more work certainly needs to be done.

Is Social Media Really Worth It?

 

Human beings are naturally social. We interact with one another every single day in many different ways, and today one of the most common is social media. Each year, the number of individuals using social media increases. The number of social media users worldwide in 2019 was 3.484 billion, up 9% from 2018. The numbers increased dramatically during the 2020 COVID-19 pandemic: in 2020, the number of social media users jumped to 4.5 billion, and it increases every day.

Along with the increasing number of social media users, the number of individuals suffering from mental health issues is also increasing. Mental health is defined as a state of well-being in which people understand their abilities, solve everyday life problems, work well, and make a significant contribution to the lives of their communities. It is very interesting to think about how and why social media can affect an individual’s mental state so greatly. The Displaced Behavior Theory may help explain why social media shows a connection with mental health. According to the theory, people who spend more time in sedentary behaviors such as social media use have less time for face-to-face social interaction and physical activity, both of which have been proven to be protective against mental disorders. For example, the more time an individual spends on social media, the less time that individual spends on their off-screen social relationships.

Believe it or not, many studies have linked the use of Facebook in young adults to increased levels of anxiety, stress, and depression. I know from my own personal experience that life changed greatly when Facebook was introduced to my generation in middle school. We went from walks around town, movie dates, and phone calls to sitting in front of a computer screen for hours trying to figure out who posted the best profile picture that night or who received the most likes and comments on a post. Based on my own experiences, I believe this is when cyberbullying became a huge issue. Individuals, especially young teens, take to heart everyone’s opinions and comments on social media sites like Facebook, Instagram, and Snapchat. This is why mental health is associated with the use of social media: these platforms create a lot of pressure to project the image that others want to see, almost like a popularity contest.

It makes me wonder: how far is too far? When will social media platforms truly censor cyberbullying and put a stop to the rise of mental health issues associated with using these sites? Studies have shown that these platforms cause serious mental health problems, and the individuals most affected range from 12 to 17 years of age. I believe that regulating the age groups allowed to join these sites may help stop the detrimental effects they have on teenagers. It boggles my mind to think that many teenagers would still be alive, or would not suffer from mental health issues, had they never downloaded a social media platform. As parents, friends, and family members, we really have to ask whether downloading social media platforms is worth it.

Can you think of any solutions to this growing problem? At what age would you let your child use social media?

 

A Uniquely Bipartisan Push to Amend/Repeal CDA 230

Last month, I wrote a blog post about the history and importance of the Communications Decency Act, section 230 (CDA 230). I ended that blog post by acknowledging the recent push to amend or repeal section 230 of the CDA. In this blog post, I delve deeper into the politics behind the push to amend or repeal this legislation.

“THE 26 WORDS THAT SHAPED THE INTERNET”

If you are unfamiliar with CDA 230, it is the primary legislation governing the internet. Also known as “the 26 words that shaped the internet,” the act specifically articulates Congress’s view that the internet has been able to flourish with a “minimum of government regulation.” This language has resulted in a largely unregulated internet, ultimately leading to problems concerning misinformation.

Additionally, CDA 230(c)(1) limits civil liability for third-party posts that social media companies publish. This has caused problems because social media companies lack motivation to filter and censor posts that contain misinformation.

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230).

Section 230’s liability shield has been extended far beyond Congress’s original intent, which was to protect social media companies against defamation claims. These features of the legislation have resulted in a growing call to update section 230.

In this day and age, an idea or movement rarely gains bipartisan support. Interestingly, though, amending or repealing section 230 has gained recent bipartisan support. As expected, however, each party has different reasons why the law should be changed.

BIPARTISAN OPPOSITION

Although the two political parties agree that the legislation should be amended, their reasoning stems from different places. Republicans tend to criticize CDA 230 for allowing social media companies to selectively censor conservative actors and posts. In contrast, Democrats criticize the law for allowing social media companies to disseminate false and deceptive information.

DEMOCRATIC OPPOSITION

On the Democratic side of the aisle, President Joe Biden has repeatedly called for Congress to repeal the law. In an interview with The New York Times, President Biden was asked about his personal view regarding CDA 230, to which he replied…

“it should be revoked. It should be revoked because it is not merely an internet company. It is propagating falsehoods they know to be false, and we should be setting standards not unlike the Europeans are doing relative to privacy. You guys still have editors. I’m sitting with them. Not a joke. There is no editorial impact at all on Facebook. None. None whatsoever. It’s irresponsible. It’s totally irresponsible.”

House Speaker Nancy Pelosi has also voiced opposition, calling CDA 230 “a gift” to the tech industry that could be taken away.

The left has often credited the law with fueling misinformation campaigns, like Trump’s voter fraud theory and false COVID information. In response, social media platforms began marking certain posts as unreliable. This, in turn, fed Republicans’ opposition to section 230.

REPUBLICAN OPPOSITION

Former President Trump voiced his opposition to CDA 230 numerous times. He first started calling for the repeal of the legislation in May of 2020, after Twitter flagged two of his tweets regarding mail-in voting with a warning label that stated “Get the facts about mail-in ballots.” In December 2020, Trump, then the sitting president, threatened to veto the National Defense Authorization Act, the annual defense funding bill, if CDA 230 was not revoked. His opposition was so strong that in May 2020 he issued an Executive Order urging the government to revisit CDA 230. Within the order, the former president wrote…

“Section 230 was not intended to allow a handful of companies to grow into titans controlling vital avenues for our national discourse under the guise of promoting open forums for debate, and then to provide those behemoths blanket immunity when they use their power to censor …”

The executive order also asked the Federal Communications Commission to write regulations that would remove protections for companies that “censored” speech online. Although the order didn’t technically affect CDA 230, and was later revoked by President Biden, it resulted in increased attention on this archaic legislation.

LONE SUPPORTERS

Support for the law has not completely vanished, however. As expected, many social media giants support leaving CDA 230 untouched. The Internet Association, an industry group representing some of the largest tech companies, including Google, Facebook, Amazon, and Microsoft, recently warned that the “best of the internet would disappear” without section 230 and that its repeal would subject numerous companies to an array of lawsuits.

In a Senate Judiciary hearing in October 2020, Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey warned that revoking Section 230 could…

“collapse how we communicate on the Internet.”

However, Mark Zuckerberg took a more moderate position as the hearing continued, telling Congress that he thought lawmakers should update the law.

Facebook’s acknowledgment that section 230 should be updated is likely a response to public pressure and increased awareness. Regardless, it signals a real chance that section 230 will be updated in the future, since Facebook is one of the largest social media companies protected by it. A complete repeal of the law would have such major impacts, however, that that scenario seems unlikely. Nevertheless, growing calls for change and a Democratic-controlled Congress point to a likelihood of future revision of the section.

DIFFERING OPINIONS

Although both sides of Washington, and even some social media companies, agree the law should be amended, the two sides differ greatly on how to change it.

As mentioned before, President Biden has voiced his support for repealing CDA 230 altogether. Alternatively, senior members of his party, like Nancy Pelosi have suggested simply revising or updating the section.

Republican Senator Josh Hawley recently introduced legislation to amend section 230. The proposed legislation would require companies to prove a “duty of good faith” when moderating their sites in order to receive section 230 immunity, and it includes a $5,000 fee for companies that don’t comply.

Adding to the confusion of the section 230 debate, many fear the possible implications of repealing or amending the law.

FEAR OF CHANGE

Because CDA 230 has been referred to as “the first amendment of the internet,” many people fear that repealing this section altogether would limit free speech online. Although President Biden has voiced his support for this approach, it seems unlikely to happen, as it would have massive implications.

One major implication of repealing or amending CDA 230 is that it could open the door to numerous lawsuits against social media companies. Not only would major social media companies be affected, but even smaller companies, like Slice, could become the subject of defamation litigation simply by allowing reviews to be posted on their websites. This could leave us with fewer social media platforms, as some could not afford the legal fees. Many fear that these companies would further censor online posts for fear of being sued, which may also result in higher costs for the platforms. In contrast, companies could react by allowing anything and everything to be posted, which could create an unwelcome online environment. That would stand in stark contrast to Congress’s original intent in creating the CDA: to protect children from seeing indecent posts on the internet.

FUTURE CHANGE..?

 

Because of the intricacy of the internet and the archaic nature of CDA 230, there are many differing opinions on how to successfully fix the problems the section creates, and many fears about the consequences of getting rid of it. Are there any revisions you can think of that could successfully address the Republicans’ main concern, censorship? Can you think of any solutions for the Democrats’ concern of limiting the spread of misinformation? Do you think there is any chance that section 230 will be repealed altogether? If the legislation were repealed, would new legislation need to be created to replace CDA 230?

 

AI Avatars: Seeing is Believing

Have you ever heard of deepfake? The term deepfake comes from “deep learning,” a set of intelligent algorithms that can learn and make decisions on their own. By applying deep learning, deepfake technology replaces faces from the original images or videos with another person’s likeness.

What does deep learning have to do with switching faces?

Basically, deepfake technology allows AI to learn automatically from the data it collects, which means the more people try deepfakes, the faster the AI learns, making its content ever more realistic.

Deepfake enables anyone to create “fake” media.

How does Deepfake work?

First, an AI algorithm called an encoder collects endless face shots of two people. The encoder learns the similarities between the two faces and compresses the images into a shared representation. A second AI algorithm, called a decoder, then reconstructs the faces from that representation; feeding one person’s compressed face to the other person’s decoder performs the face swap.
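The encoder/decoder idea can be sketched in a few lines. The following is a toy numpy illustration of the data flow only: the random weights stand in for trained networks, and a real deepfake pipeline trains deep convolutional models on thousands of aligned face crops.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for training data: flattened 8x8 "face" crops of person A
# and person B (random pixels here; a real system uses aligned face photos).
faces_a = rng.random((100, 64))
faces_b = rng.random((100, 64))

# One shared encoder compresses any face into a small latent code...
W_enc = rng.standard_normal((64, 16)) * 0.1
# ...while each person gets their own decoder. (Weights are untrained
# placeholders; real deepfakes learn these from data.)
W_dec_a = rng.standard_normal((16, 64)) * 0.1
W_dec_b = rng.standard_normal((16, 64)) * 0.1

def encode(face):
    return np.tanh(face @ W_enc)   # compress to a 16-dim latent code

def decode(latent, W_dec):
    return latent @ W_dec          # reconstruct a 64-pixel face

# The swap: encode a frame of person A, then decode it with B's decoder,
# rendering B's appearance with A's pose and expression.
swapped = decode(encode(faces_a[0]), W_dec_b)
print(swapped.shape)  # (64,)
```

The key design point, which the sketch preserves, is that the encoder is shared between the two identities while each identity has its own decoder; that is what lets one person's expression drive the other person's face.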

Another way deepfakes swap faces is with a GAN, or generative adversarial network. A GAN pits two AI algorithms against each other, unlike the first method, where the encoder and decoder work hand in hand.
The first algorithm, the generator, is given random noise and converts it into an image. This synthetic image is then added to a stream of real photos, such as celebrity headshots. This combination of images is delivered to the second algorithm, the discriminator. After this process repeats countless times, the generator and discriminator both improve. As a result, the generator creates completely lifelike faces.
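The adversarial loop described above can be shown with a deliberately tiny example. The sketch below is a one-dimensional toy GAN in numpy, under assumed simplifications: the "real" data are samples from a normal distribution, and both the generator and the discriminator are single linear maps rather than image networks. It only demonstrates the alternating update pattern, not a face generator.

```python
import numpy as np

rng = np.random.default_rng(1)
lr = 0.05

# "Real" data the generator must learn to mimic: samples from N(4, 1).
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

a, b = 1.0, 0.0    # generator g(z) = a*z + b
w, c = 0.1, 0.0    # discriminator d(x) = sigmoid(w*x + c)

for step in range(2000):
    z = rng.normal(0.0, 1.0, 32)
    x_fake = a * z + b
    x_real = real_batch(32)

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w -= lr * np.mean((d_real - 1) * x_real + d_fake * x_fake)
    c -= lr * np.mean((d_real - 1) + d_fake)

    # Generator step (non-saturating loss): push d(fake) toward 1.
    d_fake = sigmoid(w * x_fake + c)
    grad_x = (d_fake - 1) * w          # gradient of -log d(x_fake) w.r.t. x
    a -= lr * np.mean(grad_x * z)
    b -= lr * np.mean(grad_x)

# The generator's offset b should have drifted toward the real mean of 4.
print(f"generator offset after training: {b:.2f}")
```

Even in this toy, the dynamic matches the text: each side only improves because the other keeps improving, which is why deepfake quality rises with use.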

For instance, artist Bill Posters used deepfake technology to create a fake video of Mark Zuckerberg saying that Facebook’s mission is to manipulate its users.

Real enough?

How about this. Consider having Paris Hilton’s famous quote, “If you don’t even know what to say, just be like, ‘That’s hot,’” replaced by Vladimir Putin, President of Russia. Those who don’t know either will believe that Putin is a Playboy editor-in-chief.

Yes, we can all laugh at these fake jokes. But when something becomes overly popular, it has to come with a price.

Originally, deepfake was developed by an online user of the same name for the purpose of entertainment, as the user had put it.

Yes, Deepfake meant pornography.

The biggest problem with deepfakes is that it is challenging to detect the difference and figure out which version is the original. It has become more than just superimposing one face onto another.

Researchers found that more than 95% of deepfake videos were pornographic, and 99% of those videos had faces replaced with female celebrities. Experts explained that these fake videos lead to the weaponization of artificial intelligence used against women, perpetuating a cycle of humiliation, harassment, and abuse.

How do you spot the difference?

As mentioned earlier, the algorithms are fast learners, so with every breath we take, deepfake media becomes more realistic. Luckily, research showed that deepfake faces do not blink normally, or sometimes do not blink at all. That sounds like one easy method to remember. Well, let's not get ahead of ourselves just yet. When it comes to machine learning, nearly every problem gets corrected as soon as it gets revealed. That is how algorithms learn. So, unfortunately, the famous blink issue has already been solved.
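For illustration, the blink cue mentioned above is commonly measured with the eye aspect ratio (EAR): the ratio of an eye's vertical landmark distances to its horizontal one, which collapses toward zero when the eye closes. A sketch, assuming six eye landmarks have already been located by a separate face-landmark detector (the landmark coordinates below are invented):

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered
    corner, upper-left, upper-right, corner, lower-right, lower-left."""
    p1, p2, p3, p4, p5, p6 = (np.asarray(p, float) for p in eye)
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

BLINK_THRESHOLD = 0.2   # empirically, EAR drops below roughly 0.2 mid-blink

open_eye = [(0, 0), (2, 3), (4, 3), (6, 0), (4, -3), (2, -3)]
shut_eye = [(0, 0), (2, 0.4), (4, 0.4), (6, 0), (4, -0.4), (2, -0.4)]
```

Running this over a video's frames and counting threshold crossings gives a blink rate; an unnaturally low rate was the tell in early deepfakes, until generators learned to blink.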

But not so fast. We humans may not learn as quickly as machines, but we can be attentive and creative, qualities that tin cans cannot possess, at least for now.
It only takes a little extra attention to detect a deepfake. Ask these questions to see through the magic:

Does the skin look airbrushed?
Does the voice synchronize with the mouth movements?
Is the lighting natural, or does it make sense to have that lighting on that person’s face?

For example, the background may be dark, but the person may be wearing a pair of shiny glasses reflecting the sun’s rays.

Oftentimes, deepfake content is labeled as deepfake because creators want to present themselves as artists and show off their work.
In 2018, software named Deeptrace was developed to detect deepfake content. A Deeptrace Labs report found that deepfake videos are proliferating online and that their rapid growth is "supported by the growing commodification of tools and services that lower the barrier for non-experts—from well-maintained open source libraries to cheap deepfakes-as-a-service websites."

The pros and cons of deepfake

It may be self-explanatory to name the cons, but here are some of the risks deepfakes pose:

  • Destabilization: the misuse of deepfake can destabilize politics and international relations by falsely implicating political figures in scandals.
  • Cybersecurity: the technology can also negatively influence cybersecurity by having fake political figures incite aggression.
  • Fraud: audio deepfake can clone voices to convince people to believe that they are talking to actual people and induce them into giving away private information.

Well then, does deepfake technology have any pros beyond entertainment value? Surprisingly, a few:

  • Accessibility: deepfake creates various vocal personas that can turn text into speech, which can help people with speech impediments.
  • Education: deepfake can deliver innovative lessons that are more engaging and interactive than traditional ones. For example, deepfake can bring famous historical figures back to life to explain what happened during their time. Used responsibly, deepfake technology can serve as a better learning tool.
  • Creativity: instead of hiring a professional narrator, artificial storytelling built on audio deepfakes can tell a captivating story at a fraction of the cost.

If people use deepfake technology with high ethical and moral standards, it can create opportunities for everyone.

Case

In a recent custody dispute in the UK, the mother presented an audio file to prove that the father had no right to take away their child. In the audio, the father was heard making a series of violent threats towards his wife.

The audio file was compelling evidence. Just when everyone expected the mother to walk out with a smile on her face, the father's attorney sensed something was not right. The attorney challenged the evidence, and forensic analysis revealed that the audio had been doctored using deepfake technology.

This lawsuit is still pending. But do you see the larger problem? We are living in an era where evidence tampering is available to anyone with an Internet connection. Determining whether evidence has been altered will require far more scrutiny.

Current legislation on deepfakes

The National Defense Authorization Act for Fiscal Year 2021 (“NDAA”), which became law when Congress voted to override former President Trump’s veto, requires the Department of Homeland Security (“DHS”) to issue an annual report on manipulated media and deepfakes for the next five years.

So far, only three states have taken action against deepfake technology.
On September 1, 2019, Texas became the first state to prohibit the creation and distribution of deepfake content intended to harm candidates for public office or influence elections.
Similarly, California bans the creation of “videos, images, or audio of politicians doctored to resemble real footage within 60 days of an election.”
Also in 2019, Virginia banned deepfake pornography.

What else does the law say?

Deepfakes are not illegal per se. But depending on the content, a deepfake may breach data protection law, infringe copyright, or constitute defamation. Additionally, sharing non-consensual content or committing a revenge porn offense is punishable under state law. For example, in New York City, the penalties for a revenge porn offense are up to one year in jail and a fine of up to $1,000 in criminal court.

Henry Ajder, head of threat intelligence at Deeptrace, raised another issue: “plausible deniability,” where deepfake can wrongfully provide an opportunity for anyone to dismiss actual events as fake or cover them up with fake events.

What about the First Amendment rights?

The First Amendment of the U.S. Constitution states:
“Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.”

There is no doubt that injunctions against deepfakes are likely to face First Amendment challenges. The First Amendment will be the biggest challenge to overcome. Even if the lawsuit survives, lack of jurisdiction over extraterritorial publishers would inhibit their effectiveness, and injunctions will not be granted unless under particular circumstances such as obscenity and copyright infringement.

How does defamation law apply to deepfake?


Defamation is a statement that injures a third party’s reputation. To prove defamation, a plaintiff must show all four of the following elements:

1) a false statement purporting to be fact;

2) publication or communication of that statement to a third person;

3) fault amounting to at least negligence; and

4) damages, or some harm caused to the person or entity who is the subject of the statement.

As you can see, deepfake claims are not likely to succeed under defamation law because it is difficult to prove that the content was intended as a statement of fact. All a defendant needs to protect themselves from a defamation claim is the word “fake” somewhere in the content — and conveniently, it is built right into the name deep“fake.”

Pursuing a defamation claim against nonconsensual deepfake pornography also poses a problem. The central issue in such a claim is the lack of consent, and current defamation law does not ask whether the victim consented to the publication.

To reflect the transformative impact of artificial intelligence, I would suggest new legislation to regulate AI-backed technologies like deepfakes. Perhaps this could lower the hurdles that plaintiffs face.
What are your suggestions regarding deepfakes? Share your thoughts!


Is your data protected? By whom? What rights do you have over your personal information once it has entered the World Wide Web?

  • Who doesn’t protect your data?
  • History of personal information (“data”) legislation
  • A July 2021 update on the start of legislation regarding data protection on the internet
  • What you can do to protect your data for now

Ever since the widely publicized 2018 Facebook data breach, I have been curious about exactly what data can be stored, used, and “understood” by computer algorithms, and what the legal implications may be. At first, I was excited about this as a new tool. I tend to shop for things that are, at least in branding, sustainably sourced and environmentally friendly, so the idea that I would only be advertised those kinds of items — no plastics that might off-gas — sounded great to me. It wasn’t until I heard some of my peers’ concerns that I seriously questioned the dangers of data collection and how this information could be used to cause harm.

Social media websites, commerce websites, and mobile apps have become integral parts of many of our everyday lives. We use them to connect with friends online and to find like-minded people through virtual groups from across the world. These sites are used to share private, work, and “public” information. The data collected from social media can be seen as either a tool or an invasion of privacy. User data could give us access to knowledge about our own human nature; for example, it can tell us about different demographics and how users use each platform. However, it also raises new questions about what should be private and who owns the data created by a user’s activity (the platform/company or the individual using it).

What are our governments doing to protect our data — personal information — rights? Do individuals even have rights over their personal information on the internet? If so, how will these rights be protected or regulated? And how will legislation attempt to regulate businesses? These are all questions I have wondered about and hope to start answering here. After watching Mark Zuckerberg explain to congressmen how companies make money on the internet while remaining free to use, I had little faith that our legal system would catch up with how companies and programmers are using these new technologies. Many large social media companies stay free by selling data and virtual advertising space, which raises its own legal issues. Would you rather pay for Facebook, Instagram, Twitter, Snapchat, etc., or allow them to sell your data? If we demand regulation and privacy for our data, we may need to make that choice.

 Privacy on the Internet 

Federally, this area of law remains unregulated in the United States, leaving matters up to the tech and social media companies for now. However, some states are starting to create their own laws; see the legislation tracker below.

US State Privacy Legislation Tracker

How has the government regulated these areas thus far? 

There are no general consumer privacy and security laws at the federal level. However, as you may remember, the US government imposed a whopping $5 billion penalty for Facebook’s data breach. The order also required “Facebook to restructure its approach to privacy… and establishes strong new mechanisms to ensure that Facebook executives are accountable for the decisions they make about privacy, and that those decisions are subject to meaningful oversight” (FTC). This penalty was imposed under the Federal Trade Commission Act.

This act, passed in 1914, created a government agency and prohibited companies from engaging in “unfair or deceptive acts or practices” (Section 5, FTC Act). It protected consumers from misleading or boldly false advertising by some of America’s largest consumer brands (Federal Trade Commission Overview).

What is interesting here is why Facebook had to pay a settlement under the Federal Trade Commission Act, which reaches only companies that “boldly false advertise,” “mislead,” or “misrepresent.” Facebook told consumers that the site did not sell their data and that users could restrict Facebook’s access to their data by clicking certain boxes. The opposite was true. Facebook did not violate any internet privacy laws — there weren’t any. In this case, what applied was a piece of 20th-century legislation created, in large part, to protect consumers from companies selling fake merchandise. If Facebook had said nothing about data privacy on its website, it would not have been liable for anything. Since this case, more legal regulations have been introduced.


US Privacy Act of 1974 

 

In order to understand where the legal field is going, it is important to understand the history of US privacy rights. This act restricted what personal information US government agencies could store in their first computer databases. It also gave individuals certain rights, such as the right to access any of the data held about them by government agencies and the right to correct errors. Finally, it restricted what information could be shared between federal and non-federal agencies and how, allowing it only under specific circumstances.

HIPAA, GLBA, COPPA

These three acts further protect individuals’ personal information.

HIPAA, the Health Insurance Portability and Accountability Act, was put in place to regulate health insurance and protect people’s personal health information. The act laid down ground rules for confidentiality requirements (HIPAA for Professionals).

The Gramm-Leach-Bliley Act (GLBA), passed in 1999, protects nonpublic personal information, defined as “any information collected about an individual in connection with providing a financial product or service, unless that information is otherwise publicly available.”

The Children’s Online Privacy Protection Act (COPPA), enacted in 1998, regulates the personal information collected from minors. The law “imposes certain requirements on operators of websites or online services directed to (or have actual knowledge of) children under 13 years of age.”

 

Worldwide Internet Data Privacy 

Currently, the US does not have any federal-level consumer data privacy or security law. According to the United Nations Conference on Trade and Development, 107 countries have data privacy rules in place, including 66 developing nations.


The European Union passed the General Data Protection Regulation (GDPR) in 2018. The law went through a long legislative process: it was officially approved in 2016 and went into effect in May 2018. It places specific obligations on data processors and cloud providers. The regulation also gives individuals the ability to sue data processors directly for damages, limits and minimizes the retention of data kept by default, and gives consumers the right to correct incorrect information. The GDPR also requires explicit consent when consumers hand over their data: processing personal data is generally prohibited unless it is expressly allowed by law or the data subject has consented to the processing.
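The explicit-consent default maps naturally onto code. Here is a hypothetical sketch — the class and function names are invented for illustration — of a processing gate that refuses to run unless an explicit consent for the stated purpose is on record:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    purpose: str          # e.g. "analytics" or "marketing_email"
    granted: bool
    timestamp: datetime

class ConsentError(Exception):
    pass

def process_personal_data(data, consents, purpose):
    """Refuse to process unless the data subject explicitly consented
    to this specific purpose — processing is prohibited by default."""
    ok = any(c.purpose == purpose and c.granted for c in consents)
    if not ok:
        raise ConsentError(f"no explicit consent recorded for '{purpose}'")
    return {"processed": True, "purpose": purpose, "fields": sorted(data)}

consents = [ConsentRecord("analytics", True, datetime.now(timezone.utc))]
result = process_personal_data({"email": "a@b.c"}, consents, "analytics")

try:
    process_personal_data({"email": "a@b.c"}, consents, "marketing_email")
except ConsentError:
    blocked = True   # processing without consent is stopped, as the default requires
```

Note that consent is checked per purpose, not globally; a user who consented to analytics has not consented to marketing, which mirrors how the regulation treats purposes.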

The U.S.’s strictest state so far:

So far, only three states — California, Colorado, and Virginia — have actually enacted comprehensive consumer data privacy laws, according to the National Conference of State Legislatures as of July 22, 2021. The closest US law to the EU’s GDPR is California’s Consumer Privacy Act (CCPA), currently the strictest US regulation of internet data privacy. The act requires businesses to clearly state what types of personal data will be collected from consumers and how this information will be used, managed, shared, and sold by companies or entities doing business with and compiling information about California residents (CCPA and GDPR comparison chart). This “landmark law” secures new privacy rights for California consumers, including:

  • The right to know what personal information a business collects about them and how it is used and shared;
  • The right to delete personal information collected from them (with some exceptions);
  • The right to opt out of the sale of their personal information; and
  • The right to non-discrimination for exercising their CCPA rights.
New York State Privacy Law Update June 2021 

In the New York legislature, a number of privacy bills were pending, including the “It’s Your Data Act,” the “New York Privacy Act,” the “Digital Fairness Act,” and the “New York Data Accountability and Transparency Act.” Most of the bills never made it out of committee.


The “It’s Your Data Act” proposed to provide protections and transparency in the collection, use, retention, and sharing of personal information. 

 

From the New York State Senate Summary:

The “NY Privacy Act” “would require companies to disclose their methods of identifying personal information, to place special safeguards around data sharing, and to allow consumers to obtain the names of all entities with whom their information is shared,” and would create a special account to fund a new Office of Privacy and Data Protection. It is currently on the floor calendar, and no action has yet been taken on it.

 

The definition of personal information here — “any information related to an identified or identifiable person” — includes a very extensive list of identifiers: biometric data, email addresses, network information, and more.


What data privacy rights have been identified thus far?


CONSUMER RIGHTS

  • The right of access to personal information collected or shared – The right for a consumer to access from a business/data controller the information or categories of information collected about a consumer, the information or categories of information shared with third parties, or the specific third parties or categories of third parties to which the information was shared; or, some combination of similar information.
  • The right to rectification — The right for a consumer to request that incorrect or outdated personal information be corrected but not deleted.
  • The right to deletion — The right for a consumer to request deletion of personal information about the consumer under certain conditions.
  • The right to restriction of processing — The right for a consumer to restrict a business’s ability to process personal information about the consumer.
  • The right to data portability — The right for a consumer to request personal information about the consumer be disclosed in a common file format.
  • The right to opt out of the sale of personal information — The right for a consumer to opt out of the sale of personal information about the consumer to third parties.
  • The right against automated decision making — A prohibition against a business making decisions about a consumer based solely on an automated process without human input.
  • A consumer private right of action — The right for a consumer to seek civil damages from a business for violations of a statute.
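To make the consumer rights above concrete, here is a toy in-memory store — all names are invented for illustration — whose operations correspond to several of these rights:

```python
import json

class UserDataStore:
    """Toy store mapping consumer data rights to concrete operations."""

    def __init__(self):
        self._records = {}          # user_id -> personal information
        self._opted_out = set()     # users who opted out of data sales

    def collect(self, user_id, info):
        self._records.setdefault(user_id, {}).update(info)

    def access(self, user_id):                  # right of access
        return dict(self._records.get(user_id, {}))

    def rectify(self, user_id, field, value):   # right to rectification
        self._records[user_id][field] = value

    def delete(self, user_id):                  # right to deletion
        self._records.pop(user_id, None)

    def export_portable(self, user_id):         # right to data portability
        return json.dumps(self._records.get(user_id, {}), indent=2)

    def opt_out_of_sale(self, user_id):         # right to opt out of sale
        self._opted_out.add(user_id)

    def may_sell(self, user_id):
        return user_id not in self._opted_out

store = UserDataStore()
store.collect("u1", {"email": "old@example.com"})
store.rectify("u1", "email", "new@example.com")   # correct outdated info
store.opt_out_of_sale("u1")
```

Of course, the legal difficulty is not implementing these operations but obligating businesses to offer them; the sketch only shows that each enumerated right has a well-defined technical counterpart.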


BUSINESS OBLIGATIONS

While many rights and obligations are starting to be recognized, again, there is not yet federal legislation to protect them.

 


So, what can you do to protect yourself?

  1. Update and optimize your privacy settings.
     • Review what apps have access to your Facebook data and what they can do with that access.
     • Delete access for all apps you no longer use or need.
  2. Share with care. Be aware that when you post a picture or message, you may be inadvertently sharing personal details and sensitive data with strangers.
  3. Block “supercookie” trails. Supercookies are bits of data that advertising networks can store on your computer; they are “a much more invasive type of behavior-tracking program than traditional cookies that is also harder to circumvent.” Supercookies are harder to detect and get rid of because they hide in various places and can’t be automatically deleted. A supercookie owner can capture a ton of your unique personal data, such as your identity, behavior, preferences, how long you’re online, when you’re most active, and more. Supercookies can communicate across different websites, stitching together your personal data into a highly detailed profile.
  4. Set up a private email identity.
  5. Update your software. Many software companies release updates that patch bugs and vulnerabilities as they are discovered.
  6. Use app lockers. App lockers provide an extra level of security for apps.
  7. Encrypt your data. There are free apps available to encrypt, or scramble, data so that it cannot be read without a key.
  8. Create long and unique passwords for all accounts and use multi-factor authentication whenever possible. This additional layer of security makes it harder for hackers to get into your accounts (Data Privacy Senate).
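As a small aside on the password tip, Python’s standard `secrets` module provides the kind of cryptographically secure randomness suited to generating long, unique passwords. A minimal sketch (the function name and alphabet are arbitrary choices):

```python
import secrets
import string

def make_password(length=16):
    """Generate a password using a cryptographically secure RNG,
    drawing from letters, digits, and a few symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = make_password(20)
```

Because each character is drawn independently from a secure source, two generated passwords are vanishingly unlikely to collide — which is the point of “long and unique.”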
