Social Media Got Me Fired!

Have you ever wondered whether, in the age of social media, employers look you up, and whether what they find affects your chances of getting the job? Keep reading to find out!

Employers look at your social media profiles to learn anything they can about you before interviewing or hiring you. A 2022 Harris Poll found that 70% of surveyed employers would screen potential employees’ social media profiles before offering them a position. A CareerBuilder poll found that 54% of employers had ruled out a candidate after discovering something they disagreed with on the candidate’s social media profile.

Pre-employment background checks now go beyond criminal and public records and employment history. If hiring managers can’t find you online, there is an increased chance they will not move forward with your application. In fact, 21% of employers polled said they are not likely to consider a candidate who does not have a social media presence.

However, don’t fret; social media can also be the reason you get your next job. The Aberdeen Group found that 73% of job seekers between 18 and 34 obtained their last job through social media. People seeking employment have no shortage of places to find jobs, on platforms such as LinkedIn, Stack Overflow, GitHub, Facebook, TikTok, and other websites. A CareerBuilder survey found that 44% of hiring managers and employers have discovered content on a candidate’s social media profile that led them to hire the candidate. Because of this shift in hiring and recruitment, employers now have to engage the newer generation of the workforce through competitive social media advertising. Job seekers and employers alike use their social media profiles for networking, sourcing, and building recognition.

So is social media a double-edged sword? What you post can lessen your chances of being employed; at the same time, many jobs are posted on social media, which can be the reason you get hired. Use your social media, e.g., LinkedIn, to promote yourself, and stay active on the platform at least once a week. Employers are interested in how you use your social media.

Regarding Facebook, Instagram, and TikTok, keep them neutral and clean. Keep your accounts private, and before you post a picture, ask yourself whether you would be comfortable with the CEO or your boss seeing it. If the answer is yes, go ahead and post; if you are unsure, your best bet is not to post it.

We walk a fine line in the great age of social media; the dos and don’ts vary depending on your job and your field. Someone working for Google would have a different social media presence and posts than someone working for the Prosecutor’s Office. To be on the safe side, you can always turn to your employee handbook once you are hired, or ask HR.

Harry Kazakian stated in an article for Forbes that he screens potential employees’ social media to eliminate potential risks and ensure employee harmony in the workplace. Specifically, Kazakian looks to avoid candidates who post constant negative content, patterns of overt anger, suggestions of violence, associations with questionable characters, signs of crass behavior, or even too many political posts.

Legally, employers may use social media to recruit candidates by advertising job openings, and to perform background checks confirming that a candidate is qualified. Employers may also monitor your website activity, e-mail account, and instant messages. These rights, however, cannot be used as a means of discrimination.

Half the states in the US have enacted laws barring employers from demanding access to employees’ social media accounts. California prohibits employers from asking current or prospective employees for their social media passwords. Maryland, Virginia, and Illinois offer similar protections to job seekers, so they do not have to divulge their social media passwords or provide account access. California, Illinois, New Jersey, and New York, among other states, have also enacted laws prohibiting employers from discriminating based on an employee’s lawful off-duty conduct.

Federal law prohibits employers from discriminating against a prospective or current employee based on information on the employee’s social media relating to race, color, national origin, gender, age, disability, or immigration or citizenship status. Employees should still be conscious of what information they display on social media. Note, however, that these federal protections apply only to employers of a certain size: Title VII, the ADA, and GINA apply to private employers, educational institutions, and state and local governments with 15 or more employees, while the ADEA applies to employers with 20 or more employees.

California, Colorado, Connecticut, Illinois, Minnesota, Nevada, New York, North Dakota, and Tennessee all have laws that prohibit employers from firing an employee for engaging in lawful activity off the employer’s premises during nonworking hours, even if the employer finds that activity unwelcome or objectionable, so long as it does not directly conflict with the employer’s essential business interests. The courts in these states will, however, weigh the employee’s protections against the employer’s business interests; if a court rules that the employer’s interests outweigh the employee’s privacy concerns, the employer prevails. Be aware, too, that some of these laws provide explicit exemptions for employers.

Employers that use social media to screen job applicants risk liability under employment discrimination law, including Title VII of the Civil Rights Act, the Age Discrimination in Employment Act, the Equal Pay Act, and Title II of the Genetic Information Nondiscrimination Act, all enforced by the Equal Employment Opportunity Commission. Section 8(a)(3) of the National Labor Relations Act also prohibits discrimination against applicants based on union affiliation or support, so using social media to screen out applicants on this basis may lead to an unfair labor practice charge against the company.

A critical case decided in 2016, Hardin v. Dadlani, involved a hiring manager who had previously shown a preference for white female employees. The hiring manager instructed an employee to look up an applicant on Facebook and invite her for an interview “if she looks good.” The Court ruled that this statement could reasonably be construed to refer to the applicant’s race, which can establish discriminatory animus. Discriminatory animus, an intent to discriminate, may be proven by either direct or circumstantial evidence. Direct evidence is evidence that, if true, proves discriminatory animus without inference or presumption; even a single remark can show it.

Be an intelligent job candidate by knowing your rights. Companies that use third-party social media background checks must comply with disclosure and authorization requirements, because the third party is considered a consumer reporting agency under the Fair Credit Reporting Act. An employer must therefore notify the prospective or current employee that it wants to acquire a consumer report for employment purposes and must obtain the employee’s written consent.

Happy job hunting, and think before you post!

Artificial Intelligence: Putting the AI in “brAIn”

What thinks like a human, acts like a human, and now even speaks like a human…but isn’t actually human? The answer is: Artificial Intelligence.

Yes, that’s right: the futuristic self-driving smart cars, talking robots, and video calling that we once saw on The Jetsons are now more or less a reality in 2022. Much of this is thanks to the development of Artificial Intelligence.

What is Artificial Intelligence?

Artificial Intelligence (AI) is an umbrella term with many sub-definitions. Scientists have not yet agreed on one single definition, but the term itself was coined by Stanford Professor John McCarthy…all the way back in 1955. McCarthy defined artificial intelligence as “the science and engineering of making intelligent machines.” He went on to invent the list-processing language LISP, which is now used by numerous industry leaders, including Boeing (the Boeing Simplified English Checker assists aerospace technical writers) and Grammarly (a grammar checker that many of us use, and that, coincidentally, I am using as I write this piece). McCarthy is regarded as one of the founders of AI and is recognized for his contributions to the field.

Sub Categories and Technologies

Within the overarching category of AI are smaller subcategories such as Narrow AI and General AI. Beneath these subcategories are technologies, such as machine learning and its algorithms, that help the subcategories function and meet their objectives.

Narrow AI: Also known as “weak AI,” this is task-focused intelligence. These systems focus only on specific jobs, like internet searches or autonomous driving, rather than complete human intelligence. Examples are Apple’s Siri, Amazon Alexa, and autonomous vehicles.
General AI: Also known as “strong AI,” this is the combination of AI components that rivals a human’s ability to think for itself. Think of the robots in your favorite science-fiction novel. Science today still seems far from reaching General AI, which is proving much more difficult to develop than Narrow AI.

Technologies within AI Subcategories

Machine Learning requires human involvement to learn. Humans create hierarchies and pathways for both data inputs and outputs. These pathways allow the machine to learn with human intervention, but this requires more structured data for the computer.

Deep Learning allows the machine to make the pathway decisions by itself, without human intervention. Between the simple input and output layers are multiple hidden layers, referred to as a “neural network.” This network can receive unstructured raw data, such as images and text, automatically distinguish them from each other, and determine how they should be processed.
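To make the “neural network” idea concrete, here is a toy sketch in Python. It is purely illustrative: the weights and inputs are hand-picked numbers, whereas a real deep learning system would learn its weights from data. It shows an input layer feeding one hidden layer, which feeds an output layer:

```python
# Toy sketch of a feed-forward "neural network" pass in pure Python.
# Weights are fixed by hand for illustration; deep learning systems
# learn these values from data instead.

def relu(x):
    # A common activation function: negative values become zero.
    return max(0.0, x)

def layer(inputs, weights, biases):
    # Each neuron computes a weighted sum of the inputs plus a bias,
    # then applies the activation function.
    return [relu(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(inputs):
    # One hidden layer sits between the input and output layers.
    hidden = layer(inputs, [[0.5, -0.2], [0.1, 0.9]], [0.0, 0.1])
    output = layer(hidden, [[1.0, 1.0]], [0.0])
    return output

print(forward([1.0, 2.0]))
```

Stacking many such hidden layers, with learned rather than hand-picked weights, is what lets deep learning process raw images and text without a human defining the pathways.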

Both machine and deep learning have allowed business, healthcare, and other industries to flourish through the increased efficiency and time saved by minimizing human decisions. Perhaps because this technology is so new and unregulated, we have been able to see how fast innovation can grow uninhibited; regulators have been hesitant to tread in the murky waters of this new and unknown technology sector.

Regulations

Currently, there is no federal law regulating the use of AI. States seem to be in a trial-and-error phase, attempting to pass a range of laws. Many of these laws deploy AI-specific task forces to monitor and evaluate AI use in the state, or prohibit the use of algorithms in ways that unfairly discriminate based on ethnicity, race, sex, disability, or religion. A live list of pending, failed, and enacted AI legislation in each state can be found on the National Conference of State Legislatures’ website.

But what goes up must come down. While AI increases efficiency and convenience, it also poses a variety of ethical concerns, making it a double-edged sword. We explore the ups and downs of AI below and pose ethical questions that might make you stop and think twice about letting robots control our world.

Employment

With AI emerging in the workforce, many are finding that administrative and mundane tasks can now be automated. Smart contract-review systems, for example, use Optical Character Recognition (OCR) to scan documents and recognize the text in an uploaded image. The AI can then pull out standard clauses or noncompliant language and flag it for human review. This, however, still ultimately requires human intervention.
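As a rough illustration of the flagging step described above (not any particular vendor’s system; the clause names and patterns here are invented), a simple rule-based pass over OCR-extracted text might look like this:

```python
# Illustrative sketch: after OCR extracts text from a scanned contract,
# a rule-based pass flags clauses for human review. The patterns and
# clause names below are hypothetical examples, not a real rule set.
import re

FLAG_PATTERNS = {
    "auto-renewal": re.compile(r"automatically\s+renew", re.IGNORECASE),
    "unlimited liability": re.compile(r"unlimited\s+liability", re.IGNORECASE),
}

def flag_clauses(contract_text):
    """Return the names of flagged clause types found in the text."""
    return [name for name, pattern in FLAG_PATTERNS.items()
            if pattern.search(contract_text)]

text = "This agreement shall automatically renew each year."
print(flag_clauses(text))  # -> ['auto-renewal']
```

Anything flagged this way still lands on a human reviewer’s desk, which is exactly the “human intervention” the paragraph above describes.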

One growing concern with AI and employment lies in the possibility that AI may take over certain jobs completely. An example of this comes with the innovation of self-driving cars and truck drivers. If autonomous vehicles become mainstream for the large-scale transportation of goods, what will happen to those who once held this job? Does the argument that there may be “fewer accidents” outweigh the unemployment that accompanies this switch? And what if the AI fails? Could there be more accidents?

Chatbots

Chatbots are computer programs designed to simulate human communication. We often see them in online customer service settings, where AI allows customers to hold a conversation with the chatbot, ask questions about a specific product, and receive instant feedback. This cuts down on waiting times and improves service levels for the company.

While customer service chatbots may not spark any concern for the average consumer, the fact that these bots can hold conversations almost indistinguishable from those of an actual human may pose a threat elsewhere. Forget catfishing: individuals now have to worry about whether the “person” on the other side of their chatroom is a person at all, or a bot designed to elicit emotional responses from victims and eventually scam them out of their money.

Privacy

AI now gives consumers the ability to unlock their devices with facial recognition. It can also use those faces to recognize people in photos and tag them on social media sites. Aside from our faces, AI follows our behaviors and slowly learns our likes and dislikes, building a profile on us. Recently, the Netflix documentary “The Social Dilemma” discussed the controversy surrounding AI and social media use. In the film, the algorithm is depicted as three small men “inside the phone” who build a profile on one of the main characters, sending notifications during periods of inactivity from apps that are likely to generate a response. With AI, the line around what information remains undisclosed is very fine, and we must be diligently aware of what we are opting into (or out of) to protect our personally identifiable information. While this may not be a major concern for those in the United States, it raises real concerns for civilians in foreign countries under dictatorships that may use facial recognition as a tool to retain control.

Spread of Disinformation and Bias

AI is only as smart as the data it learns from. If it is fed data with a discriminatory bias or any bias at all (be it political, musical, or even your favorite movie genre) it will begin to make decisions based on that information.

We see the good in this: new movie suggestions in your favorite genre, or an ad for a sweater you didn’t know you needed. But we have also seen the spread of false information across social media sites. Often, algorithms will only show us news from sources that align with our political affiliation, because those are the sources we tend to follow and engage with. This leaves us with a one-sided view of the world and widens the gap between parties even further.
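As a purely hypothetical sketch of how that feedback loop can arise (the outlet names and engagement numbers are invented), consider a feed that ranks posts by how often the user has already engaged with each source:

```python
# Toy sketch of an engagement-ranked feed. Sources the user already
# clicks on are scored higher, so the same viewpoints keep rising to
# the top of the feed. All names and numbers here are hypothetical.

def rank_feed(posts, engagement):
    # Sort posts by the user's past engagement with each source,
    # highest first; unknown sources default to zero.
    return sorted(posts,
                  key=lambda p: engagement.get(p["source"], 0),
                  reverse=True)

posts = [
    {"source": "outlet_a", "headline": "Story A"},
    {"source": "outlet_b", "headline": "Story B"},
    {"source": "outlet_c", "headline": "Story C"},
]
# Hypothetical history: the user mostly engages with outlet_a.
engagement = {"outlet_a": 40, "outlet_b": 2}

for post in rank_feed(posts, engagement):
    print(post["source"], post["headline"])
```

Because each click feeds back into the engagement score, the favored outlet’s lead only grows over time, which is the one-sided view the paragraph above describes.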

As AI develops, we will be faced with new ethical questions every day. How do we prevent bias when it is almost human nature to begin with? How do we protect individuals’ privacy while still letting them enjoy the convenience of AI technology?

Can we have our cake and eat it too? Stay tuned in the next few years to find out…

 

Memes, Tweets, and Stocks . . . Oh, My!

 

Pop-Culture’s Got A Chokehold on Your Stocks

In just three short weeks early in January 2021, Reddit meme-stock traders bought up enough of GameStop’s stock to drive its value from a mere $17.25 per share to $325 a pop, an increase of almost 1,800%. Hedge funds like New York’s Melvin Capital Management were left devastated; some smaller hedge funds even went out of business.

Melvin was holding its GameStop stock in a short position (a trading technique in which a trader sells a borrowed security planning to buy it back later, at a lower price, during an anticipated short-term drop). As the price rose instead, the fund lost over 50% of its value, nearly $7 billion, in just under a month.
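For the curious, the arithmetic behind those figures can be sketched in a few lines of Python (illustrative only; it ignores fees, borrow costs, and position sizes):

```python
# Back-of-the-envelope arithmetic for the GameStop figures above.

def pct_change(old, new):
    # Percentage change from an old price to a new price.
    return (new - old) / old * 100

# GameStop's rise from $17.25 to $325 per share:
rise = pct_change(17.25, 325)
print(f"{rise:.0f}%")  # prints 1784%, i.e. "almost 1,800%"

# A short seller who sold borrowed shares at $17.25 must buy them
# back at the market price; at $325 the loss per share is:
loss_per_share = 325 - 17.25
print(loss_per_share)  # prints 307.75
```

This is why a short position has no ceiling on its losses: the buy-back price can rise without limit, which is how a fund can lose many times what it originally took in from the sale.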

Around 2015, Robinhood emerged: a new, free online trading platform geared toward a younger generation. Its mission was simple: “democratize” finance. By giving people the capacity to understand and participate in trading without needing an expensive broker, Robinhood made investing accessible to the masses. However, the very essence of Robinhood, putting power back in the hands of the people, was also what halted GameStop’s rise. After three weeks, Robinhood had to cease all buying and selling of GameStop shares and options because the sheer volume of trading had exceeded its cash on hand, the collateral regulators require it to hold to function as a legal trade exchange.

But what exactly is a meme stock? For starters, a meme is an idea or element of pop culture that spreads and intensifies across people’s minds. As social media has grown in popularity, so have viral pop-culture references and trends. Memes let people instantaneously spread videos, tweets, pictures, or posts that are humorous, interesting, or sarcastic, and these in turn go viral. Meme stocks therefore originate on the internet, usually in sub-Reddit threads, where users work together to identify a target stock and then promote it. Promoting a meme stock often means exploiting heavy short interest in that stock, as explained above: promoters buy, hold, sell, and rebuy as prices fluctuate to turn a profit.

GameStop was not the first, and certainly will not be the last, stock to be traded in this fashion. But it represents an important shift in the power of social media and its ability to affect the stock market. Another example of the power meme culture can have over real-world finances and the economy is Dogecoin.

Dogecoin was created as a satirical new currency, in a way mocking the hype around existing cryptocurrencies. But the positive reaction and interest it drew on social media turned the joke crypto into a practical reality. This “fun” version of Bitcoin was celebrated, listed on the crypto exchange Binance, and even cryptically endorsed by Elon Musk. More recently, in 2021, cinema chain AMC announced it would accept Dogecoin for digital gift card purchases, further bolstering the credibility of this meme-originated cryptocurrency.

Tricks of the Trade, Play at Your Own Risk

Stock trading is governed by the Securities Act of 1933, which boils down to two basic objectives: (1) to require that investors receive financial and other material information concerning securities being offered for public sale; and (2) to prohibit deceit, misrepresentations, and other fraud in the sale of securities. In order to be bought, sold, or traded, most securities must first be registered with the SEC; the primary goal of registration is to facilitate information disclosures so investors are informed before engaging. Additionally, the Securities Exchange Act of 1934 gives the SEC broad authority over the securities industry to regulate, register, and oversee brokerage firms, agents, and self-regulatory organizations (SROs). Other regulations at play include the Investment Company Act of 1940 and the Investment Advisers Act of 1940, which regulate investment companies and investment advisers, respectively. These Acts require that firms and agents who receive compensation for their advising practices be registered with the SEC and adhere to certain qualifications and strict guidelines designed to promote fair, informed investment decisions.

Cryptocurrency has grown over the years from a speculative investment into a new class of assets, and regulation is imminent. The Biden Administration recently added some clarification on crypto use and regulation through a new directive assigning power to the SEC and the Commodity Futures Trading Commission (CFTC), which were already the prominent securities regulators. In the recent Ripple Labs lawsuit, the SEC began making strides toward regulating cryptocurrency by working to classify it as a security, which would bring crypto into its regulatory domain.

Consequently, the SEC’s Office of Investor Education and Advocacy has adapted with the times and now cautions against making investment decisions based solely on information seen on social media platforms. Because social media has become integral to our daily lives, investors increasingly turn to it for information when deciding when, where, and in what to invest. This has increased the likelihood of scams, fraud, and other consequences of misinformation. These problems can arise when fraudsters disseminate false information anonymously or impersonate someone else.

 

However, there is also increasing concern about celebrity endorsements and testimonials regarding investment advice. The most common types of social media investment scams are impersonation and fake crypto investment advertisements.

 

With this rise in social media use, the laws governing investment advertisements and information are continuously developing. Regulation FD (Fair Disclosure) governs the selective disclosure of information by publicly traded companies. Reg. FD prescribes that when an issuer discloses material, nonpublic information to certain individuals or entities, it must also make a public disclosure of that information. In 2008, the SEC issued guidance allowing information to be distributed on websites so long as shareholders, investors, and the market in general were aware the website was the company’s “recognized channel of distribution.” In 2013, this was amended again to allow publishing earnings and other material information on social media, provided investors knew to expect it there.

This clarification came in light of a controversial boast by Netflix co-founder and CEO Reed Hastings on Facebook that Netflix viewers had consumed 1 billion hours of watch time per month. Hastings’s Facebook page had never previously disclosed performance stats, so investors were not on notice that this type of potentially material information, relevant to their investment decisions, would be located there. Hastings also failed to immediately remedy the situation with a public disclosure of the same information via a press release or Form 8-K filing.

In the same vein, a company’s employees may also face consequences if they like or share a post, publish a third-party link, or friend certain people without permission, if any of those actions could be viewed as an official endorsement or a means of information dissemination.

The SEC requires that certain company information be accompanied by a disclosure or cautionary disclaimer statement. Section 17(b) of the 1933 Act, more commonly known as the Anti-Touting provision, requires any securities endorsement be accompanied by a disclosure of the “nature, source, and amount of any compensation paid, directly or indirectly, by the company in exchange for such endorsement.”

To Trade, or Not to Trade? Let Your Social Media Feed Decide

With the emergence of non-professional trading platforms like Robinhood, low-cost financial technology has put investing in the hands of younger users. Likewise, the rise of Bitcoin and blockchain technologies in the early-to-mid 2010s changed the way financial firms must think about and approach new investors. The discussion of investments and information sharing that happens on these online forums creates an environment ripe for misinformation. Social media sites are vulnerable to information problems for several reasons. For starters, which posts gain attention cannot always be predicted in advance; if the wrong post goes viral, hundreds, thousands, or even millions of users may read improper recommendations. Algorithmic rabbit holes can also push users toward extremist views, with strategically placed ads accelerating the downward spiral.

Additionally, the presence of fake or spam accounts and internet trolls poses an ever more difficult problem to contain. Influencers, too, can sway large groups of followers by mindlessly promoting or interacting with bad information, or by failing to properly disclose required information. There are many other risks, but “herding” remains one of the largest. Jeff Kreisler, Head of Behavioral Science at J.P. Morgan Chase, explains:

“Herding has been a common investment trap forever. Social media just makes it worse because it provides an even more distorted perception of reality. We only see what our limited network is talking about or promoting, or what news is ‘trending’ – a status that has nothing to do with value and everything to do with hype, publicity, coolness, selective presentation and other things that should have nothing to do with our investment decisions.”

This shift to a digital lifestyle and reliance on social media for information has played a key role in how information reaches investors making decisions. Nearly 80% of institutional investors now use social media as part of their daily workflow. Of those, about 30% admit that information gathered on social media has in some way influenced an investment recommendation or decision, and another third maintain that they made at least one change to their investments as a direct result of announcements they saw on social media. In 2013, the SEC began allowing publicly traded companies to report news and earnings via their social media platforms, which has increased the flow of information to investors on those platforms. Social media also now plays a large role in financial literacy for younger generations.

The Tweet Heard Around the Market

A notable and recent example of how powerful social media warriors and internet trolls can be to the success of a company’s stock came just days after Elon Musk’s acquisition of Twitter, and only hours after the launch of his pay-for-verification Twitter Blue debacle. Insulin manufacturer Eli Lilly saw a stark drop in its stock value after a fake parody account was created under the guise of its name and tweeted that “insulin is now free.”

The account, under the Twitter handle @EliLillyandCo, bought a blue check mark and used the same logo as the real company, making it almost indistinguishable from the real thing. Consequently, the actual Eli Lilly corporate account had to tweet an apology “to those who have been served a misleading message from a fake Lilly account,” clarifying that “Our official Twitter account is @Lillypad.”

This is a perfect example, for Elon Musk and other major companies and CEOs, of just how powerful pop culture, meme culture, and internet trolls can be: armed with $8 and a single tweet, a parody account casually dropped the stock of a multi-billion-dollar pharmaceutical company by almost 5% in a matter of hours.

So, what does all this mean for the future of digital finance? It is difficult to say exactly where we are headed, but social media’s growing tether on all facets of our lives leaves much open to new regulation. Consumers should be cautious when scrolling through investment-related material, and providers should be transparent about their relationships and goals in promoting any such material. Social media is here to stay, but its regulation and use are still up for grabs.

The Rise of E-personation

Social media allows millions of users to communicate with one another on a daily basis, but do you really know who is behind the computer screen?

As social media continues to expand into the enormous entity we know today, users grow ever more susceptible to abuse online. Impersonation through electronic means, often referred to as e-personation, is a rapidly growing trend on social media. E-personation is extremely troublesome because it requires far less information than other typical forms of identity theft. To create a fake social media page, all an e-personator needs is the victim’s name, and maybe a profile picture. While creating a fake account is relatively easy for the e-personator, the impact on the victim’s life can be detrimental.

E-personation Under State Law

It wasn’t until 2008 that New York became the first state to recognize e-personation as a criminally punishable form of identity theft. Under New York law, “a person is guilty of criminal impersonation in the second degree when he … impersonates another by communication by internet website or electronic means with intent to obtain a benefit or injure or defraud another, or by such communication pretends to be a public servant in order to induce another to submit to such authority or act in reliance on such pretense.”

Since 2008, other states, such as California, New Jersey, and Texas, have also amended their identity theft statutes to include online impersonation as a criminal offense. New Jersey amended its impersonation and identity theft statute in 2014, after an e-personation case revealed that the statute then in force lacked any mention of “electronic communication” as a means of unlawful impersonation. In 2011, New Jersey Superior Court Judge David Ironson in Morris County declined to dismiss an identity theft indictment against Dana Thornton. Ms. Thornton allegedly created a fictitious Facebook page that portrayed her ex-boyfriend, a narcotics detective, unfavorably. On the page, Thornton, pretending to be her ex, posted admissions of hiring prostitutes, using drugs, and even contracting a sexually transmitted disease. Thornton’s defense counsel argued that New Jersey’s impersonation statute did not apply because online impersonation was not explicitly mentioned in it, so Thornton’s actions fell outside the scope of activity the statute proscribes. Judge Ironson disagreed, noting that the New Jersey statute is “clear and unambiguous” in forbidding impersonation activities that cause injury and need not specify the means by which the injury occurs.

Currently under New Jersey law, a person is guilty of impersonation or theft of identity if … “the person engages in one or more of the following actions by any means, but not limited to, the use of electronic communications or an internet website:”

    1. Impersonates another or assumes a false identity … for the purpose of obtaining a benefit for himself or another or to injure or defraud another;
    2. Pretends to be a representative of some person or organization … for the purpose of obtaining a benefit for himself or another or to injure or defraud another;
    3. Impersonates another, assumes a false identity or makes a false or misleading statement regarding the identity of any person, in an oral or written application for services, for the purpose of obtaining services;
    4. Obtains any personal identifying information pertaining to another person and uses that information, or assists another person in using the information … without that person’s authorization and with the purpose to fraudulently obtain or attempt to obtain a benefit or services, or avoid the payment of debt … or avoid prosecution for a crime by using the name of the other person; or
    5. Impersonates another, assumes a false identity or makes a false or misleading statement, in the course of making an oral or written application for services, with the purpose of avoiding payment for prior services.

As social media continues to grow, it is likely that more state legislatures will amend their impersonation and identity theft statutes to incorporate e-personation.

E-personators Twitter Takeover

Over the last week, e-personation has erupted into chaos on Twitter. Elon Musk bought Twitter on October 27, 2022, for $44 billion. He immediately began firing top Twitter executives, including the chief executive and chief financial officer. With the company on the verge of bankruptcy, Elon needed a plan to generate more subscription revenue, and thus the problematic Twitter Blue subscription was created. Under the Twitter Blue policy, users could purchase a subscription for $8 a month and receive the blue verification check mark next to their Twitter handle.

The unregulated distribution of the blue verification check mark has led to chaos on Twitter by allowing e-personators to run amok. Traditionally, the blue check mark has been a symbol of authentication for celebrities, politicians, news outlets, and other companies; it was created to protect those most susceptible to e-personation. When the rollout of Twitter Blue began on November 9, 2022, the policy did not specify any requirements to verify a user’s authenticity beyond payment of the monthly fee.

Shortly after the rollout, e-personators began to take advantage of their newly purchased verification subscriptions by impersonating celebrities, pharmaceutical companies, politicians, and even the new CEO of Twitter, Elon Musk. For example, comedian Kathy Griffin was one of the first Twitter accounts suspended after Twitter Blue’s launch, after she changed her Twitter name and profile photo to Elon Musk and impersonated the new CEO. Griffin was not the only Twitter user to impersonate Elon, and in response Elon tweeted, “Going forward, any Twitter handles engaging in impersonation without clearly specifying ‘parody’ will be permanently suspended.”

Elon’s threats of permanent suspension did not stop e-personators from trolling on Twitter. One e-personator used their blue check verification to masquerade as Eli Lilly and Company, an American pharmaceutical company. The fake Eli Lilly account tweeted that the company would be providing free insulin to its customers, and the real Eli Lilly account tweeted an apology shortly thereafter. Another e-personator used their verification to impersonate former United States President George W. Bush. The fake Bush account tweeted “I miss killing Iraqis” along with a sad face emoji. The e-personators did not stop there; many more professional athletes, politicians, and companies were impersonated under the new Twitter Blue subscription policy. An internal Twitter log seen by the New York Times indicated that 140,000 accounts had signed up for the new subscription. It is unlikely that Elon will be able to discover every e-personator account and remedy this spread of misinformation.

Twitter’s Terms and Conditions

Before the rollout of Twitter Blue, Twitter’s guidelines included a policy on misleading and deceptive identities. Under Twitter’s policy, “you may not impersonate individuals, groups, or organizations to mislead, confuse, or deceive others, nor use a fake identity in a manner that disrupts the experience of others on Twitter.” The guidelines further explain that impersonation is prohibited, specifically that “you can’t pose as an existing person, group, or organization in a confusing or deceptive manner.” Based on the terms of Twitter’s guidelines, the recent e-personators are in direct violation of Twitter’s policy, but are these users also criminally liable?

Careful, You Could Get a Criminal Record

Social media networks, such as Facebook, Instagram, and Twitter, have little incentive to protect the interests of individual users because they cannot be held liable for anything their users post. Under Section 230 of the Communications Decency Act, “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Because of this lack of responsibility placed on social media platforms, victims of e-personation often struggle to get a fake online presence removed. Ironically, in order to gain control of an e-personator’s fake account, the victim must provide the social media platform with confidential identifying information, while the e-personator effectively remains anonymous.

By now you’re probably asking yourself: but what about the e-personators’ criminal liability? Under some state statutes, like those mentioned above, e-personators can be found criminally liable. However, several barriers limit the effectiveness of these prosecutions. For example, e-personators enjoy great anonymity, so finding the actual person behind a fake account can be difficult. Furthermore, many of the state statutes that criminalize e-personation require proving the perpetrator’s intent, which also poses a challenge for prosecutors. Lastly, social media is a global phenomenon, which means jurisdictional issues will arise when bringing these cases to court. Unfortunately, only a minority of states have amended their impersonation statutes to include e-personation. Hopefully, as social media continues to grow, more states will follow suit and e-personation will be prosecuted more efficiently and effectively. Remember, not everyone on social media is who they claim to be, so be cautious.

I Knew I Smelled a Rat! How Derivative Works on Social Media can “Cook Up” Infringement Lawsuits

 

If you have spent more than 60 seconds scrolling on social media, you have undoubtedly been exposed to short clips or “reels” that reference pop culture elements that may be protected intellectual property. While seemingly harmless, it is possible that the clips you see on various platforms are infringing on another’s copyrighted work. Oh Rats!

What Does Copyright Law Tell Us?

Copyright protection, which is codified in 17 U.S.C. §102, extends to “original works of authorship fixed in any tangible medium of expression”. It refers to your right, as the original creator, to make copies of, control, and reproduce your own original content. This applies to any created work that is reduced to a tangible medium. Some examples of copyrightable material include, but are not limited to, literary works, musical works, dramatic works, motion pictures, and sound recordings.

Additionally, one of the rights associated with a copyright holder is the right to make derivative works from your original work. Codified in 17 U.S.C. §101, a derivative work is “a work based upon one or more preexisting works, such as a translation, musical arrangement, dramatization, fictionalization, motion picture version, sound recording, art reproduction, abridgment, condensation, or any other form in which a work may be recast, transformed, or adapted. A work consisting of editorial revisions, annotations, elaborations, or other modifications which, as a whole, represent an original work of authorship, is a ‘derivative work’.” This means that the copyright owner of the original work also reserves the right to make derivative works. Therefore, the owner of the copyright to the original work may bring a lawsuit against someone who creates a derivative work without permission.

Derivative Works: A Recipe for Disaster!

The issue of regulating derivative works has only intensified with the growth of cyberspace and “fandoms”. A fandom is a community or subculture of fans that has built itself up around a specific piece of pop culture and whose members share a mutual bond over their enthusiasm for the source material. Fandoms can also be composed of fans who actively participate and engage with the source material through creative works, which social media makes easier. Historically, fan works have been deemed legal under the fair use doctrine, which provides that some copyrighted material can be used without legal permission for purposes of scholarship, education, parody, or news reporting, so long as the copyrighted work is used only to the extent necessary. Fair use can also apply to a derivative work that significantly transforms the original copyrighted work, adding a new expression, meaning, or message. So, that means that “anyone can cook”, right? …Well, not exactly! The new, derivative work cannot have an economic impact on the original copyright holder; that is, profits cannot be “diverted to the person making the derivative work” when the revenue could or should have gone to the original copyright holder.

With the increased use of “sharing” platforms, such as TikTok, Instagram, and YouTube, it has become increasingly easy to share or distribute intellectual property via monetized accounts. Specifically, due to the large amount of content consumed daily on TikTok, its users are incentivized with the ability to go “viral” instantaneously, if not overnight, as well as the ability to earn money through the platform’s “Creator Fund.” The Creator Fund is paid for by the TikTok ads program, and it allows creators to get paid based on the number of views they receive. This creates a problem: now that users are getting paid for their posts, the line between what is fair use and what is a violation of copyright law is blurred. The Copyright Act fails to address the monetization of social media accounts and how it fits into a fair use analysis.

Ratatouille the Musical: Anyone Can Cook?

Back in 2020, TikTok users Blake Rouse and Emily Jacobson were the first of many to release songs based on Disney-Pixar’s 2007 film, Ratatouille. What started out as a fun trend for users to participate in, turned into a full-fledged viral project and eventual tangible creation. Big name Broadway stars including André De Shields, Wayne Brady, Adam Lambert, Mary Testa, Kevin Chamberlin, Priscilla Lopez, and Tituss Burgess all participated in the trend, and on December 9, 2020, it was announced that Ratatouille was coming to Broadway via a virtual benefit concert.

The musical premiered as a one-night livestream event on January 1, 2021, with all profits donated to the Entertainment Community Fund (formerly the Actors Fund), a non-profit organization that supports performers and workers in the arts and entertainment industry. It initially streamed in over 138 countries and raised over $1.5 million for the charity. Due to its success, an encore production was streamed on TikTok ten days later, raising an additional $500,000 (for a total of $2 million). While this is unarguably a derivative work, the question of fair use was never addressed because Disney’s lawyers were smart enough not to sue. In fact, Disney embraced the Ratatouille musical, releasing a statement to The Verge:

Although we do not have development plans for the title, we love when our fans engage with Disney stories. We applaud and thank all of the online theatre makers for helping to benefit The Actors Fund in this unprecedented time of need.

Normally, Disney is EXTREMELY strict and protective over their intellectual property. However, this small change of heart has now opened a door for other TikTok creators and fandom members to create unauthorized derivative works based on others’ copyrighted material.

Too Many Cooks in the Kitchen!

Take the “Unofficial Bridgerton Musical”, for example. In July 2022, Netflix sued content creators Abigail Barlow and Emily Bear for their unauthorized use of Netflix’s original series Bridgerton, which is itself based on the Bridgerton book series by Julia Quinn. Back in 2020, Barlow and Bear began writing and uploading songs based on the series to TikTok for fun. Needless to say, the videos went viral, prompting Barlow and Bear to release an entire musical soundtrack based on Bridgerton, which went on to win the 2022 Grammy Award for Best Musical Theater Album.

On July 26, Barlow and Bear staged a sold-out performance at the Kennedy Center, with tickets ranging from $29 to $149, and sold merchandise that included the “Bridgerton” trademark. Netflix then sued, demanding an end to these for-profit performances. Interestingly enough, Netflix was allegedly initially on board with Barlow and Bear’s project. Although Barlow and Bear’s conduct began on social media, the complaint alleges they “stretched fanfiction way past its breaking point”. According to the complaint, Netflix “offered Barlow & Bear a license that would allow them to proceed with their scheduled live performances at the Kennedy Center and Royal Albert Hall, continue distributing their album, and perform their Bridgerton-inspired songs live as part of larger programs going forward,” which Barlow and Bear refused. Netflix also alleged that the musical interfered with its own derivative work, the “Bridgerton Experience,” an in-person pop-up event that has been offered in several cities.

Unlike Ratatouille: The Musical, which was created to raise money for a non-profit organization benefiting actors during the COVID-19 pandemic, the Unofficial Bridgerton Musical lined the pockets of its creators, Barlow and Bear, in an effort to build an international brand for themselves. Netflix privately settled the lawsuit in September 2022.

Has the Aftermath Left a Bad Taste in IP Holders’ Mouths?

The stage has been set, and courts have yet to determine exactly how fan-made derivative works play out in a fair use analysis. New technologies only exacerbate this issue through the monetization of social media accounts and “viral” trends. At a certain point, no matter how much you want to root for the “little guy”, you have to admit when they’ve gone too far. Average “fan art” does not derive significant profits from the original work, and it is very rare that a large company will take legal action against a small content creator unless the infringement is so blatant and explicit that there is no other choice. IP law exists to protect and enforce the rights of the creators and owners who have worked hard to secure them. Allowing content creators to infringe in the name of “fair use” poses a dangerous threat to intellectual property law and those it serves to protect.

 

Update Required: An Analysis of the Conflict Between Copyright Holders and Social Media Users

Opening

For anyone who is chronically online, like yours truly, we have all in one way or another seen our favorite social media influencers, artists, commentators, and content creators complain about their problems with the current US intellectual property (IP) system. Be it that their posts are deleted without explanation or portions of their video files are muted, the combination of factors leading to copyright issues on social media is endless. This, in turn, has a markedly negative impact on free and fair expression on the internet, especially within the context of our contemporary online culture. For better or worse, interaction in society today is intertwined with the services of social media sites. Conflict arises when the interests of copyright holders clash with this reality. Copyright holders are empowered by byzantine and unrealistic laws that hamper our ability to exist as freely online as we do in real life. While they do have legitimate and fundamental rights that need to be protected, such rights must be balanced against desperately needed reform. People’s interaction with society and culture must not be hampered, for that is one of the many foundations of a healthy and thriving society. To understand this, I venture to analyze the current legal infrastructure we find ourselves in.

Current Relevant Law

The current controlling laws for copyright issues on social media are the Copyright Act of 1976 and the Digital Millennium Copyright Act (DMCA). The DMCA is most relevant to our analysis; it gives copyright holders relatively unrestrained power to demand removal of their property from the internet and to punish those using illegal methods to get ahold of their property. This broad law, of course, impacted social media sites. Title II of the law added 17 U.S. Code § 512 to the Copyright Act of 1976, creating several safe harbor provisions for online service providers (OSPs), such as social media sites, when hosting content posted by third parties. The most relevant of these safe harbors is 17 U.S. Code § 512(c), which states that an OSP cannot be liable for monetary damages if it meets several requirements and provides copyright holders a quick and easy way to claim their property. The mechanism, known as a “notice and takedown” procedure, varies by social media service and is outlined in each service’s terms and conditions (YouTube, Twitter, Instagram, TikTok, Facebook/Meta). Regardless, they all have a complaint form or application that follows the rules of the DMCA and will usually rapidly strike objectionable social media posts. 17 U.S. Code § 512(g) does provide the user some leeway with an appeal process, and § 512(f) imposes liability on those who send unjustified takedowns. Nevertheless, a perfect balance of rights is not achieved.

The doctrine of fair use, codified as 17 U.S. Code § 107 via the Copyright Act of 1976, also plays a massive role here. It established a legal pathway for the use of copyrighted material for “purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research” without having to acquire rights to the IP from the owner. This legal safety valve has been a blessing for social media users, especially with recent victories like Hosseinzadeh v. Klein, which protected reaction content from DMCA takedowns. Cases like Lenz v. Universal Music Corp. further established that fair use must be considered by copyright holders when preparing takedowns. Nevertheless, copyright holders still often fail to consider fair use, and sites are quick to react to DMCA complaints. Furthermore, the flawed reporting systems of social media sites invite abuse by unscrupulous actors faking true ownership. On top of that, such legal actions can be psychologically and financially intimidating, especially when facing off with a major IP holder, adding to the unbalanced power dynamic between the holder and the poster.

The Telecommunications Act of 1996, which focuses primarily on cellular and landline carriers, is also particularly relevant to social media companies in this conflict. At the time of its passing, the internet was still in its infancy, so the Act does not incorporate an understanding of the cultural paradigm we now find ourselves in. Specifically, the contentious Section 230 of the Communications Decency Act (Title V of the 1996 Act) works against social media companies here, incorporating a broad and draconian rule on copyright infringement. 47 U.S. Code § 230(e)(2) states in no uncertain terms that “nothing in this section shall be construed to limit or expand any law pertaining to intellectual property.” This has been interpreted and restated in Perfect 10, Inc. v. CCBill LLC to mean that such companies can be held liable for their users’ copyright infringement. This gap in the protective armor of Section 230 is a great concern to such companies, so they react strongly to copyright issues.

What is To Be Done?

Arguably, fixing the issues around copyright on social media is far beyond the capacity of current legal mechanisms. With billions of posts each day across various sites, policing by copyright holders and sites alike is far beyond reason. It will take serious reform in the socio-cultural, technological, and legal arenas before a true balance of liberty and justice can be established. Perhaps we can start with an understanding by copyright holders not to overreact when their property is posted online. Popularity is key to success in business, so shouldn’t you value the free marketing that comes with your copyrighted property being shared honestly within the cultural sphere of social media? Social media sites can also expand their DMCA case management teams or create tools that let influencers and content creators credit, and even share revenue with, the copyright holder. Finally, congressional action is desperately needed, as we have entered a new era that requires new laws. That being said, achieving a balance between the free exchange of ideas and creations and the rights of copyright holders must be the cornerstone of the government’s approach to socio-cultural expression on social media. That is the only way we can progress as an ever more online society.

 

Image by pikisuperstar on Freepik: https://www.freepik.com/free-vector/flat-design-intellectual-property-concept-with-woman-laptop_10491685.htm

DANCE DANCE LITIGATION

When the tune of “Y.M.C.A.” by the Village People starts to play, no matter the time or place, the urge to raise your arms and dance is impossible to ignore. A wave of nostalgia and childlike happiness quickly fills the atmosphere, and as the chorus begins, you and (almost) everyone around you begin to dance the only way you know how: throwing your arms up in the air and forming the letters, duh! But what’s not so obvious is that the “Y.M.C.A.” dance, irrespective of its wild popularity and incorporation into major television and film productions since the song’s release in 1978, is not copyrighted. The songwriters, artists, and producers each have received, and continue to receive, the recognition, compensation, and title they deserve for their contributions to the song itself, but the inherent choreography remains unprotected. According to the Copyright Office (“the Office”), a dance “whereby a group of people spell out letters with their arms” is simply too basic to deserve copyright recognition because, no matter how distinctive it may be, it is nonetheless a commonplace movement or gesture.

CONGRESS ‘GETS DOWN’

Choreographers, since the beginning of the entertainment industry, have never received the legal protections that producers, songwriters, and artists have. Although the Copyright Act of 1976 (the “Act”) officially recognizes choreography as a protected form of creative expression, to qualify as copyrightable, a choreographic work must satisfy the following elements: (1) it is an original work of authorship, (2) it is an expression as opposed to an idea, and (3) it is “fixed in any tangible medium of expression.” In addition, the Supreme Court has held that an individual may not bring a copyright infringement suit under the Act until the work has been registered with the Office. Although choreographic works were finally recognized as deserving of copyright status, the application of copyright law to choreography since then has revealed a significant grey area in intellectual property law.

BUT IS IT JUST A SHIMMY OR A ‘CHOREOGRAPHIC WORK’?

When assessing what qualifies as a copyrightable choreographic work, the Office acknowledges that the dividing line between a simple routine and copyrightable choreography is more of a continuum than a bright line. The Office has also indicated certain types of works that, from the outset, may not be copyrighted: commonplace movements, individual dance moves or gestures, social dances, ordinary and athletic movements, and short dance routines.

Whether a particular dance qualifies as a choreographic work, or not, ultimately rests on the Office’s assessment of the following elements collectively:

(1) rhythmic movement in a defined space

(2) compositional arrangement

(3) musical or textual accompaniment

(4) dramatic content

(5) presentation before an audience

(6) execution by skilled performers

DANCING OUR WAY TO THE COURTHOUSE

Litigation surrounding the video game Fortnite, released through Epic Games Inc., reveals just how large that grey area has grown. Although the game is free to play, Fortnite’s revenue is derived from in-game purchases, including dance emotes or dance routines for the player’s avatar.

In 2019, Alfonso Ribeiro, who played Carlton Banks on the TV show The Fresh Prince of Bel-Air, sought justice for Epic Games’ improper use of “the Carlton” as a dance emote in Fortnite, but was turned away by both the court and the Office. Following the direction of the Supreme Court, the court dismissed Mr. Ribeiro’s claim for failing to register and receive final registration of his claim with the Copyright Office. Registration is deemed “made” only when “the Register has registered a copyright after examining a properly filed application.” In an attempt to salvage his claim, Mr. Ribeiro proceeded to the Office but nonetheless left empty-handed: the Office refused to grant him a copyright because the Carlton did not rise to the level of choreography, being a simple routine made up of just three dance steps. Likewise, cases brought against Epic Games by rapper 2 Milly and the Backpack Kid, alleging copyright infringement for the use of their choreographic works the “Milly Rock” and “the Floss” as emotes in Fortnite, were also dismissed for failure to register with the Office.

So, since the cases were all dismissed for not having a valid registration with the Office, then having a valid registration with the Office is the golden ticket to defending your claim of improper infringement, right? Not quite.

Earlier this year, in March, professional dance choreographer Kyle Hanagami (“Hanagami”) filed suit against Epic Games for using dance movements from his copyrighted routine to the song “How Long” by Charlie Puth. Hanagami, unlike his predecessors, had secured a copyright for his choreographic work. Holding that golden ticket, Hanagami argued that Epic Games did not credit him or seek his consent to use, display, reproduce, sell, or create derivative works based on his registered choreography.

Despite the fact that Hanagami secured his copyright before bringing a claim under the Act, the court yet again dismissed the case and agreed with Epic Games. The court stated that Hanagami’s steps are potentially protected only when combined with the other elements that make up his copyrighted work; Epic Games technically didn’t infringe because the specific dance steps, on their own, were not entitled to copyright protection. When the works were evaluated as a whole, the court found they were not substantially similar: “[w]hereas Hanagami’s video features human performers in a dance studio in the physical world performing for a YouTube audience, Epic Games’ work features animated characters performing for an in-game audience in a virtual world.”

And as if the grey couldn’t get any grey-er….it indeed does.

DANCING IN CIRCLES, YET AGAIN

The outcome of all this dance litigation alludes to the central need for choreography, on its own, to be recognized and protected as a separate work. Although securing a copyright to a choreographic work will get you in the door of the courthouse, there is no guarantee that what you’ve copyrighted will actually be protected. Thus, it is crucial that the plight of choreographers be truly recognized. Inconsistent outcomes and unclear guidelines continue to aggravate the underlying issue of allowing choreographers to pursue the copyright protection they deserve for their works. Copyrighting successful dance routines helps ensure dancers’ ability to monetize and profit from their work, but the murky waters that complicate registration and the unpredictability of outcomes in court will remain barriers until we can clear the grey area.

All’s Fair in Love and Romance Scams

In 2014, 81-year-old Glenda thought she had met the love of her life. The problem? Their entire relationship was virtual. The individual on the other end of Glenda’s computer sold her a fictional narrative: he was a United States citizen working in Nigeria. Glenda and this man developed their virtual “relationship” without ever meeting in person. After some time, the man began asking Glenda for money to help his business and to get back to the United States. Glenda, wanting to help her love, immediately sent the money, and the requests became more frequent. When the small money transfers weren’t enough, he asked her to open personal and business bank accounts to transfer funds between the United States and overseas.

Despite numerous warnings from the FBI, local police, and banks to stop, Glenda still believed the man she met online loved her and needed help. She continued illegally transferring money overseas for the next 5 years and would eventually plead guilty to two federal felonies. Glenda was a victim of a Romance Scam and paid the ultimate price.

Unfortunately, Glenda’s situation, while extreme, is far from a rare occurrence today. In 2021 alone, the Federal Trade Commission (FTC) saw consumers report $547 million in losses due to romance scams, a concerning 80% more than those reported in 2020. In total, the FTC has seen an astronomical $1.3 billion in cumulative romance scam losses reported in the last five years. And these are just the scams that were reported to the FTC. Many victims go without reporting due to the shame and stigma that comes with falling prey to an online scam.

Romance scams, often referred to as “sweetheart scams,” occur when an individual (or group of individuals) fabricates an online persona and targets vulnerable persons for money.

These scammers build a fake relationship with the victim through messages, establishing empathy and trust over a short period of time. After the relationship is built, the scammer suddenly succumbs to financial and/or medical hardship. The initial request for money is typically small, and the victim may even be repaid the first time to negate any doubts that this is a scam; after the second, third, and fourth requests, the victim is likely to never see their funds (or their “love”) again.

The elderly population is especially vulnerable to online scams. Seniors tend to be more trusting than younger generations and usually have significant financial resources (a home, retirement savings, government benefits). Due to cognitive decline and unfamiliarity with technology, this group is also at a disadvantage in defending themselves or recognizing when someone is feigning friendship rather than offering a genuine connection. COVID-19 made the elderly even more vulnerable: many were forced into isolation and could only stay in contact with family and loved ones through internet devices, opening up a whole new world. Unmonitored access to the internet, coupled with increased loneliness, made elders the perfect target for romance scams.

Are dating sites liable for promoting fraudsters to unsuspecting victims? The short answer is no.

Under 47 USC Section 230, interactive computer service providers (a.k.a. social media and dating sites) are immune from liability for claims arising out of the content that third parties publish to their sites.

In 2022, the Federal Trade Commission brought claims against Match Group Inc. (owner and operator of Match.com, Tinder, PlentyofFish, OkCupid, Hinge, and several other dating sites), asserting that:

  1. Match.com misrepresented to consumers that profiles were interested in “establishing a dating relationship”, but on numerous instances, these profiles were set up by individuals with the intent to defraud; and
  2. Match “exposed consumers to the risk of fraud” by allowing accounts that were reported or flagged for fraud, and under review, to still exchange communications with other subscribers.

The Texas Northern District Court dismissed both counts, holding that under Section 230, Match was entitled to immunity from a third party’s fraudulent content and actions. It seems that if a victim is looking for recovery, they won’t find it in the courts or through the dating sites themselves.

This looks like a job for the FBI…

Or maybe not.

The Federal Bureau of Investigation engages its Internet Crime Complaint Center (IC3) and Recovery Asset Team (RAT), along with the Financial Crimes Enforcement Network (FinCEN), to recover monetary losses from internet scams. Unfortunately, the FBI typically takes on international cases of single transfers over $50,000 that fall within a 72-hour reporting window. Most romance scammers request money from elderly victims in smaller amounts over an extended period (the median loss for romance fraud victims in their 70s is $6,450). Due to this high threshold and short reporting window, a majority of romance scam victims never report their losses or see their money again.

In reality…YOU Are Your Best Defense.

Prevent

Do not send money to someone you have never met in person.

Advocate

Check in on your loved ones who are living alone. They may be less inclined to turn to virtual relationships and send money if they have real-life connections.

Check with banks and financial institutions about regular check-in schedules for elderly clients or talk with your loved ones to help monitor their accounts if you notice they are in a cognitive decline.

Report

If you or a loved one have been the victim of a romance scam: 1) contact your financial institution immediately; 2) report the fraud to the dating site to try to shut down the fraudster’s account; and 3) report the fraud to the Federal Trade Commission.

Miracles Can Be Misleading

Want to lose 20 pounds in 4 days? Try this *insert any miracle weight-loss product* and you’ll be skinny in no time!

Miracle weight-loss products (MWLP) are dietary supplements that either suppress appetite or forcefully induce weight loss. These products are not approved or indicated by regulatory agencies for weight loss. Social media users are continuously bombarded with the newest weight-loss products via targeted advertisements and endorsements from their favorite influencers. Users are force-fed false promises of achieving the picture-perfect body while companies profit off their delusions. Influencer marketing has grown significantly as social media becomes more and more prevalent: 86 percent of women use social media for purchasing advice, and 70 percent of teens trust influencers more than traditional celebrities. If you’re on social media, then you’ve seen your favorite influencer endorsing some form of MWLP, and you probably thought to yourself, “well, if Kylie Jenner is using it, it must be legit.”

The advertisements of MWLP promote an unrealistic and oversexualized body image. This trend of selling skinny has detrimental consequences, often leading to body-image issues such as body dysmorphia and various eating disorders. In 2011, the Florida House Experience conducted a study among 1,000 men and women. The study revealed that 87 percent of the women and 65 percent of the men compared their bodies to those they saw on social media. Of the 1,000 subjects, 50 percent of the women and 37 percent of the men viewed their bodies unfavorably when compared to those they saw on social media. In 2019, Project Know, a nonprofit organization that studies addictive behaviors, conducted a study suggesting that social media can worsen genetic and psychological predispositions to eating disorders.

Who Is In Charge?

The collateral damage that MWLP advertisements inflict on social media users’ body image is a societal concern. As the world becomes more digital, even more creators of MWLP are going to rely on influencers to generate revenue for their products. But who is in charge of monitoring the truthfulness of these advertisements?

In the United States, the Federal Trade Commission (FTC) and the Food and Drug Administration (FDA) are the two federal regulators responsible for promulgating regulations relating to dietary supplements and other MWLP. While the FDA is responsible for the labeling of supplements, they lack jurisdiction over advertising. Therefore, the FTC is primarily responsible for advertisements that promote supplements and over-the-counter drugs.

The FTC regulates MWLP advertising through the Federal Trade Commission Act of 1914 (the Act). Sections 5 and 12 of the Act collectively prohibit “false advertising” and “deceptive acts or practices” in the marketing and sales of consumer products, and grant authority to the FTC to take action against those companies. An advertisement violates the Act when it is false, misleading, or unsubstantiated. An advertisement is false or misleading when it contains an “objective, material representation that is likely to deceive consumers acting reasonably under the circumstances.” An advertisement is unsubstantiated when it lacks “a reasonable basis for its contained representation.” With the rise of influencer marketing, the Act also requires influencers to clearly disclose when they have a financial or other relationship with the product they are promoting.

Under the Act, the FTC has taken action against companies that falsely advertise MWLP. The FTC typically brings enforcement claims against companies by alleging that the advertiser’s claims lack substantiation. To determine the specific level and type of substantiation required, the FTC considers what are known as the “Pfizer factors,” established in In re Pfizer. These factors include:

    • The type and specificity of the claim made.
    • The type of product.
    • The possible consequences of a false claim.
    • The degree of reliance by consumers on the claims.
    • The type, and accessibility, of evidence adequate to form a reasonable basis for making the particular claims.

In 2014, the FTC applied the Pfizer factors when it brought an enforcement action seeking a permanent injunction against Sensa Products, LLC. Since 2008, Sensa had sold a powder weight-loss product that allegedly could make an individual lose 30 pounds in six months without dieting or exercise. The company advertised its product via print, radio, endorsements, and online ads. The FTC claimed that Sensa’s marketing techniques were false and deceptive because the company lacked evidence to support its health claims, i.e., losing 30 pounds in six months. The FTC further claimed that Sensa violated the Act by failing to disclose that its endorsers were given financial incentives for their customer testimonials. Ultimately, Sensa settled, and the FTC was granted the permanent injunction.

What Else Can We Do?

Currently, the FTC, exercising its authority under the Act, is the main legal recourse for removing these deceitful advertisements from social media. Unfortunately, social media platforms such as Facebook, Twitter, and Instagram cannot be held liable for the posts of other users. Under Section 230 of the Communications Decency Act, “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” That means social media platforms cannot be held responsible for misleading MWLP advertisements, regardless of whether the advertisement comes through an influencer or the company’s own social media page, and regardless of the collateral consequences these advertisements create.

However, there are other courses of action that social media users and platforms have taken to prevent these advertisements from poisoning the body images of users. Many social media influencers and celebrities have risen to the occasion to have MWLP advertisements removed. In fact, in 2018, Jameela Jamil, an actress starring on The Good Place, launched an Instagram account called I Weigh, which “encourages women to feel and look beyond the flesh on their bones.” Influencer activism has led Instagram and Facebook to block users under the age of 18 from viewing posts advertising certain weight-loss products or other cosmetic procedures. While these are small steps in the right direction, more work certainly needs to be done.

The #Trademarkability of #Hashtags

The #hashtag is an important marketing tool that has revolutionized how companies conduct business. Essentially, hashtags serve to identify or facilitate a search for a keyword or topic of interest by typing a pound sign (#) along with a word or phrase (e.g., #OOTD or #Kony2012). Placing a hashtag at the beginning of a word or phrase on Twitter, Instagram, Facebook, TikTok, etc., turns the word or phrase into a hyperlink attaching it to other related posts, thus driving traffic to users’ sites. This is a great way to promote a product, service or campaign while simultaneously reducing marketing costs and increasing brand loyalty, customer engagement, and, of course, sales. But with the rise of this digital “sharing” tool comes a new wave of intellectual property challenges. Over the years, there has been increasing interest in including the hashtag in trademark applications.

#ToRegisterOrNotToRegister

According to the United States Patent and Trademark Office (USPTO), a term containing the hash symbol or the word “hashtag” MAY be registered as a trademark. The USPTO recognizes hashtags as registrable trademarks “only if [the mark] functions as an identifier of the source of the applicant’s goods or services.” Additionally, Section 1202.18 of the Trademark Manual of Examining Procedure (TMEP) further explains that “when examining a proposed mark containing the hash symbol, careful consideration should be given to the overall context of the mark, the placement of the hash symbol in the mark, the identified goods and services, and the specimen of use, if available. If the hash symbol immediately precedes numbers in a mark, or is used merely as the pound or number symbol in a mark, such marks should not necessarily be construed as hashtag marks. This determination should be made on a case-by-case basis.”

Like other forms of trademarks, one would seek registration of a hashtag in order to exclude others from using the mark when selling or offering the goods or services listed in the registration. More importantly, the existence of the trademark would serve to protect against consumer confusion. This is the same standard applied to other words, phrases, or symbols seeking trademark registration. The threshold question when considering whether to file a trademark application for a hashtag is whether the hashtag is a source identifier for goods or services, or whether it merely describes a particular topic, movement, or idea.

#BarsToRegistration

Merely affixing a hashtag to a mark does not automatically make it registrable. For example, in 2019, the Trademark Trial and Appeal Board (TTAB) denied trademark registration for #MAGICNUMBER108 because it did not function as a trademark for shirts and was therefore not a source identifier. Rather, the TTAB found that the social media evidence suggested the public sees the hashtag as a “widely used message to convey information about the Chicago Cubs baseball team,” namely, their 2016 World Series win after a 108-year drought. The TTAB went on to say that just because a mark is unique doesn’t mean the public would perceive it as an indication of source. This further demonstrates the importance of a goods-source association for the mark.

Hashtags that would not function as trademarks are those simply relating to certain topics that are not associated with any goods or services. So, for example, cooking: #dinnersfortwo, #mealprep, or #healthylunches. These hashtags would likely be searched by users to find information relating to cooking or recipe ideas. When encountering these hashtags on social media, users would probably not link them to a specific brand or product. On the contrary, hashtags like #TheSaladLab or #ChefCuso would likely be linked to specific social media influencers who use that mark in connection with their goods and services and as such, could function as a trademark. Other examples of hashtags that would likely function as trademarks are brands themselves (#sephora, #prada, or #nike). Even slogans for popular brands would suffice (#justdoit, #americarunsondunkin, or #snapcracklepop).

#Infringement

What makes trademarked hashtags unique from other forms of trademarked material is that hashtags actually serve a purpose other than just identifying the source of the goods: they are used to index keywords on social media and allow users to follow topics they are interested in. So, does that mean that using a trademarked hashtag in your social media post will create a cause of action for trademark infringement? The answer is every lawyer’s favorite response: it depends. Sticking with the example above, assuming #TheSaladLab is a registered trademark, referencing the tag in this blog post alone would likely not warrant a trademark infringement claim, but if I were to sell kitchen tools or recipe books with the tag #TheSaladLab, that might rise to the level of infringement. However, courts are still unclear about the enforceability of hashtagged marks. In 2013, a Mississippi District Court stated in an order that “hashtagging a competitor’s name or product in social media posts could, in certain circumstances, deceive consumers.” The court never actually ruled on whether the use of the hashtag infringed the registered mark.

This is problematic because, on one hand, regardless of whether there is a hashtag in front of the mark, the owner of a registered trademark is entitled to bring a cause of action for trademark infringement when someone else uses the mark in commerce, without permission, in the same industry. On the other hand, when one uses a trademark with the “#” symbol in front of it to share information on social media, they are simply complying with the norms of the internet. The goal is to strike a balance between protecting the rights of IP owners and protecting users’ freedom of expression on social media.

While the courts are somewhat behind in dealing with infringement relating to hashtagged trademark material, for the time being, various social media platforms (Instagram, Facebook, Twitter, YouTube) have procedures in place that allow users to report misuse of trademark-protected material or other intellectual property-related concerns.
