Sharing is NOT Always Caring

Where There’s Good, There’s Bad

Social media’s vast growth over the past several years has attracted millions of users who use these platforms to share content, connect with others, conduct business, and spread news and information. However, social media is a double-edged sword. While it creates communities and bands people together, it destroys privacy in the process. The convenient features we know and love expose significant amounts of personal information and create related privacy risks. Social media companies retain massive amounts of sensitive information about users’ online behavior, including their interests, daily activities, and political views. Algorithms embedded within these platforms promote the companies’ specific goals, such as user engagement and targeted advertising. As a result, the means used to achieve these goals conflict with consumers’ privacy interests.

Common Issues

In 2022, several U.S. state and federal agencies banned their employees from using TikTok on government-issued devices, fearful that foreign governments could acquire confidential information. While a lot of the information collected through these platforms is voluntarily shared by users, much of it is also tracked using “cookies,” and you can’t have these with a glass of milk! Tracking cookies allow information about users’ online browsing activity to be stored and used to target specific interests and personalize content tailored to those likings. Signing up for a social media account and agreeing to the platform’s terms permits companies to collect all of this data.
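To make the mechanism concrete, here is a minimal sketch, using only Python’s standard library, of how a third-party tracking cookie might be attached to a web response; the domain and identifier are hypothetical.

```python
# Minimal sketch of a third-party "tracking cookie" (hypothetical values).
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["tracker_id"] = "a1b2c3d4"                      # hypothetical unique visitor ID
cookie["tracker_id"]["domain"] = ".ads.example"        # assumed third-party ad domain
cookie["tracker_id"]["path"] = "/"
cookie["tracker_id"]["max-age"] = 60 * 60 * 24 * 365   # persists for a year

# The header a browser would receive; on every later visit to any page that
# embeds content from .ads.example, the browser sends tracker_id back,
# letting the ad network link those visits into one browsing profile.
print(cookie.output())
```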

Social media users leave a “digital footprint” on the internet when they create and use their accounts. Unfortunately, enabling a “private” account does not solve the problem because data is still retrieved in other ways. For example, likes, shares, comments, purchase history, and status updates all increase the likelihood that a user’s privacy will be intruded upon.

Two of the most notorious privacy issues on social media are data breaches and data mining. Data breaches occur when individuals with unauthorized access steal private or confidential information from a network or computer system. Data mining on social media is the process by which user information is analyzed to identify specific tendencies, which are then used to inform research and advertising.

Other privacy issues stem from loopholes around the preventive measures already in place. For example, if an individual maintains a private social account but shares something with a friend, others connected with that friend can view the post. Moreover, a person’s location can still be determined even when location settings are turned off: public Wi-Fi networks and websites can track users’ locations through other means.

Taking all of these prevailing issues into account, only a small amount of information is actually protected under federal law. Financial and healthcare transactions, as well as details regarding children, are among the classes of information that receive heightened protection. Most other data gathered through social media can be collected, stored, and used. Social media platforms are largely unregulated with respect to data privacy and consumer data protection. The United States does have a few laws in place to safeguard privacy on social media, but more stringent ones exist abroad.

Social media platforms are required to implement certain procedures to comply with privacy laws, including obtaining user consent, data protection and security, user rights and transparency, and data breach notifications. Platforms typically ask users to agree to their Terms and Conditions to obtain consent and authorization for processing personal data. However, most users are guilty of accepting these terms without actually reading them so that they can quickly get to using the app.

Share & Beware: The Law

Privacy laws are put in place to regulate how social media companies can act on all of the information users share, or don’t share. These laws aim to ensure that users’ privacy rights are protected.

There are two prominent social media laws in the United States. The first is the Communications Decency Act (CDA), which regulates indecency that occurs through computer networks. Nevertheless, Section 230 of the CDA provides broad immunity against any cause of action that would make internet providers, including social media platforms, legally liable for information posted by other users. Therefore, accountability for common issues on social media like data breaches and data misuse is limited under the CDA.

The second is the Children’s Online Privacy Protection Act (COPPA). COPPA protects privacy on websites and other online services for children under the age of thirteen. The law prevents social media sites from gathering personal information without first providing written notice of disclosure practices and obtaining parental consent. The challenge remains in actually knowing whether a user is underage, because it is so easy to misrepresent oneself when signing up for an account.

The European Union, by contrast, has the General Data Protection Regulation (GDPR), which grants users control over when and how their data is processed. The GDPR contains a set of guidelines that restrict personal data from being disseminated on social media platforms, and it gives internet users a long list of rights in cases where their data is shared and processed. These include the rights to withdraw previously given consent, to access the information collected about them, and to delete or restrict personal data in certain situations.

The most similar domestic law to the GDPR is the California Consumer Privacy Act (CCPA), enacted in 2020. The CCPA regulates what kind of information can be collected by social media companies, giving platforms like Google and Facebook much less freedom in harvesting user data. The goal of the CCPA is to make data collection transparent and understandable to users.

Laws at the state level are lacking, and many lawsuits have resulted from this deficiency. A class action lawsuit was brought in response to the collection of users’ information by Nick.com. These users were all children under the age of thirteen who sued Viacom and Google for violating privacy laws. They argued that the data collected by the website, together with Google’s stored data about its users, constituted personally identifiable information. A separate lawsuit was brought against Facebook for tracking users when they visited third-party websites. The individuals who brought suit claimed that Facebook was able to personally identify and track them through shares and likes when they visited certain healthcare websites, collecting sensitive healthcare information as they browsed, without their consent. However, the court held that users did indeed consent to these actions when they agreed to Facebook’s data tracking and data collection policies. The court also held that the data was not subject to the stricter requirements plaintiffs claimed applied, because it was all available on publicly accessible websites. In other words, public information is fair game for Facebook and many other social media platforms when it comes to third-party sites.

In contrast to these two failed lawsuits, TikTok agreed earlier this year to pay a $92 million settlement resolving twenty-one combined lawsuits alleging privacy violations. The claims were substantial, including allegations that the app analyzed users’ faces and collected private data from users’ devices without obtaining their permission.

We are living in a new social media era, one so advanced that it is difficult to fully comprehend. Data privacy is a major concern for users who spend large amounts of time sharing personal information, whether they realize it or not. Laws are put in place to regulate content and protect users; however, keeping up with the growing presence of social media is not an easy task. Sharing is inevitable, and so are privacy risks.

To share or not to share? That is the question. Will you think twice before using social media?

New York is Protecting Your Privacy

TAKING A STANCE ON DATA PRIVACY LAW

The digital age has brought with it unprecedented complexity surrounding personal data and the need for comprehensive data legislation. Recognizing this gap in legislative protection, New York has introduced the New York Privacy Act (NYPA) and the Stop Addictive Feeds Exploitation (SAFE) For Kids Act, two comprehensive initiatives designed to better document and safeguard personal data on the consumer side of data collection transactions. New York is taking a stand to protect consumers and children from the harms of data harvesting.

Currently under consideration in the Standing Committee on Consumer Affairs and Protection, chaired by Assemblywoman Nily Rozic, the New York Privacy Act was introduced as “An Act to amend the general business law, in relation to the management and oversight of personal data.” The NYPA was sponsored by State Senator Kevin Thomas and closely resembles the California Consumer Privacy Act (CCPA), which was finalized in 2019. By passing the NYPA, New York would become just the 12th state to adopt a comprehensive data privacy law protecting state residents.

DOING IT FOR THE DOLLAR

Companies buy and sell the sensitive personal data of millions of users in pursuit of boosted profits. By purchasing personal user data from social media sites, web browsers, and other applications, advertising companies can predict and drive trends that will increase product sales among different target groups.

Social media companies are notorious for selling user data to data collection companies: your name, phone number, payment information, email address, stored videos and photos, photo and file metadata, IP address, networks and connections, messages, videos watched, advertisement interactions, and sensor data, as well as the time, frequency, and duration of your activity on the site. The NYPA targets businesses like these by regulating legal persons that conduct business in the state of New York, or that produce products and services aimed at residents of New York. The entity that stands to be regulated must (a minimal sketch of this threshold test follows the list):

  • (a) have annual gross revenue of twenty-five million dollars or more;
  • (b) control or process personal data of fifty thousand consumers or more;
  • or (c) derive over fifty percent of gross revenue from the sale of personal data.
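Expressed as a simple predicate, the three thresholds might look like the sketch below; this is an illustration of the text above, not legal advice, and the function and argument names are hypothetical.

```python
# Minimal sketch (not legal advice) of the NYPA applicability thresholds
# described above; function and argument names are hypothetical.
def nypa_applies(annual_gross_revenue: float,
                 consumers_with_data: int,
                 revenue_share_from_data_sales: float) -> bool:
    """Return True if any one of the three statutory prongs is met."""
    return (
        annual_gross_revenue >= 25_000_000        # (a) $25M+ annual gross revenue
        or consumers_with_data >= 50_000          # (b) data on 50,000+ consumers
        or revenue_share_from_data_sales > 0.50   # (c) >50% of revenue from data sales
    )

# Example: a small ad-tech firm earning most of its revenue from data sales
# falls under the Act through prong (c) alone.
print(nypa_applies(3_000_000, 10_000, 0.80))  # True
```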

The NYPA does more for residents of New York because it places the consumer first. The Act is not restricted to regulating businesses operating within New York; it encompasses every resident of New York State who may be subject to targeted data collection. That is an immense step forward in giving consumers control over their digital footprint.

MORE RIGHTS, LESS FRIGHT

The NYPA works by granting all New Yorkers additional rights regarding how their data is maintained by the controllers to which the Act applies. The comprehensive rights granted to New York consumers include the rights to notice, opt out, consent, portability, correction, and deletion of personal information. The right to notice requires that each controller provide a conspicuous and readily available notice statement describing the consumer’s rights, indicating the categories of personal data the controller will be collecting, where it is collected from, and what it may be used for. The right to opt out allows consumers to opt out of the processing of their personal data for targeted advertising, the sale of their personal data, and profiling. This gives the consumer an advantage when browsing sites and using apps, as they will be duly informed of exactly what information they are giving up when online.

While all the rights included in the NYPA are groundbreaking for the New York consumer, the right to consent to sensitive data collection and the right to delete data cannot be overstated. The right to consent requires controllers to conspicuously ask for express consent before collecting sensitive personal data. It also contains a zero-discrimination clause to protect consumers who do not give controllers express consent to use their personal data. The right to delete requires controllers to delete any or all of a consumer’s personal data upon request, within 45 days of receiving the request. These two clauses alone can do more for New Yorkers’ digital privacy rights than ever before, allowing complete control over who may access and keep sensitive personal data.

BUILDING A SAFER FUTURE

Following the early success of the NYPA, New York announced its comprehensive plan to better protect children from the harms of social media algorithms, which are some of the main drivers of personal data collection. Governor Kathy Hochul, State Senator Andrew Gounardes, and Assemblywoman Nily Rozic recently proposed the Stop Addictive Feeds Exploitation (SAFE) For Kids Act, directly targeting social media sites and their algorithms. It has long been suspected that social media usage contributes to worsening mental health in the United States, especially among youths. The SAFE For Kids Act seeks to require parental consent before children can access social media feeds that use algorithms to boost usage.

On top of selling user data, social media sites like Facebook, YouTube, and X/Twitter also use carefully constructed algorithms to push content that the user has expressed interest in, usually based on the profiles they click on or the posts they ‘like’. Social media sites feed user data to algorithms they’ve designed to promote content that will keep the user engaged for longer, which exposes the user to more advertisements and produces more revenue.

Children, however, are particularly susceptible to these algorithms, and depending on the posts they view, they can be exposed to harmful images or content with serious consequences for their mental health. Social media algorithms can show children things they were never meant to see, exploiting a naiveté and blind trust that do not mix well with internet use. Distressing posts or controversial images can be plastered across children’s feeds whenever the algorithm determines they would drive better engagement. Under the SAFE For Kids Act, without parental consent, children on social media sites would see their feed in chronological order and only see posts from users they ‘follow’ on the platform. This change would completely alter the way platforms treat accounts associated with children, ensuring they are not exposed to content they do not seek out themselves. This legislation would build upon the foundations established by the NYPA, opening the door to further regulations that could increase protections for the average consumer and, more importantly, for the average child online.
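The difference between the two feeds is easy to see in code. Below is a toy sketch contrasting a typical engagement-ranked feed with the chronological, follows-only feed the Act would require for minors without parental consent; the post fields, scoring, and data are invented for illustration.

```python
# Toy sketch: engagement-ranked feed vs. the SAFE For Kids Act's default
# chronological, follows-only feed. All fields and data are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int           # seconds since epoch
    engagement_score: float  # platform-assigned "stickiness" (assumed)

def ranked_feed(posts: list[Post]) -> list[Post]:
    # Typical algorithmic ordering: the most engaging content first,
    # regardless of who posted it or when.
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)

def safe_for_kids_feed(posts: list[Post], followed: set[str]) -> list[Post]:
    # Follows-only, newest first; no engagement optimization at all.
    return sorted((p for p in posts if p.author in followed),
                  key=lambda p: p.timestamp, reverse=True)

posts = [
    Post("stranger", 1_700_000_300, engagement_score=9.8),
    Post("friend",   1_700_000_200, engagement_score=1.2),
    Post("friend",   1_700_000_100, engagement_score=0.4),
]
print([p.author for p in ranked_feed(posts)])                     # stranger first
print([p.author for p in safe_for_kids_feed(posts, {"friend"})])  # friends only, newest first
```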

New Yorkers: If you have ever spent time on the internet, your personal data is out there, but now you have the power to protect it.

Don’t Talk to Strangers! But if it’s Online, it’s Okay?

It is 2010.  You are in middle school and your parents let your best friend come over on a Friday night.  You gossip, talk about crushes, and go on all social media sites.  You decide to try the latest one, Omegle.  You automatically get paired with a stranger to talk to and video chat with.  You speak to a few random people, and then, with the next click, a stranger’s genitalia are on your screen.

Stranger Danger

Omegle is a free video-chatting social media platform.  Its primary function has become meeting new people and arranging “online sexual rendezvous.”  Registration is not required.  Omegle randomly pairs users for one-on-one video sessions.  These sessions are anonymous, and you can skip to a new person at any time.  Although there is a large warning on the home screen saying “you must be 18 or older to use Omegle”, no parental controls are available through the platform.  Should you want to install any parental controls, you must use a separate commercial program.

While the platform’s community guidelines illustrate the “dos and don’ts” of the site, it seems questionable that the platform can monitor millions of users, especially when users are not required to sign up, or to agree to any of Omegle’s terms and conditions.  It, therefore, seems that this site could harbor online predators, raising quite a few issues.

One recent case surrounding Omegle involved a pre-teen who was sexually abused, harassed, and blackmailed into sending a sexual predator obscene content. In A.M. v. Omegle.com LLC, the open nature of Omegle matched an 11-year-old girl with a sexual predator in his late thirties. Exploiting her susceptibility, he forced the 11-year-old to send pornographic images and videos of herself, perform for him and other predators, and recruit other minors. The predator was able to continue this horrific crime for three years by threatening to release the videos, pictures, and additional content publicly. The 11-year-old plaintiff sued Omegle on two general claims of platform liability implicating Section 230, but only one claim was able to break through the law.

Unlimited Immunity Cards!

Under 47 U.S.C. § 230 (Section 230), social media platforms are immune from liability for content posted by third parties. As part of the Communications Decency Act of 1996, Section 230 provides almost full protection against lawsuits for social media companies, since no platform is treated as a publisher or speaker of user-generated content posted on the site. Section 230 has gone so far as to shield Google and Twitter from liability for claims that their platforms were used to aid terrorist activities. In May of 2023, these cases reached the Supreme Court. Although the Court declined to rule on the Google case, it ruled on the Twitter case. Google was found not liable for the claim that it stimulated the growth of ISIS through targeted recommendations and inspired an attack that killed an American student. Twitter was immune from the claim that the platform aided and abetted a terrorist group in raising funds and recruiting members for a terrorist attack.

Wiping the Slate

In February of 2023, the District Court of Oregon, Portland Division, found that Section 230 immunity did not apply to Omegle on a products liability claim, and the platform was held liable for the predatory actions committed by the third party on the site. By side-stepping the third-party speech issue that comes with Section 230 immunity for an online publisher, the district court found Omegle responsible under the plaintiff’s products liability claim, which targeted the platform’s defective design, defective warning, negligent design, and failure to warn.

Three prongs must be established to preclude a platform from liability under Section 230 (a toy sketch of the test as a boolean predicate follows the list):

  1. A provider of an interactive site,
  2. Who is sought to be treated as a publisher or speaker, and
  3. For information provided by a third-party.
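Expressed as a simple conjunction, the test might look like the sketch below; this is a rough illustration of the three prongs, not legal analysis, and the names are hypothetical.

```python
# Rough sketch (not legal analysis) of the three-prong Section 230 test.
def section_230_immunity(interactive_service_provider: bool,
                         treated_as_publisher_or_speaker: bool,
                         information_from_third_party: bool) -> bool:
    """All three prongs must hold for immunity to attach."""
    return (interactive_service_provider
            and treated_as_publisher_or_speaker
            and information_from_third_party)

# A.M. v. Omegle: the products liability claim targeted the platform's own
# design, not third-party speech, so the second prong fails and, as the
# court found, immunity does not attach.
print(section_230_immunity(True, False, True))  # False
```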

It is clear that Omegle is an interactive site that fits the definition provided by Section 230. The issue then falls on the second and third prongs: whether the cause of action treated Omegle as the speaker of third-party content. The sole function of randomly pairing strangers creates the foreseeable danger of pairing a minor with an adult. As shown in the present case, “the function occurs before the content occurs.” Because the platform was designed negligently and with knowing disregard for the possibility of harm, the court ultimately concluded that liability for the platform’s function does not pertain to third-party published content, and that the claim targeted specific functions rather than users’ speech on the platform. Section 230 immunity did not apply to this first claim, and Omegle was held liable.

Not MY Speech

The plaintiff’s other claim implicating Section 230 immunity was that Omegle negligently failed to take reasonable precautions to provide a safe platform. There was a foreseeable risk of harm in marketing the service to children and adults and randomly pairing them. Unlike the products liability claim, the negligence claim was twofold: the function of matching people and the publishing of their communications to each other, both of which fall directly into Section 230’s immunity domain. The Oregon District Court drew a distinct line between the two claims: although Section 230 immunized Omegle against the negligence claim, the platform remained liable under products liability.

If You Cannot Get In Through the Front Door, Try the Back Door!

For almost 30 years, social media platforms have been nearly immune from liability under Section 230. In the last few years, as technology on these platforms has grown, judges have been trying to find loopholes in the law to hold companies liable. A.M. v. Omegle has only just moved through the district court level. If appealed, it will be an interesting case to follow, to see whether the ruling stands or is overruled in light of the other cases that have been decided.

How do you think a higher court will rule on issues like these?

When in Doubt, DISCLOSE it Out!

The sweeping transformation of social media platforms over the past several years has given rise to convenient and cost-effective advertising. Advertisers are now able to market their products or services to consumers (i.e., users) at low cost, right at their fingertips…literally! But convenience comes with a few simple and easy rules. Influencers, such as athletes, celebrities, and other high-profile individuals, are trusted by their followers to remain transparent. Doing so does not require anything difficult. In fact, including “Ad” or “#Ad” at the beginning of a post is satisfactory. The question then becomes: who’s making these rules?

The Federal Trade Commission (FTC) works to stop deceptive or misleading advertising and provides guidance on how to avoid it. Under FTC rules, individuals have a legal obligation to clearly and conspicuously disclose their material connection to the products, services, brands, and/or companies they promote on their feeds. The FTC highlights one objective component to help users identify an endorsement: a statement made by a speaker whose relationship with the advertiser is such that the statement can be understood to be sponsored by the advertiser. In other words, if the speaker is acting on behalf of the advertiser, the statement will be treated as an endorsement and subject to the guidelines. Several factors determine this, such as compensation, free products, and the terms of any agreement. Two basic principles of advertising law apply to all types of advertising in any medium: 1) advertisers must have a reasonable basis to substantiate their claims, and 2) disclosures must be clear and conspicuous. Overall, the FTC works to ensure transparent sponsorship in an effort to maintain consumer trust.

The Breakdown—When, How, & What Else

Influencers should disclose when they have a financial, employment, personal, or family relationship with a brand. Financial relationships are not limited to money. If, for example, a brand gives you a free product, disclosure is required even if you were not asked to mention it in a post. Similarly, if a user posts from abroad, U.S. law still applies if it is reasonably foreseeable that U.S. consumers will be affected.

When disclosing your material connection to the brand, make sure that disclosure is easy to see and understand. The FTC has previously disapproved of disclosure in places that are remote from the post itself. For instance, users should not have to press “show more” in the comments section to see that the post is actually an endorsement.

Another important consideration for advertisers and endorsers is to avoid talking about items they have not yet tried. They should also avoid saying that a product was great when they in fact thought it was not. In addition, individuals should not convey information or make claims that are unsupported by actual evidence.

However, not everyone who posts about a brand needs to disclose. If you want to post a Sephora haul or a Crumbl Cookie review, that is okay! As long as a company is not giving you products for free or paying you to sponsor it, you are free to post at your leisure, without disclosing.

Now that you realize how seamless disclosure is, it may be surprising that people still fail to do so.

Rule Breakers

In Spring 2020 we saw an uptick in social media posts, as most people abided by stay-at-home orders and turned to social media for entertainment. TikTok is deemed particularly addictive, with users spending substantially more time on it than on other apps, such as Instagram and Twitter.

TikTok star Charli D’Amelio spoke positively about the enhancement drink Muse in a Q&A post. She never acknowledged that the brand was paying her to sponsor its product, and she failed to use the platform’s content-enabling tool, which makes it even easier for users to disclose. D’Amelio is the second most followed account on the platform.

The Teami brand found itself in a similar position when stars like Cardi B and Brittany Renner made unfounded claims that the wellness company’s products delivered unrealistic health benefits. The FTC instituted a complaint alleging that the company misled consumers into thinking its 30-day detox pack would ensure weight loss. A subsequent court order prohibited the company from making such unsubstantiated claims.

Still, these influencers were hardly punished, receiving a mere ‘slap on the wrist’ for their inadequate disclosures: warning letters and some bad press.

Challenges in Regulation & Recourse

Section 5(a) of the FTC Act is the statute that allows the agency to investigate and prevent unfair methods of competition, and it gives the FTC the authority to seek relief for consumers, including injunctions, restitution, and, in some cases, civil penalties. However, regulation is challenging because noncompliance is so easy. While endorsers bear the ultimate responsibility to disclose their content, advertising companies are urged to implement procedures that make disclosure more likely. There is a never-ending amount of content on social media to regulate, making it difficult for entities like the FTC to know when rules are actually being broken.

Users can report undisclosed posts directly through their social media accounts, to their state attorney general’s office, or to the FTC. Private parties can also bring suit. In 2022, a travel agency group sued a travel influencer for deceptive advertising. The influencer made false claims, such as being the first woman to travel to every country, and failed to disclose paid promotions on her Instagram and TikTok accounts. The group seeks to enjoin the influencer from advertising without disclosing and to compel corrective measures on her remaining posts that violate the FTC’s rules. Social media users are better able to weigh the value of endorsements when they can see the truth behind such posts.

In a world filled with filters, when it comes to advertisements on social media, let’s just keep it real.

THE SCHEME BEHIND AN ILLEGAL STREAM

FOLLOW THE STREAM TOWARDS A FELONY

The Protecting Lawful Streaming Act makes it a felony to engage in large-scale streaming of copyrighted material. The law was introduced on December 10th, 2020, in response to increased concern surrounding live audio and video streaming in recent years. Such streaming has transformed society and become one of the most influential ways people choose to enjoy various forms of content. Yet the growth of legitimate streaming services has continuously been accompanied and disturbed by unlawful streaming of copyrighted materials. The illegal streaming of copyrighted material was only a misdemeanor until the Protecting Lawful Streaming Act became one of America’s newest additions to the law.

Under the Protecting Lawful Streaming Act, a person must:

  1. Act willfully,
  2. Act for purposes of commercial advantage or private financial gain, and
  3. Offer or provide to the public a digital transmission service.

ALL FOR ONE, ONE FOR ALL

The law’s enactment subjects those who indulge in hosting illegal streams to severe criminal penalties. Accordingly, anyone who hosts an illegal stream that infringes copyrighted material and obtains an economic benefit from it now faces felony charges. Many fail to recognize that while the individual responsible for hosting the illegal stream faces criminal charges, individuals who merely view the stream do not technically violate any criminal law. Therefore, illegal streams that host hundreds or even thousands of viewers allow for no criminal action to be taken, or even threatened, against these spectators. Instead, the focus is entirely on the host of the illegal stream.

PLATFORM ENGINEERING IS PERFECTLY IMPERFECT

The question then becomes: what does social media have to do with illegal streaming? For starters, social media platforms serve as one of, if not the, most influential ways illegal streams reach society. Social media platforms are designed to spread information; they take information and make it available worldwide within seconds. As such, these platforms’ engineering does precisely what illegal streaming hosts want: it exposes these streams to millions of individuals who may indulge and use copyrighted material for their benefit. Hashtags, likes, shares, and other methods of expansion allow hosts to capitalize on these platforms’ designs for their own personal and financial gain.

NOT MY MESS, NOT MY PROBLEM

Social media platforms are not liable for exposure of copyrighted material on their platforms. According to the Digital Millennium Copyright Act, the only requirement is that these platforms take prompt action when contacted by rights holders. However, experience thus far shows that social media platforms fail to take the initiative and are generally unwilling to address this ongoing concern. The argument on behalf of social media platforms is that the duty falls not on them but on the rights holders to report an infringement. Even so, social media platforms could take far more initiative to address illegal streaming. While platforms have at least some measures in place to help prevent infringement of owners’ work, the system is flawed, with many unresolved areas of concern. The current measures by themselves fail to provide reassurance that they can protect the content of the actual owner from being exploited for the financial benefit of illegal streaming hosts around the world.

MORE MONEY, MORE PROBLEMS

The question then becomes: how do illegal streaming services impact people? Major entertainment networks such as the NFL, NBA, and UFC are just a few examples of businesses whose most critical revenue stream, television viewership, is threatened by illegal streaming. Beyond sports, movies and non-sport television programs are reported to have lost billions of dollars at the hands of illegal streaming. Thus, by enacting the Protecting Lawful Streaming Act, the goal is to deter harmful criminal activity and simultaneously protect the rights of creators and copyright owners.

Furthermore, the people viewers would least expect to be harmed by illegal streaming are also in jeopardy: the viewers themselves! Illegal streams carry various risks of malicious software that can infect one’s device. This exposure puts individuals’ personal information at risk and can lead to identity fraud, financial loss, and permanent damage to the devices used to watch these illegal streams.

WHAT’S MINE IS YOURS

Society must also recognize and address how individuals can undercut content owners legally, yet unfairly. For instance, an individual may legally purchase a pay-per-view event and then live stream it on their social media for others to spectate. Someone can lawfully buy the stream without becoming the host of an illegal one, yet the same issue arises: the owners of the content are stuck with no resolution and lose out on potential revenue. Rather than each individual purchasing the content, one purchase is used as a sacrifice while the others reap the same benefit without spending a dime. The same scenario arises when individuals gather in one home to watch a pay-per-view event or a movie on demand. This conduct is not illegal, but it negates the potential revenue these industries might obtain, and it was, is, and will consistently be recognized as legal activity.

AN ISSUE, BUT NOT AN ISSUE WORTH SOLVING

Even streaming platforms like Netflix fail to take measures against a related behavior: not the illegal streaming of their content, but the sharing of passwords for a single account. Although such conduct can be subject to civil liability as a breach of contractual terms, or even criminal liability if fraud is found, these platforms decline to act against it. Ultimately, enforcement would be too costly and could result in losing viewership.

Through these findings, it is clear that illegal streaming has taken, and continues to take, advantage of the actual copyright owners of this material. The Protecting Lawful Streaming Act is society’s most recent attempt to minimize this ongoing issue by increasing the criminal penalty to deter such conduct. Yet, given the inability to identify and diminish these illegal streams on social media, many continue to get away with this behavior daily. The legal loopholes discussed above suggest that entertainment industries may never see the revenue stream they anticipate. Only time will tell how society responds to this predicament and whether some law will address it in the foreseeable future. If the law held social media platforms to higher standards of accountability for this conduct, would it make a difference? Even so, would minimizing social media’s influence on the spread of illegal streams have a lasting impact?

Destroying Defamation

The explosion of Fake News across social media sites is destroying plaintiffs’ ability to succeed in defamation actions. The recent proliferation of rushed journalism, online conspiracy theories, and the belief that most stories are, in fact, “Fake News” have created a desert of veracity. Widespread public skepticism about even the most mainstream social media reporting means plaintiffs struggle to convince jurors that third parties believed any reported statement to be true. Such proof is necessary for a plaintiff to establish the elements of defamation.

Fake News Today

Fake News is any journalistic story that knowingly and intentionally includes untrue factual statements. Today, many speak of Fake News as a noun. There is no shortage of examples of Fake News and its impact.

      • Pizzagate: During the 2016 Presidential Election, Edgar Madison Welch, 28, read a story on Facebook claiming that Hillary Clinton was running a child trafficking ring out of the basement of a pizzeria. Welch, a self-described vigilante, shot open a locked door of the pizzeria with his AR-15.
      • A study by three MIT scholars found that false news stories spread faster on Twitter than true stories, with the former being 70% more likely to be retweeted than the latter.
      • During the defamation trial of Amber Heard and Johnny Depp, a considerable number of “Fake News” reports circulated across social media platforms, particularly TikTok, Twitter, and YouTube, attacking Ms. Heard at a disproportionately higher rate than Mr. Depp.


What is Defamation?

To establish defamation, a plaintiff must show the defendant published a false assertion of fact that damages the plaintiff’s reputation. Hyperbolic language or other indications that a statement was not meant to be taken seriously are not actionable. Today’s understanding that everything on the Internet is susceptible to manipulation destroys defamation.

Because the factuality of a statement is a question of law, a plaintiff must first convince a judge that the offending statement is fact and not opinion. Courts often find that Internet and social media statements are hyperbole or opinion. If a plaintiff succeeds in persuading the judge, the issue of whether the statement defamed the plaintiff heads to the jury. A jury faced with a defamation claim must determine whether the statement of fact harmed the plaintiff’s reputation or livelihood to the extent that it caused the plaintiff to incur damages. The prevalence of Fake News creates another layer of difficulty for the Internet plaintiff, who must convince the jury that third parties took the statement to be true.

Defamation’s Slow and Steady Erosion

Since the 1960s, the judiciary has limited plaintiffs’ ability to succeed in defamation claims. The decisions in New York Times v. Sullivan and Gertz increased the difficulty for public figures, and those with limited public figure status, to succeed by requiring them to prove actual malice by the defendant, a standard higher than the mere negligence standard that applies to individuals who are not of community interest.

The rise of Internet use, mainly social media, presents plaintiffs with yet another hurdle. Plaintiffs can only succeed if the challenged statement is fact, not opinion. However, judges routinely find that statements made on the Internet are opinions, not facts. The combined effect of Supreme Court limitations on proof and the increased belief that social media posts are mostly opinion has limited plaintiffs’ ability to succeed in defamation claims.

Destroying Defamation

If the Supreme Court and social media have eroded defamation, Fake News has destroyed it. Today, convincing a jury that a false statement purporting to be fact has defamed a plaintiff is difficult given the dual issues of society’s objective mistrust of the media and the understanding that information on the Internet is generally opinion, not fact. Fake News sows confusion and makes it almost impossible for jurors to believe any statement has the credibility necessary to cause harm.

To be clear, in some instances Fake News is so intolerable that a jury will find for the plaintiff. A Connecticut jury found conspiracy theorist Alex Jones liable for defamation based on his assertion that the government had faked the Sandy Hook shootings.

But often, plaintiffs are unsuccessful even where the challenged language is riddled with untruths. Fox News successfully defended itself against a lawsuit claiming that it had aired false and deceptive content about the coronavirus, even though its reporting was, in fact, untrue.

Similarly, a federal judge dismissed a defamation case against Fox News over Tucker Carlson’s report that the plaintiff had extorted then-President Donald Trump. In reaching its conclusion, the judge observed that Carlson’s comments were rhetorical hyperbole and that the reasonable viewer “arrive[s] with the appropriate amount of skepticism.” Reports of media success in defending against defamation claims further fuel media mistrust.

The current polarization caused by identity politics is furthering the tendency for Americans to mistrust the media. Sarah Palin announced that the goal of her recent defamation case against The New York Times was to reveal that the “lamestream media” publishes “fake news.”

If jurors believe that no reasonable person could credit a challenged statement as accurate, they cannot find that the allegedly defamatory statement caused harm. An essential element of defamation is that the defendant’s remarks damaged the plaintiff’s reputation. The large number of people who believe the news is fake, the media’s rush to publish, and external attacks on credible journalism have made truth itself a problem for society. The potential for defamatory harm is minimal when every news story is questionable. Ultimately, Fake News is a blight on the tort of defamation and, like the credibility of present-day news organizations, will erode it to the point of irrelevance.

Is there any hope for a world without Fake News?


Social Media Has Gone Wild

Increasing technological advances and consumer demands have taken shopping to a new level. You can now buy clothes, food, and household items from the comfort of your couch, in a few clicks: add to cart, pay, ship, and confirm. No longer are you limited to products sold in nearby stores; shipping makes it possible to obtain items internationally. Even social media platforms have shopping features for users, such as Instagram Shopping, Facebook Marketplace, and WhatsApp. Despite its convenience, online shopping has also created an illegal marketplace for wildlife species and products.

Most trafficked animal: the Pangolin

Wildlife trafficking is the illegal trading or sale of wildlife species and their products. Elephant ivory, rhinoceros horns, turtle shells, pangolin scales, tiger furs, and shark fins are a few examples of highly sought-after wildlife products. As social media platforms expand, so does wildlife trafficking.

Wildlife Trafficking Exists on Social Media?

Social media platforms make it easier for people to connect with others internationally. These platforms are great for staying in contact with distant aunts and uncles, but they also create another channel for criminals and traffickers to communicate. They provide a way to remain anonymous without having to meet in person, which makes it harder for law enforcement to identify a user’s true identity. Even so, can social media platforms be held responsible for making it easier for criminals to commit wildlife trafficking crimes?

Thanks to Section 230 of the Communications Decency Act, the answer is most likely: no.

Section 230 provides broad immunity to websites for content that third-party users post on them. Even when a user posts illegal content, the website cannot be held liable for it. However, there are certain exceptions where websites have no immunity, including human and sex trafficking. Although these carve-outs are fairly new, they make clear that there is an interest in protecting people vulnerable to abuse.

So why don’t we apply the same logic to animals? Animals are also a vulnerable population. Many species are no match for guns, traps, and human encroachment on their natural habitats. Similar to children, animals may not have the ability to understand what trafficking is, or the physical strength to fight back. Social media platforms, like Facebook, attempt to combat the online wildlife trade, but their efforts continue to fall short.

How is Social Media Fighting Back?


In 2018, the World Wildlife Fund and 21 tech companies created the Coalition to End Wildlife Trafficking Online. The goal was to reduce illegal trade by 80% by 2020. While it is difficult to measure whether this goal was achieved, some social media platforms have created new policies to help meet it.

“We’re delighted to join the coalition to end wildlife trafficking online today. TikTok is a space for creative expression and content promoting wildlife trafficking is strictly prohibited. We look forward to partnering with the coalition and its members as we work together to share intelligence and best-practices to help protect endangered species.”

Luc Adenot, Global Policy Lead, Illegal Activities & Regulated Goods, TikTok

In 2019, Facebook banned the sale of animals altogether on its platform. But this did not stop users. A 2020 report showed a variety of illegal wildlife still for sale on Facebook, clearly showing the new policies were ineffective. Furthermore, the report stated:

“29% of pages containing illegal wildlife for sale were found through the ‘Related Pages’ feature.”

This suggests that Facebook’s algorithm purposefully connects users to pages and similar content based on their interests. Algorithms incentivize traffickers to rely and depend on social media platforms, which do half of the work for them:

      • Facilitating communication
      • Connecting users to potential buyers
      • Connecting users to other sellers
      • Discovering online chat groups
      • Discovering online community pages

Far from reducing wildlife trafficking outreach, this accelerates the visibility of such content to other users. Do Facebook’s algorithms go beyond Section 230 immunity?

Under these circumstances, Facebook maintains immunity. In Gonzalez v. Google LLC, the court explained that websites are not liable for user content when they employ content-neutral algorithms, meaning the website did nothing more than program an algorithm to present content similar to a user’s interests. The website did not directly encourage the publication of illegal content, nor did it treat that content differently from other user content.
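To illustrate what “content-neutral” means in practice, here is a toy sketch of an interest-overlap recommender; the pages, tags, and scoring are invented, and the point is only that the ranking logic is indifferent to what a page actually sells.

```python
# Toy sketch of a "content-neutral" recommender: pages are ranked purely by
# overlap with the user's interest tags. All data here is invented.
user_interests = {"aquariums", "carving", "exotic pets"}

pages = {
    "Reef Keepers":       {"aquariums", "live coral"},
    "Ivory Collectibles": {"carving", "antiques"},  # illicit goods get the same treatment
    "Hiking Club":        {"outdoors", "travel"},
}

def related_pages(interests: set[str], pages: dict[str, set[str]]) -> list[str]:
    # Score = number of shared tags; the algorithm never inspects legality.
    scores = {name: len(tags & interests) for name, tags in pages.items()}
    return sorted((name for name, s in scores.items() if s > 0),
                  key=lambda name: -scores[name])

print(related_pages(user_interests, pages))  # ['Reef Keepers', 'Ivory Collectibles']
```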

What about when a website profits from illegal posts? Facebook receives a 5% selling fee for each shipment sold by a user. Since illegal wildlife products are rare, these transactions are highly profitable. A pound of ivory can be worth up to $3,300. If a user sells five pounds of ivory from endangered elephants on Facebook, the platform profits $825 from that one transaction. Facebook Marketplace’s algorithm works much like the interest-and-engagement algorithm described above: it can push illegal wildlife products to a user who has searched for similar products. If illegal products are constantly pushed and successful sales are made, Facebook benefits and profits from these transactions. Does this mean that Section 230 will continue to protect Facebook when it profits from illegal activity?
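The arithmetic behind that $825 figure is straightforward; the sketch below simply reproduces the numbers quoted above.

```python
# Reproducing the article's arithmetic: a 5% selling fee applied to a
# hypothetical five-pound ivory sale at the quoted $3,300-per-pound price.
price_per_pound = 3_300   # USD, upper-end value quoted above
pounds_sold = 5
selling_fee = 0.05        # Facebook's 5% fee per shipment

sale_total = price_per_pound * pounds_sold   # $16,500
platform_cut = sale_total * selling_fee      # $825
print(f"Platform profit on one transaction: ${platform_cut:,.0f}")
```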

Evading Detection

Even with Facebook’s prohibited-sales policy, users get creative to avoid detection. A simple search for “animals for sale” led me to a public Facebook group. Within 30 seconds of scrolling, I found a user selling live coral and another user selling an aquarium system with live coral and live fish. The former listing reads: Leather $50. However, the picture shows live coral in a fish tank. “Leather” identifies the type of coral without saying it is coral. Even if it were fake coral, a simple Google search shows a piece of fake coral is worth less than $50. If Facebook is failing to prevent users from selling live coral and live fish, it is most likely failing to prevent online wildlife trafficking on its platform.

Another common method of evading detection is to post a vague description or a photo of an item along with the words “pm me” or “dm me,” abbreviations for “private message me” or “direct message me.” This quickly directs interested users to reach out to the seller and discuss details in a private chat, away from the prying public eye. Sometimes a user will offer alternative contact methods, such as a personal phone number or an email address, moving the interaction off the platform entirely or onto a new one.
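A minimal sketch of why coded listings like “Leather $50” slip past simple moderation is shown below; the flagged terms and sample posts are invented for illustration, and real platform filters are presumably far more sophisticated.

```python
# Minimal sketch of a naive keyword filter and how coded listings evade it.
# The term list and sample posts are invented for illustration.
import re

FLAGGED = re.compile(r"\b(ivory|rhino horn|pangolin|live coral)\b", re.IGNORECASE)

posts = [
    "Live coral for sale, pm me",  # caught: names the item outright
    "Leather $50, dm me",          # evades: 'Leather' is coded slang for a coral type
]

for post in posts:
    status = "flagged" if FLAGGED.search(post) else "passes filter"
    print(f"{post!r}: {status}")
```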

Due to high profitability, the stakes are lower when transactions are conducted anonymously online. Social media platforms are great for concealing a user’s identity. Users can adopt fake names to maintain anonymity behind their computer and phone screens, and there are no real consequences for doing so when the user is unknown. Nor is there any type of identity verification to discover a user’s true identity. Even if a user is banned, the person can create a new account under a different alias. Some users are criminals tied to organized crime syndicates or terrorist groups, and many operate overseas, outside of the United States, which makes them difficult to locate. Thus, social media platforms incentivize criminals to hide among various aliases with little to lose.

Why Are Wildlife Products Popular?

Wildlife products are in high demand for human benefit and use, and humans value them for many reasons.

Do We Go After the Traffickers or the Social Media Platform?

Taking down every single wildlife trafficker, along with the users who facilitate these transactions, would be the perfect solution to end wildlife trafficking. Realistically, it is too difficult to identify these users due to online anonymity and geographical limitations. On the other hand, social media platforms continue to tolerate these illegal activities.

Here, it is clear that Facebook is not doing enough to stop wildlife trafficking. With each sale made on Facebook, Facebook receives a percentage. Section 230 should not protect Facebook when it reaps the benefits of illegal transactions. That goes a step too far and should open Facebook up to a new marketplace: Section 230 liability.

Should Facebook maintain Section 230 immunity when it receives proceeds from illegal wildlife trafficking transactions? Where do we draw the line?

Mental Health Advertisements on #TikTok

The stigma surrounding mental illness has persisted since the mid-twentieth century. This stigma is one of the many reasons why 60% of adults with a mental illness often go untreated. The huge treatment disparity demonstrates a significant need to spread awareness and make treatment more readily available. Ironically, social media, long ridiculed for its negative impact on the mental health of its users, has become an important tool for spreading awareness about mental health treatment and de-stigmatizing it.

The content shared on social media is a combination of users sharing their experiences with a mental health condition and companies that treat mental health conditions using advertisements to attract potential patients. At first glance, this appears to be a powerful way to use social media to bridge treatment gaps. However, it raises concerns that vulnerable people will see this content, self-diagnose with a condition they might not have, and undergo unnecessary, potentially dangerous, treatment. They might also fail to undergo needed treatment because the misinformation they were exposed to led them to overlook the true cause of their symptoms.

Attention Deficit Hyperactivity Disorder (“ADHD”) is an example of a condition that social media has jumped on. #ADHD has 14.5 billion views on TikTok and 3 million posts on Instagram. Between 2007 and 2016, diagnoses of ADHD increased by 123%. Further, prescriptions for stimulants, which treat ADHD, have increased 16% since the pandemic. Many experts attribute this, in large part, to the use of social media in spreading awareness about ADHD and to the rise of telehealth companies that emerged to treat ADHD during the pandemic. These companies have jumped on viral trends with targeted advertisements that oversimplify what ADHD actually looks like and then offer treatment to those who click on the advertisement.

The availability of and reliance on telemedicine grew rapidly during the COVID-19 pandemic, and many restrictions on telehealth were suspended, creating an opening in the healthcare industry for new companies. ‘Done’ and ‘Cerebral’ are two examples of companies that emerged during the pandemic to treat ADHD. These companies attract, accept, and treat patients through a very simple procedure: (1) social media advertisement, (2) short online questionnaire, (3) virtual visit, and (4) prescription.

Both Done and Cerebral have utilized social media platforms like Instagram and TikTok to lure potential patients to their services. The advertisements vary, but they all highlight how easy and affordable treatment is by emphasizing convenience, accessibility, and low cost. Accessing the care offered is as simple as swiping up on an advertisement that appears as users scroll through the platform. These targeted ads depict images of people seeking treatment, taking medication, and having their symptoms go away. Further, these companies utilize viral trends and memes to increase the effectiveness of the advertisements, which typically oversimplify complex ADHD symptoms and mislead consumers.


While these companies are increasing healthcare access for many patients through low cost and virtual platforms, this speedy version of healthcare blurs the line between offering treatment to patients and selling prescriptions to customers through social media. Further, medical professionals are concerned with how these companies market addictive stimulants to young users, yet remain largely unregulated due to outdated guidelines on advertisements for medical services.

The advertising model utilized by these telemedicine companies emphasizes the need to modify existing laws so that these advertisements are subject to the FDA’s unique oversight, protecting consumers. These companies target young consumers and other vulnerable people, encouraging them to self-diagnose based on misleading information about the criteria for a diagnosis. There are eighteen symptoms of ADHD, and the average person meets at least one or two of them, which is exactly what these ads emphasize.

Advertisements in the medical sphere are regulated by either the FDA or the FTC. The FDA has unique oversight over the marketing of prescription drugs by manufacturers and drug distributors in what is known as direct-to-consumer (“DTC”) drug advertising. Critics of prescription drug advertisements highlight the negative impact DTC advertising has on the patient-provider relationship, because patients go to providers expecting or requesting particular prescription treatment. To minimize these risks, the FDA requires that a prescription drug advertisement be truthful, present a fair balance of the risks and benefits associated with the medication, and state an approved use of the medication. However, if the advertisement does not mention a particular drug or treatment, it eludes the FDA’s oversight.

Thus, the marketing of medical services, which does not involve marketing prescription drugs, is regulated only by the Federal Trade Commission (“FTC”) in the same manner as any other consumer good, meaning the advertisement simply must not be false or misleading.

The advertisements these telehealth companies are putting forward demonstrate that it is time for the FDA to step in, because the companies are combining medical services with prescription drug treatment. They use predatory tactics to lure consumers into believing they have ADHD and then provide direct treatment on a monthly subscription basis.

The potential for consumer harm is clear, and many experts point to similarities between the opioid epidemic and stimulant drugs. However, the FDA has not yet made any changes to how it regulates advertising in light of social media. The laws regarding DTC drug advertising were prompted in part by consumers’ practice of self-diagnosis and self-medication and by the false therapeutic claims made by manufacturers. The telemedicine model these companies use raises these exact concerns: it targets consumers, convinces them they have a specific condition, and then offers the medication to treat it after a quick virtual visit. Instead of patients going to their doctors requesting a specific prescription that may be inappropriate for their medical needs, patients are going to telehealth providers that only prescribe a particular medication that may be equally inappropriate for their medical needs.

Through the use of social media, diagnosis and treatment with addictive prescription drugs can now be initiated by an interactive advertisement, something that was not possible when the FDA determined that these types of advertisements would not be subject to its oversight. Thus, to protect consumers, it is vital that telemedicine advertisements be subjected to more intrusive monitoring than ordinary consumer goods. This would require the companies making these advertisements to properly address the complex symptoms associated with conditions like ADHD and to give fair balance to the harms of treatment.

According to the Pew Research Center, 69% of adults and 81% of teens in the United States use social media, and about 48% of Americans regularly get news from social media. We often talk about misinformation in politics and news stories, but it’s permeating every corner of the internet. As these numbers continue to grow, it’s crucial to develop new methods to protect consumers, and regulating these advertisements is only the first step.

Is it HIGH TIME we allow Cannabis Content on Social Media?

The Cannabis Industry is Growing like a Weed

Social media provides a relationship between consumers and their favorite brands. Just about every company has a social media presence to advertise its products and grow its brand. Large companies command the advertising market, but smaller companies and one-person startups have their place too. The opportunity to expand a brand using social media is open to just about everyone, except the cannabis industry. The developing struggle between social media companies and the politics of cannabis has created an onslaught of problems for the modern cannabis market. With recreational marijuana use legal in 21 states and Washington, D.C., and medical marijuana legal in 38 states, it may be time for this community to join the social media metaverse.

We now know that algorithms determine how many followers on a platform see a business’s content, whether the content is permitted at all, and whether a post or user should be deleted. Like legislators, the legal cannabis industry has found itself struggling with social media giants (like Facebook, Twitter, and Instagram) for increased transparency about their internal processes for filtering information, banning users, and moderating their platforms. Mainstream cannabis businesses have been prevented from making their presence known on social media in the past, and legitimate businesses are still being placed in a box with illicit drug users and barred from advertising on public social media sites. The legal cannabis industry is expected to be worth over $60 billion by 2024, and support for federal legalization is at an all-time high (68%). Now more than ever, brands are fighting for higher visibility amongst cannabis consumers.

Recent Legislation Could Open the Door for Cannabis

The question remains whether legal cannabis businesses have a place in the ever-changing landscape of the social media metaverse. Marijuana is currently a Schedule I substance under the Controlled Substances Act of 1970, a categorization that means it has no currently accepted medical use and a high potential for abuse. While that definition may have seemed acceptable when cannabis was placed on the DEA’s list back in 1971, considerable evidence has since been presented in opposition to that decision. Historians note that overt racism, combined with New Deal reforms and bureaucratic self-interest, is often blamed for the first round of federal cannabis prohibition under the Marihuana Tax Act of 1937, which restricted possession to those who paid a steep tax for a limited set of medical and industrial applications. The legitimacy cannabis businesses have gained over the past few decades through individual state legalization (both medical and recreational) is at the center of the debate over whether they should be able to market themselves as any other business can. Legislation like the MORE Act (Marijuana Opportunity Reinvestment and Expungement Act), which was passed by the House of Representatives, gives companies some hope that they may one day be seen as legitimate businesses. If passed into law, the Act would lower marijuana’s scheduling or remove it from the schedule entirely, which would blow the hinges off the cannabis industry; legitimate businesses in states that have legalized its use are patiently waiting in the wings for that moment.

States like New York have made great strides in passing legislation to legalize marijuana the “right” way and legitimize businesses, while simultaneously separating them from the illegal and dangerous drug trade that has parasitically attached itself to this movement. The Marihuana Regulation and Taxation Act (MRTA) establishes a new framework for the production and sale of cannabis, creates a new adult-use cannabis program, and expands the existing medical cannabis and cannabinoid (CBD) hemp programs. The MRTA also established the Office of Cannabis Management (OCM), the governing body for cannabis reform and regulation, particularly for emerging businesses that wish to establish a presence in New York. The OCM oversees the licensure, cultivation, production, distribution, sale, and taxation of medical, adult-use, and cannabinoid hemp products within New York State. This sort of regulatory body and structure is becoming commonplace in a space once deemed a regulatory “wild west” marked by lawlessness.

But, What of the Children?

In light of all the regulation slowly surrounding cannabis businesses, will the rapidly growing social media landscape have to concede to the industry’s demands and recognize its presence? Even with regulations in place, cannabis exposure remains a concern for many, particularly regarding the more impressionable members of the user pool. Children and young adults are spending more time than ever online and on social media. On average, daily screen use among tweens (ages 8 to 12) rose from four hours and 44 minutes to five hours and 33 minutes, and among teens (ages 13 to 18) from seven hours and 22 minutes to eight hours and 39 minutes. This group of social media consumers is of particular concern to both legislators and the social media companies themselves.

The MRTA protects against companies designing cannabis advertising to resemble common brands marketed to children. Companies are restricted to using their name and logo, with explicit language stating that the item inside the wrapper contains cannabis or tetrahydrocannabinol (THC). Between MRTA restrictions, strict community guidelines from several social media platforms, and government regulations around the promotion of marijuana products, many brands are having a hard time building their communities’ presence on social media.

Cannabis companies have resorted to creating their own platforms to promote the content they are prevented from blasting on other sites. Big-name rapper and cannabis enthusiast Berner, who created the popular edible brand “Cookies,” has been approached to partner with the creators of these platforms to bolster their brands and raise awareness. Unfortunately, the sites became exactly what mainstream social media companies feared when creating their guidelines: an unsavory haven for illicit drug use and other illegal behavior. One of the pioneer apps in this field, Social Club, was removed from the app store after multiple reports of illegal behavior. The apps have since been more tightly regulated internally but have not taken off as their creators intended, and legitimate cannabis businesses are still being blocked from advertising on mainstream apps.

These Companies Won’t go Down Without a Fight

While cannabis companies are generally not allowed on social media sites, special rules apply when a legal cannabis business does maintain a presence on one. Social media is the fastest and most efficient way to advertise to a desired audience, and with appropriate regulatory oversight and within the confines of the changing law, social media sites may start to feel pressure to allow more advertising from cannabis brands.

A petition has been created to bring META, the company that owns Facebook and Instagram among other sites, to the table to discuss the growing frustrations with the strict restrictions on its social media platforms. The petition on Change.org has managed to amass 13,000 signatures. Arden Richard, the founder of WeedTube, has been outspoken about the issue, saying, “This systematic change won’t come without a fight. Instagram has already begun deleting posts and accounts just for sharing the petition.” He also stated, “The cannabis industry and community need to come together now for these changes and solutions to happen.” If not, he fears, “we will be delivering this industry into the hands of mainstream corporations when federal legalization happens.”

Social media companies recognize the magnitude of the legal cannabis community; after all, they have been banning its content nonstop since its inception. However, the changing landscape of the cannabis industry has made the decision to ban that content more difficult. Until federal regulation changes, businesses operating in states that have legalized cannabis will remain banned by the largest advertising platforms in the world.

 
