Privacy Please: Privacy Law, Social Media Regulation and the Evolving Privacy Landscape in the US

Social media regulation is a touchy subject in the United States. Congress and the White House have proposed, advocated for, and voted on various bills aimed at protecting people from data misuse and misappropriation, misinformation, harms suffered by children, and the implications of vast data collection. Some of the most potent concerns about social media stem from the platforms' use and misuse of information: from the method of collection, to notice of collection, to use of the collected information. Efforts to pass a bill regulating social media have been frustrated, primarily by the First Amendment right to free speech, and Congress has thus far failed to enact meaningful regulation of social media platforms.

The way forward may well be through privacy law. Privacy laws give people some right to control their own personhood: their data, their right to be left alone, and how and when others see and view them. Privacy law originated in its current form in the late 1800s, with the impetus being freedom from constant surveillance by paparazzi and reporters and the right to control one's own personal information. As technology evolved, our understanding of privacy rights grew to encompass rights in our likeness, our reputation, and our data. Current US privacy laws do not directly address social media, and a struggle is currently playing out among the platforms' vast data collection practices, platform immunity under Section 230, and users' private rights of privacy.

There is very little federal privacy law, and what does exist is narrowly tailored to specific purposes and circumstances in the form of specific bills. Some states have enacted their own privacy law schemes, with California at the forefront and Virginia, Colorado, Connecticut, and Utah following in its footsteps. In the absence of a comprehensive federal scheme, privacy law is often judge-made and offers several private rights of action for a person whose right to be left alone has been invaded in some way. These are tort actions available for one person to bring against another for a violation of their right to privacy.

Privacy Law Introduction

Privacy law policy in the United States is premised on three fundamental personal rights to privacy:

  1. Physical right to privacy – the right to control your own information
  2. Privacy of decisions – such as decisions about sexuality, health, and child-rearing. These are the constitutional rights to privacy; they typically concern not the information itself but the acts that flow from those decisions.
  3. Proprietary privacy – the ability to protect your information from being misused by others in a proprietary sense.

Privacy Torts

Privacy law, as it concerns the individual, gives rise to four separate tort causes of action for invasion of privacy:

  1. Intrusion upon Seclusion – Privacy law provides a tort cause of action for intrusion upon seclusion when someone intentionally intrudes, physically or otherwise, upon another's reasonable expectation of seclusion, and the intrusion is objectively highly offensive.
  2. Publication of Private Facts – One gives publicity to a matter concerning the private life of another that is not of legitimate concern to the public, and the matter publicized would be objectively highly offensive. The First Amendment provides a strong defense for publication of truthful matters when they are considered newsworthy.
  3. False Light – One gives publicity to a matter concerning another that places the other before the public in a false light, where the false light would be objectively highly offensive and the actor had knowledge of, or acted in reckless disregard as to, the falsity of the publicized matter and the false light in which the other would be placed.
  4. Appropriation of Name and Likeness – One appropriates another's name or likeness for the defendant's own use or benefit. There is no appropriation when a person's picture is used to illustrate a non-commercial, newsworthy article. The appropriation is usually commercial in nature but need not be, and it need not be of the name itself: it can be of "identity," including the reputation, prestige, social or commercial standing, public interest, or other value in the plaintiff's likeness.

These private rights of action are currently unavailable against social media platforms because of Section 230 of the Communications Decency Act, which provides broad immunity to online providers for posts on their platforms and thus prevents any of the privacy torts from being raised against them.

The Federal Trade Commission (FTC) and Social Media

Privacy law can implicate social media platforms when their practices become unfair or deceptive to the public, through investigation by the Federal Trade Commission (FTC). The FTC is the only federal agency with both consumer protection and competition jurisdiction across broad sectors of the economy, and it investigates business practices that are unfair or deceptive. Section 5 of the FTC Act, 15 U.S.C. § 45, prohibits "unfair or deceptive acts or practices in or affecting commerce" and grants the FTC broad jurisdiction over the privacy practices of businesses. A trade practice is unfair if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers themselves and is not outweighed by countervailing benefits to consumers or competition. A deceptive act or practice is a material representation, omission, or practice that is likely to mislead a consumer acting reasonably in the circumstances, to the consumer's detriment.

Critically, there is no private right of action under the FTC Act. The FTC cannot impose fines for Section 5 violations, but it can obtain injunctive relief. By design, the FTC has very limited rulemaking authority and looks to consent decrees and procedural, long-lasting relief as its ideal remedy. The FTC pursues several types of misleading or deceptive policies and practices that implicate social media platforms: notice and choice paradigms, broken promises, retroactive policy changes, inadequate notice, and inadequate security measures. Its primary objective is to negotiate a settlement in which the company submits to certain measures of control or oversight by the FTC for a set period of time. Violations of these agreements can yield additional consequences, including steep fines and vulnerability to class action lawsuits.

Relating to social media platforms, the FTC has investigated misleading terms and conditions and violations of platforms' own policies. In In re Snapchat, the platform claimed that users' posted information disappeared completely after a certain period of time; however, through third-party apps and manipulation of users' posts off the platform, posts could be retained. The FTC and Snapchat settled through a consent decree subjecting Snapchat to FTC oversight for 20 years.

The FTC has also investigated Facebook for violating its own privacy policy. To settle FTC charges that it violated a 2012 agreement with the agency, Facebook was ordered to pay a $5 billion penalty and to submit to new restrictions and a modified corporate structure that holds the company accountable for the decisions it makes about its users' privacy.

Unfortunately, none of these measures directly gives individuals more power over their own privacy. Nor do these policies and processes give individuals any right to hold platforms responsible for misleading them through algorithms built on their data, or for intruding on their privacy by collecting data without allowing an opt-out.

Some of the most harmful social media practices today relate to personal privacy. Examples include the collection of personal data; the selling and dissemination of that data through algorithms designed to subtly manipulate our pocketbooks and tastes; the collection and use of data belonging to children; and the design of social media sites to be ever more addictive, all in service of the commercialization of data.

No comprehensive federal privacy scheme currently exists. Previous privacy bills have been few and narrowly tailored to relatively specific circumstances and topics: healthcare and medical data protection under HIPAA, protection of data surrounding video rentals under the Video Privacy Protection Act, and narrow protection for children's data under the Children's Online Privacy Protection Act. All of these schemes are outdated and fall short of meeting the immediate need for broad protection of the widely collected and broadly utilized data flowing from social media.

Current Bills on Privacy

Amid requests from some of the biggest platforms, outcry from the public, and the White House's call for federal privacy regulation, Congress appears poised to act. The 118th Congress has made privacy law a priority this term by introducing several bills related to social media privacy. At least ten bills are currently pending between the House and the Senate, addressing a variety of issues and concerns, from children's data privacy to minimum ages for use and the designation of a new agency to monitor some aspects of privacy.

S. 744 – The Data Care Act of 2023 aims to protect social media users' data privacy by imposing fiduciary duties on the platforms. The original iteration of the bill was introduced in 2021 and failed to receive a vote; it was reintroduced in March of 2023 and is currently pending. Under the act, social media platforms would have duties to reasonably secure users' data from unauthorized access, to refrain from using the data in ways that could foreseeably "benefit the online service provider to the detriment of the end user," and to prevent disclosure of users' data unless the receiving party is bound by the same duties. The bill authorizes the FTC and certain state officials to take enforcement actions upon breach of those duties: states would be permitted to take their own legal action against companies for privacy violations, and the FTC could intervene in those enforcement efforts by imposing fines for violations.

H.R. 2701 – Perhaps the most comprehensive piece of legislation on the House floor is the Online Privacy Act. In 2023, the bill was reintroduced by Democratic Representative Anna Eshoo after an earlier version of the bill failed to receive a vote and died in Congress. The Online Privacy Act aims to protect users by providing individuals rights relating to the privacy of their personal information and by imposing privacy and security requirements on the treatment of personal information. To accomplish this, the bill would establish a new agency, the Digital Privacy Agency, responsible for enforcing those rights and requirements. The new individual privacy rights are broad and include the rights of access, correction, deletion, human review of automated decisions, individual autonomy, the right to be informed, and the right to impermanence, among others. This would be the most comprehensive plan to date, and the establishment of an agency tasked specifically with administering and enforcing privacy laws would be incredibly powerful; the creation of such an agency would be valuable irrespective of whether this bill is passed.

H.R. 821 – The Social Media Child Protection Act is a sister bill to one by a similar name that originated in the Senate. It aims to protect children from the harms of social media by limiting their access to it. Under the bill, social media platforms would be required to verify the age of every user before granting access, whether through submission of a valid identity document or another reasonable verification method, and would be prohibited from allowing users under the age of 16 onto the platform. The bill also requires platforms to establish and maintain reasonable procedures to protect personal data collected from users, and it provides for a private right of action as well as state and FTC enforcement.

S. 1291 – The Protecting Kids on Social Media Act is similar to its counterpart in the House, with slightly less tenacity. It likewise aims to protect children from social media's harms. Under the bill, platforms must verify their users' ages, may not allow a user onto the service until that age is verified, and must bar children under 13 from the platform. The bill also prohibits retaining or using information collected during the age verification process. Platforms must take reasonable steps to require affirmative consent from the parent or guardian of a minor who is at least 13 years old before the minor creates an account, and must reasonably allow the parent to later revoke that consent. The bill further prohibits using data collected from minors for algorithmic recommendations and would require the Department of Commerce to establish a voluntary program for secure digital age verification for social media platforms. Enforcement would be through the FTC or state action.

S. 1409 – The Kids Online Safety Act, proposed by Senator Blumenthal of Connecticut, also aims to protect minors from online harms. Like the Online Safety Bill, it establishes fiduciary duties for social media platforms regarding the children using their sites. The bill requires that platforms act in the best interests of minors using their services, including by mitigating harms that may arise from use, sweeping in online bullying and sexual exploitation. Social media sites would be required to establish and provide access to safeguards, such as settings that restrict access to minors' personal data, and to grant parents tools to supervise and monitor minors' use of the platforms. Critically, the bill establishes a duty for social media platforms to create and maintain research portals for non-commercial purposes, to study the effects that corporations like the platforms have on society.

Overall, these bills reflect Congress's creative thinking and commitment to broad privacy protection for users against social media harms. I believe establishing a separate governing body is a necessary step, since the FTC lacks the powers needed to compel compliance. Recourse for violations on par with the EU's new regulatory scheme, mainly fines in the billions, could also help.

Many of the bills, serving myriad aims, establish new fiduciary duties for the platforms to prevent unauthorized use of data and harms to children. There is real promise in this scheme: duties of loyalty, diligence, and care owed by one party to another have a sound basis in many areas of law and would be readily understood in implementation.

The notion that platforms would need to be vigilant in knowing their content, studying its effects, and reporting those effects may do the most to create a stable future for social media.

Making platforms legally responsible for policing and enforcing their own policies and terms and conditions is another opportunity to further incentivize them. The FTC currently investigates policies that are misleading or unfair, sweeping in the social media sites, but there is an opportunity to make the platforms legally responsible for enforcing their own policies regarding age, hate, and inappropriate content, for example.

What would you like to see considered in Privacy law innovation for social media regulation?

Sharing is NOT Always Caring

Where There’s Good, There’s Bad

Social media’s vast growth over the past several years has attracted millions of users who use these platforms to share content, connect with others, conduct business, and spread news and information. However, social media is a double-edged sword. While it creates communities of people and bands them together, it destroys privacy in the meantime. All of the convenient aspects of social media that we know and love lead to significant exposure of personal information and related privacy risks. Social media companies retain massive amounts of sensitive information regarding users’ online behavior, including their interests, daily activities, and political views. Algorithms are embedded within these functions to promote specific goals of social media companies, such as user engagement and targeted advertising. As a result, the means to achieve these goals conflict with consumers’ privacy concerns.

Common Issues

In 2022, several U.S. state and federal agencies banned their employees from using TikTok on government-issued devices, fearful that foreign governments could acquire confidential information. While much of the information collected through these platforms is voluntarily shared by users, much of it is also tracked using "cookies," and no, you can't have these with a glass of milk! Tracking cookies allow information about users' online browsing activity to be stored and used to target specific interests and personalize content tailored to those likings. Signing up for a social media account and agreeing to the platform's terms permits companies to collect all of this data.
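
For readers curious about the mechanics, here is a minimal sketch, in Python, of how a tracking cookie can recognize the same browser across visits. It is an illustration under assumed names: the "tracker_id" cookie and the logging are hypothetical, not any real platform's implementation.

    # Minimal sketch of cookie-based tracking, for illustration only.
    from http.cookies import SimpleCookie
    import uuid

    def handle_request(request_headers: dict) -> dict:
        """Return response headers, assigning a persistent tracking ID on first visit."""
        cookie = SimpleCookie(request_headers.get("Cookie", ""))
        if "tracker_id" in cookie:
            visitor_id = cookie["tracker_id"].value   # returning browser: recognized
        else:
            visitor_id = str(uuid.uuid4())            # first visit: mint a new ID
        # Every page embedding this tracker logs the same ID, letting a
        # profile of browsing activity accumulate for ad targeting.
        print(f"visitor {visitor_id} viewed this page")
        return {"Set-Cookie": f"tracker_id={visitor_id}; Max-Age=31536000"}

    # Simulate two visits by the same browser: the second is recognized.
    first_response = handle_request({})
    handle_request({"Cookie": first_response["Set-Cookie"].split(";")[0]})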

Social media users leave a "digital footprint" on the internet when they create and use their accounts. Unfortunately, enabling a "private" account does not solve the problem, because data is still retrieved in other ways. For example, engagement with certain posts through likes, shares, comments, buying history, and status updates all increases the likelihood that privacy will be intruded upon.

Two of the most notorious privacy issues on social media are data breaches and data mining. Data breaches occur when individuals with unauthorized access steal private or confidential information from a network or computer system. Data mining on social media is the process by which user information is analyzed to identify specific tendencies, which are subsequently used to inform research and other advertising functions.

Other privacy problems stem from loopholes around the preventive measures already in place. For example, if an individual maintains a private social account but shares something with a friend, others connected with that friend can view the post. Moreover, a person's location can still be determined even when location settings are turned off, since public Wi-Fi and websites can track users' locations by other means.

Taking into account all of these prevailing issues, only a small amount of information is actually protected under federal law. Financial and healthcare transactions, as well as details regarding children, are among the classes of information that receive heightened protection. Most other data gathered through social media can be collected, stored, and used. Social media platforms remain largely unregulated with respect to data privacy and consumer data protection; the United States does have a few laws in place to safeguard privacy on social media, but more stringent ones exist abroad.

Social media platforms are required to implement certain procedures to comply with privacy laws, including obtaining user consent, data protection and security, user rights and transparency, and data breach notifications. Social media platforms typically ask users to agree to their Terms and Conditions to obtain consent and authorization for processing personal data. However, most users are guilty of accepting these terms without actually reading them so that they can quickly get to using the app.

Share & Beware: The Law

Privacy laws are put in place to regulate how social media companies can act on all of the information users share, or don’t share. These laws aim to ensure that users’ privacy rights are protected.

There are two prominent social media laws in the United States. The first is the Communications Decency Act (CDA), which regulates indecency that occurs through computer networks. Nevertheless, Section 230 of the CDA provides broad immunity from any cause of action that would make internet providers, including social media platforms, legally liable for information posted by other users. Accountability for common issues on social media, like data breaches and data misuse, is therefore limited under the CDA. The second is the Children's Online Privacy Protection Act (COPPA). COPPA protects privacy on websites and other online services for children under the age of thirteen. The law prevents social media sites from gathering personal information without first providing written notice of disclosure practices and obtaining parental consent. The challenge remains in actually knowing whether a user is underage, because it is so easy to misrepresent oneself when signing up for an account.

The European Union, by contrast, has the General Data Protection Regulation (GDPR), which grants users certain control over when and how their data is processed. The GDPR contains a set of guidelines that restrict personal data from being disseminated on social media platforms, and it likewise gives internet users a long list of rights in cases where their data is shared and processed. Some of these rights include the ability to withdraw previously given consent, to access the information collected about them, and to delete or restrict personal data in certain situations.

The most similar domestic law to the GDPR is the California Consumer Privacy Act (CCPA), which took effect in 2020. The CCPA regulates what kinds of information social media companies can collect, giving platforms like Google and Facebook much less freedom to harvest user data. The goal of the CCPA is to make data collection transparent and understandable to users.

Laws on the state level are lacking, and many lawsuits have occurred as a result of this deficiency. A class action lawsuit was brought in response to the collection of users' information by Nick.com. These users were all children under the age of thirteen who sued Viacom and Google for violating privacy laws, arguing that the data collected by the website, together with Google's stored data on its users, was personally identifiable information. A separate lawsuit was brought against Facebook for tracking users when they visited third-party websites. The individuals who brought suit claimed that Facebook was able to personally identify and track them through shares and likes when they visited certain healthcare websites, collecting sensitive healthcare information as they browsed these sites, without their consent. However, the court held that users did indeed consent to these actions when they agreed to Facebook's data tracking and data collection policies. The court also held that the data was not subject to the stricter requirements plaintiffs claimed, because it was all available on publicly accessible websites. In other words, public information is fair game for Facebook and many other social media platforms when it comes to third-party sites.

In contrast to these two failed lawsuits, TikTok agreed earlier this year to pay a $92 million settlement resolving twenty-one consolidated lawsuits alleging privacy violations. The claims were substantial, including allegations that the app analyzed users' faces and collected private data from users' devices without their permission.

We are living in a new social media era, one so advanced that it is difficult to fully comprehend. With that being said, data privacy is a major concern for users who spend large amounts of time sharing personal information, whether they realize it or not. Laws are put in place to regulate content and protect users; however, keeping up with the growing presence of social media is not an easy task. Sharing is inevitable, and so are privacy risks.

To share or not to share? That is the question. Will you think twice before using social media?

New York is Protecting Your Privacy

TAKING A STANCE ON DATA PRIVACY LAW

The digital age has brought with it unprecedented complexity surrounding personal data and the need for comprehensive data legislation. Recognizing this gap in legislative protection, New York has introduced the New York Privacy Act (NYPA), and the Stop Addictive Feeds Exploitation (SAFE) For Kids Act, two comprehensive initiatives designed to better document and safeguard personal data from the consumer side of data collection transactions. New York is taking a stand to protect consumers and children from the harms of data harvesting.

Currently under consideration in the Standing Committee on Consumer Affairs and Protection, chaired by Assemblywoman Nily Rozic, the New York Privacy Act was introduced as "An Act to amend the general business law, in relation to the management and oversight of personal data." The NYPA was sponsored by State Senator Kevin Thomas and closely resembles the California Consumer Privacy Act (CCPA), which was finalized in 2019. In passing the NYPA, New York would become just the 12th state to adopt a comprehensive data privacy law protecting state residents.

DOING IT FOR THE DOLLAR

Companies buy and sell the sensitive personal data of millions of users in pursuit of boosting profits. By purchasing personal user data from social media sites, web browsers, and other applications, advertising companies can predict and drive trends that will increase product sales among different target groups.

Social media companies are notorious for selling user data to data collection companies. The data sold can include your name, phone number, payment information, email address, stored videos and photos, photo and file metadata, IP address, networks and connections, messages, videos watched, and advertisement interactions, as well as sensor data and the time, frequency, and duration of your activity on the site. The NYPA targets businesses like these by regulating legal persons that conduct business in the state of New York or that produce products and services aimed at New York residents. The entity that stands to be regulated must:

  • (a) have annual gross revenue of twenty-five million dollars or more;
  • (b) control or process personal data of fifty thousand consumers or more; or
  • (c) derive over fifty percent of gross revenue from the sale of personal data.

The NYPA does more for residents of New York because it places the consumer first: the Act is not restricted to regulating businesses operating within New York but encompasses every resident of New York State who may be subject to targeted data collection, an immense step forward in giving consumers control over their digital footprint.

MORE RIGHTS, LESS FRIGHT

The NYPA works by granting all New Yorkers additional rights regarding how their data is maintained by the controllers to which the Act applies. The comprehensive rights granted to New York consumers include the rights to notice, opt out, consent, portability, correction, and deletion of personal information. The right to notice requires that each controller provide a conspicuous and readily available notice statement describing the consumer's rights, indicating the categories of personal data the controller will collect, where it is collected from, and what it may be used for. The right to opt out allows consumers to opt out of the processing of their personal data for purposes of targeted advertising, the sale of their personal data, and profiling. This gives the consumer an advantage when browsing sites and using apps, as they will be duly informed of exactly what information they are giving up when online.

While all the rights included in the NYPA are groundbreaking for the New York consumer, the importance of the right to consent to sensitive data collection and the right to delete data cannot be overstated. The right to consent requires controllers to conspicuously ask for express consent before collecting sensitive personal data, and it contains a zero-discrimination clause to protect consumers who do not give controllers express consent to use their personal data. The right to delete requires controllers to delete any or all of a consumer's personal data upon request, within 45 days of receiving the request. These two clauses alone can do more for New Yorkers' digital privacy rights than ever before, allowing complete control over who may access and keep sensitive personal data.

BUILDING A SAFER FUTURE

Following the early success of the NYPA, New York announced its comprehensive plan to better protect children from the harms of social media algorithms, which are among the main drivers of personal data collection. Governor Kathy Hochul, State Senator Andrew Gounardes, and Assemblywoman Nily Rozic recently proposed the Stop Addictive Feeds Exploitation (SAFE) For Kids Act, directly targeting social media sites and their algorithms. It has long been suspected that social media usage contributes to worsening mental health in the United States, especially among youths. The SAFE For Kids Act seeks to require parental consent before children can access social media feeds that use algorithms to boost usage.

On top of selling user data, social media sites like Facebook, YouTube, and X/Twitter also use carefully constructed algorithms to push content that the user has expressed interest in, usually based on the profiles they click on or the posts they ‘like’. Social media sites feed user data to algorithms they’ve designed to promote content that will keep the user engaged for longer, which exposes the user to more advertisements and produces more revenue.

Children, however, are particularly susceptible to these algorithms, and depending on the posts they view, they can be exposed to harmful images or content with serious consequences for their mental health. Social media algorithms can show children things they were never meant to see, preying on their naiveté and blind trust, traits that do not mix well with internet use. Distressing posts or controversial images can be plastered across children's feeds whenever the algorithm determines that doing so will drive engagement. Under the SAFE For Kids Act, without parental consent, children on social media sites would see their feeds in chronological order and would only see posts from users they 'follow' on the platform. This change would completely alter the way platforms treat accounts associated with children, ensuring they are not exposed to content they do not seek out themselves. This legislation would build upon the foundations established by the NYPA, opening the door to further regulations that could increase protections for the average consumer and, more importantly, for the average child online.
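
To make the practical difference concrete, here is a minimal sketch, in Python, of the two feed modes the bill distinguishes. The post fields and engagement score are invented for illustration; no platform's actual ranking code is public or this simple.

    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        timestamp: int               # posting time, e.g. seconds since epoch
        predicted_engagement: float  # platform's estimate of how "sticky" the post is

    def algorithmic_feed(posts: list[Post]) -> list[Post]:
        # Default mode: rank all posts, from anyone, by predicted engagement.
        return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

    def chronological_followed_feed(posts: list[Post], followed: set[str]) -> list[Post]:
        # SAFE For Kids mode (no parental consent): only accounts the minor
        # follows, newest first, with no engagement-based ranking.
        followed_posts = [p for p in posts if p.author in followed]
        return sorted(followed_posts, key=lambda p: p.timestamp, reverse=True)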

New Yorkers: If you have ever spent time on the internet, your personal data is out there, but now you have the power to protect it.

When in Doubt, DISCLOSE it Out!

The sweeping transformation of social media platforms over the past several years has given rise to convenient and cost-effective advertising. Advertisers are now able to market their products or services to consumers (i.e., users) at low cost, right at their fingertips… literally! But convenience comes with a few simple and easy rules. Influencers, such as athletes, celebrities, and other high-profile individuals, are trusted by their followers to remain transparent. Doing so does not require anything difficult; in fact, including "Ad" or "#Ad" at the beginning of a post is satisfactory. The question then becomes: who's making these rules?

The Federal Trade Commission (FTC) works to stop deceptive or misleading advertising and provides guidance on how to comply. Under the FTC's rules, individuals have a legal obligation to clearly and conspicuously disclose their material connection to the products, services, brands, and/or companies they promote on their feeds. The FTC highlights one objective component to help users identify an endorsement: a statement made by a speaker whose relationship with the advertiser is such that the statement can be understood to be sponsored by the advertiser. In other words, if the speaker is acting on behalf of the advertiser, the statement will be treated as an endorsement and subject to the guidelines. Several factors determine this, such as compensation, free products, and the terms of any agreement. Two basic principles of advertising law apply to all types of advertising in any medium: 1) there must be a reasonable basis to support claims, and 2) disclosure must be clear and conspicuous. Overall, the FTC works to ensure transparent sponsorship in an effort to maintain consumer trust.

The Breakdown—When, How, & What Else

Influencers should disclose when they have a financial, employment, personal, or family relationship with a brand. Financial relationships are not limited to money: if a brand gives you a free product, disclosure is required even if you were not asked to mention it in a post. Similarly, if a user posts from abroad, U.S. law still applies if it is reasonably foreseeable that U.S. consumers will be affected.

When disclosing your material connection to the brand, make sure that disclosure is easy to see and understand. The FTC has previously disapproved of disclosure in places that are remote from the post itself. For instance, users should not have to press “show more” in the comments section to see that the post is actually an endorsement.

Advertisers and endorsers should also take care not to talk about items they have not yet tried, and should avoid saying that a product was great when they in fact thought it was not. In addition, individuals should not convey information or make claims that are unsupported by actual evidence.

However, not everyone who posts about a brand needs to disclose. If you want to post a Sephora haul or a Crumbl Cookie review, that is okay! As long as a company is not giving you free products or paying you to sponsor it, you are free to post at your leisure, without disclosing.

Now that you realize how seamless disclosure is, it may be surprising that people still fail to do so.

Rule Breakers

In Spring 2020, we saw an uptick in social media posts as most people abided by stay-at-home orders and turned to social media for entertainment. TikTok is deemed particularly addictive, with users spending substantially more time on it than on other apps, such as Instagram and Twitter.

TikTok star Charli D'Amelio spoke positively about the enhancement drink Muse in a Q&A post. She never acknowledged that the brand was paying her to sponsor its product, and she failed to use the platform's content enabling tool, which makes it even easier for users to disclose. D'Amelio is the second most followed account on the platform.

The Teami brand found itself in a similar position when stars like Cardi B and Brittany Renner made unfounded claims that the wellness company's products produced unrealistic health benefits. The FTC instituted a complaint alleging that the company misled consumers into thinking its 30-day detox pack would ensure weight loss, and a subsequent court order prohibited the company from making such unsubstantiated claims.

Still, these influencers were hardly punished, receiving a mere "slap on the wrist" for their inadequate disclosures: warning letters and some bad press.

Challenges in Regulation & Recourse

Section 5(a) of the FTC Act allows the agency to investigate and prevent unfair methods of competition and gives it the authority to seek relief for consumers, including injunctions, restitution, and in some cases civil penalties. However, regulation is challenging because noncompliance is so easy. While endorsers bear the ultimate responsibility to disclose their content, advertising companies are urged to implement procedures that make disclosure more likely. There is a never-ending stream of content on social media to regulate, making it difficult for entities like the FTC to know when the rules are actually being broken.

Users can report undisclosed posts directly through their social media accounts, to their state attorney general's office, or to the FTC. Private parties can also bring suit. In 2022, a travel agency group sued a travel influencer for deceptive advertising after the influencer made false claims, such as being the first woman to travel to every country, and failed to disclose paid promotions on her Instagram and TikTok accounts. The group seeks to enjoin the influencer from advertising without disclosure and to compel corrective measures on her remaining posts that violate the FTC's rules. Social media users are better able to weigh the value of endorsements when they can see the truth behind such posts.

In a world filled with filters, when it comes to advertisements on social media, let’s just keep it real.

Destroying Defamation

The explosion of Fake News across social media sites is destroying plaintiffs' ability to succeed in defamation actions. The recent proliferation of rushed journalism, online conspiracy theories, and the belief that most stories are, in fact, "Fake News" have created a desert of veracity. Widespread public skepticism about even the most mainstream social media reporting means plaintiffs struggle to convince jurors that third parties believed any reported statement to be true, and such proof is necessary for a plaintiff to prove the elements of defamation.

Fake News Today

Fake News is any journalistic story that knowingly and intentionally includes untrue factual statements. Today, many speak of Fake News as a proper noun, and there is no shortage of examples of Fake News and its impact.

      • Pizzagate: During the 2016 Presidential Election, Edgar Madison Welch, 28, read a story on Facebook claiming that Hillary Clinton was running a child trafficking ring out of the basement of a pizzeria. Welch, a self-described vigilante, shot open a locked door of the pizzeria with his AR-15.
      • A study by three MIT scholars found that false news stories spread faster on Twitter than true stories, with the former being 70% more likely to be retweeted than the latter.
      • During the defamation trial of Amber Heard and Johnny Depp, a considerable number of "Fake News" reports circulated across social media platforms, particularly TikTok, Twitter, and YouTube, attacking Ms. Heard at a disproportionately higher rate than Mr. Depp.


What is Defamation?

To establish defamation, a plaintiff must show that the defendant published a false assertion of fact that damaged the plaintiff's reputation. Hyperbolic language, or other indications that a statement was not meant to be taken seriously, is not actionable. Today's understanding that everything on the Internet is susceptible to manipulation destroys defamation.

Because the factuality of a statement is a question of law, a plaintiff must first convince a judge that the offending statement is fact and not opinion. Courts often find that Internet and social media statements are hyperbole or opinion. If a plaintiff succeeds in persuading the judge, the issue of whether the statement defamed the plaintiff goes to the jury. A jury faced with a defamation claim must determine whether the statement of fact harmed the plaintiff's reputation or livelihood to the extent that it caused the plaintiff to incur damages. The prevalence of Fake News creates another layer of difficulty for the Internet plaintiff, who must convince the jury that third parties believed the statement to be true.

Defamation’s Slow and Steady Erosion

Since the 1960s, the judiciary has limited plaintiffs' ability to succeed on defamation claims. The decisions in New York Times Co. v. Sullivan and Gertz v. Robert Welch, Inc. made it more difficult for public figures, and those with limited public figure status, to succeed by requiring them to prove that the defendant acted with actual malice, a standard higher than the mere negligence standard that applies when the plaintiff is not a figure of community interest.

The rise of Internet use, mainly social media, presents plaintiffs with yet another hurdle. Plaintiffs can only succeed if the challenged statement is fact, not opinion, yet judges often find that statements made on the Internet are opinions, not facts. The combined effect of Supreme Court limitations on proof and the increased belief that social media posts are mostly opinion has limited plaintiffs' ability to succeed on defamation claims.

Destroying Defamation

If the Supreme Court and social media have eroded defamation, Fake News has destroyed it. Today, convincing a jury that a false statement purporting to be fact has defamed a plaintiff is difficult, given the dual problems of society's pervasive mistrust of the media and the understanding that information on the Internet is generally opinion, not fact. Fake News sows confusion and makes it almost impossible for jurors to believe any statement has the credibility necessary to cause harm.

To be clear, in some instances Fake News is so intolerable that a jury will find for the plaintiffs. A Connecticut jury found conspiracy theorist Alex Jones liable for defamation based on his assertion that the government had faked the Sandy Hook shootings.

But often, plaintiffs are unsuccessful where the challenged language is conflated with untruths. Fox News successfully defended itself against a lawsuit claiming that it had aired false and deceptive content about the coronavirus, even though its reporting was, in fact, untrue.

Similarly, a federal judge dismissed a defamation case against Fox News over Tucker Carlson's report that the plaintiff had extorted then-President Donald Trump. In reaching that conclusion, the judge observed that Carlson's comments were rhetorical hyperbole and that the reasonable viewer "arrive[s] with the appropriate amount of skepticism." Reports of media success in defending against defamation claims further fuel media mistrust.

The current polarization caused by identity politics is furthering the tendency for Americans to mistrust the media. Sarah Palin announced that the goal of her recent defamation case against The New York Times was to reveal that the “lamestream media” publishes “fake news.”

If jurors believe that no reasonable person could credit a challenged statement as accurate, they cannot find that the allegedly defamatory statement caused harm. An essential element of defamation is that the defendant's remarks damaged the plaintiff's reputation. The large number of people who believe the news is fake, the media's rush to publish, and external attacks on credible journalism have made truth itself contested in the public mind. The potential for defamatory harm is minimal when every news story is questionable. Ultimately, the presence of Fake News is a blight on the tort of defamation and, like the credibility of present-day news organizations, will erode it to the point of irrelevance.

Is there any hope for a world without Fake News?


The Rise of E-personation

Social media allows millions of users to communicate with one another on a daily basis, but do you really know who is behind the computer screen?

As social media continues to expand into the enormous entity we know today, users become ever more susceptible to abuse online. Impersonation through electronic means, often referred to as e-personation, is a rapidly growing trend on social media. E-personation is extremely troublesome because it requires far less information than other typical forms of identity theft: to create a fake social media page, all an e-personator needs is the victim's name and perhaps a profile picture. While creating a fake account is relatively easy for the e-personator, the impact on the victim's life can be detrimental.

E-personation Under State Law

It wasn't until 2008 that New York became the first state to recognize e-personation as a criminally punishable form of identity theft. Under New York law, "a person is guilty of criminal impersonation in the second degree when he … impersonates another by communication by internet website or electronic means with intent to obtain a benefit or injure or defraud another, or by such communication pretends to be a public servant in order to induce another to submit to such authority or act in reliance on such pretense."

Since 2008, other states, such as California, New Jersey, and Texas, have amended their identity theft statutes to include online impersonation as a criminal offense. New Jersey amended its impersonation and identity theft statute in 2014, after an e-personation case revealed that the statute at the time lacked any mention of "electronic communication" as a means of unlawful impersonation. In 2011, New Jersey Superior Court Judge David Ironson in Morris County declined to dismiss an identity theft indictment against Dana Thornton. Ms. Thornton allegedly created a fictitious Facebook page that portrayed her ex-boyfriend, a narcotics detective, unfavorably: posting as her ex, she admitted to hiring prostitutes, using drugs, and even contracting a sexually transmitted disease. Thornton's defense counsel argued that New Jersey's impersonation statute did not apply because online impersonation was not explicitly mentioned in the statute, and therefore Thornton's actions did not fall within the scope of activity the statute proscribes. Judge Ironson disagreed, noting the New Jersey statute is "clear and unambiguous" in forbidding impersonation activities that cause injury and does not need to specify the means by which the injury occurs.

Currently, under New Jersey law, a person is guilty of impersonation or theft of identity if … "the person engages in one or more of the following actions by any means, but not limited to, the use of electronic communications or an internet website:"

    1. Impersonates another or assumes a false identity … for the purpose of obtaining a benefit for himself or another or to injure or defraud another;
    2. Pretends to be a representative of some person or organization … for the purpose of obtaining a benefit for himself or another or to injure or defraud another;
    3. Impersonates another, assumes a false identity or makes a false or misleading statement regarding the identity of any person, in an oral or written application for services, for the purpose of obtaining services;
    4. Obtains any personal identifying information pertaining to another person and uses that information, or assists another person in using the information … without that person’s authorization and with the purpose to fraudulently obtain or attempt to obtain a benefit or services, or avoid the payment of debt … or avoid prosecution for a crime by using the name of the other person; or
    5. Impersonates another, assumes a false identity or makes a false or misleading statement, in the course of making an oral or written application for services, with the purpose of avoiding payment for prior services.

As social media continues to grow, it is likely that more state legislatures will amend their statutes to incorporate e-personation into their impersonation and identity theft statutes.

E-personators' Twitter Takeover

Over the last week, e-personation has erupted into chaos on Twitter. Elon Musk bought Twitter on October 27, 2022, for $44 billion and immediately began firing top Twitter executives, including the chief executive and chief financial officer. With the company on the verge of bankruptcy, Musk needed a plan to generate more subscription revenue. Enter the problematic Twitter Blue subscription: under the Twitter Blue policy, users could purchase a subscription for $8 a month and receive the blue verification check mark next to their Twitter handle.

The unregulated distribution of the blue verification check mark has led to chaos on Twitter by allowing e-personators to run amok. Traditionally, the blue check mark has been a symbol of authentication for celebrities, politicians, news outlets, and other companies; it was created to protect those most susceptible to e-personation. When the rollout of Twitter Blue began on November 9, 2022, the policy did not specify any requirements to verify a user's authenticity beyond payment of the monthly fee.

Shortly after the rollout, e-personators began to take advantage of their newly purchased verification by impersonating celebrities, pharmaceutical companies, politicians, and even the new CEO of Twitter, Elon Musk. For example, comedian Kathy Griffin was one of the first Twitter accounts suspended after Twitter Blue's launch, for changing her Twitter name and profile photo to Elon Musk and impersonating the new CEO. Griffin was not the only Twitter user to impersonate Musk, and in response Musk tweeted, "Going forward, any Twitter handles engaging in impersonation without clearly specifying 'parody' will be permanently suspended."

Musk's threats of permanent suspension did not stop e-personators from trolling on Twitter. One e-personator used their blue check verification to masquerade as Eli Lilly and Company, an American pharmaceutical company: the fake Eli Lilly account tweeted that the company would be providing free insulin to its customers, and the real Eli Lilly account tweeted an apology shortly thereafter. Another e-personator used their verification to impersonate former United States President George W. Bush; the fake Bush account tweeted "I miss killing Iraqis" along with a sad face emoji. The e-personators did not stop there, as many more professional athletes, politicians, and companies were impersonated under the new Twitter Blue subscription policy. An internal Twitter log seen by the New York Times indicated that 140,000 accounts had signed up for the new Twitter Blue subscription. It is unlikely that Musk will be able to discover every e-personator account and remedy this spread of misinformation.

Twitter's Terms and Conditions

Before the rollout of Twitter Blue, Twitter's guidelines included a policy on misleading and deceptive identities. Under Twitter's policy, "you may not impersonate individuals, groups, or organizations to mislead, confuse, or deceive others, nor use a fake identity in a manner that disrupts the experience of others on Twitter." The guidelines further explain that impersonation is prohibited, specifically that "you can't pose as an existing person, group, or organization in a confusing or deceptive manner." Based on the terms of Twitter's guidelines, the recent e-personators are in direct violation of Twitter's policy, but are these users also criminally liable?

Careful, You Could Get a Criminal Record

Social media networks such as Facebook, Instagram, and Twitter have little incentive to protect the interests of individual users because they cannot be held liable for anything their users post. Under Section 230 of the Communications Decency Act, "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." Because of the lack of responsibility placed on social media platforms, victims of e-personation often have a hard time getting a fake online presence removed. Ironically, in order to gain control of an e-personator's fake account, the victim must provide the social media platform with confidential identifying information, while the e-personator effectively remains anonymous.

By now you're probably asking yourself: what about the e-personator's criminal liability? Under some state statutes, like those mentioned above, e-personators can be found criminally liable. However, several barriers affect the effectiveness of these prosecutions. E-personators enjoy great anonymity, so finding the actual person behind the fake account can be difficult. Furthermore, many of the state statutes that criminalize e-personation require proving the perpetrator's intent, which may also impede prosecution. Lastly, social media is a global phenomenon, which means jurisdictional issues will arise when bringing these cases to court. Unfortunately, only a minority of states have amended their impersonation statutes to include e-personation. Hopefully, as social media continues to grow, more states will follow suit and e-personation will be prosecuted more efficiently and effectively. Remember: not everyone on social media is who they claim to be, so be cautious.

Update Required: An Analysis of the Conflict Between Copyright Holders and Social Media Users

Opening

For anyone who is chronically online, as yours truly is, we have all in one way or another seen our favorite social media influencers, artists, commentators, and content creators complain about their problems with the current US intellectual property (IP) system. Be it that their posts are deleted without explanation or that portions of their video files are muted, the combination of factors leading to copyright issues on social media is endless. This, in turn, has a markedly negative impact on free and fair expression on the internet, especially within the context of our contemporary online culture. For better or worse, interaction in society today is intertwined with the services of social media sites, and conflict arises when the interests of copyright holders clash with this reality. Those holders are empowered by byzantine and unrealistic laws that hamper our ability to exist as freely online as we do in real life. While copyright holders have legitimate and fundamental rights that need to be protected, those rights must be balanced with desperately needed reform; people's interaction with society and culture must not be hampered, for that is one of the many foundations of a healthy and thriving society. To understand this, I venture to analyze the current legal infrastructure we find ourselves in.

Current Relevant Law

The laws currently controlling copyright issues on social media are the Copyright Act of 1976 and the Digital Millennium Copyright Act (DMCA). The DMCA is most relevant to our analysis; it gives copyright holders relatively unrestrained power to demand removal of their property from the internet and to punish those using illegal methods to get ahold of it. This broad law, of course, impacted social media sites. Title II of the law added 17 U.S. Code § 512 to the Copyright Act of 1976, creating several safe harbor provisions for online service providers (OSPs), such as social media sites, when hosting content posted by third parties. The most relevant of these safe harbors is 17 U.S. Code § 512(c), which states that an OSP cannot be liable for monetary damages if it meets several requirements and provides copyright holders a quick and easy way to claim their property. The mechanism, known as a "notice and takedown" procedure, varies by social media service and is outlined in each service's terms and conditions (YouTube, Twitter, Instagram, TikTok, Facebook/Meta). Regardless, they all have a complaint form or application that follows the rules of the DMCA and will usually rapidly strike objectionable social media posts by users. 17 U.S. Code § 512(g) does provide the user some leeway through an appeal process, and § 512(f) imposes liability on those who send unjustified takedowns. Nevertheless, a perfect balance of rights is not achieved.

The doctrine of fair use, codified at 17 U.S. Code § 107 via the Copyright Act of 1976, also plays a massive role here. It establishes a legal pathway for the use of copyrighted material for "purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research" without having to acquire rights to the IP from its owner. This legal safety valve has been a blessing for social media users, especially with recent victories like Hosseinzadeh v. Klein, which protected reaction content from DMCA takedowns, and Lenz v. Universal Music Corp., which established that copyright holders must consider fair use when preparing takedowns. Nevertheless, copyright holders still fail to consider those rights, and sites are quick to react to DMCA complaints. Furthermore, the flawed reporting systems of social media sites invite abuse by unscrupulous actors faking true ownership. On top of that, such legal actions can be psychologically and financially intimidating, especially when facing off against a major IP holder, adding to the unbalanced power dynamic between the holder and the poster.

The Telecommunications Act of 1996, though focused primarily on cellular and landline carriers, is also particularly relevant to social media companies in this conflict. At the time of its passing, the internet was still in its infancy, so the Act does not reflect the cultural paradigm we find ourselves in today. Specifically, the contentious Section 230 of the Communications Decency Act (Title V of the 1996 Act) works against social media companies here by carving intellectual property out of its protections entirely. 47 U.S. Code § 230(e)(2) states in no uncertain terms that “nothing in this section shall be construed to limit or expand any law pertaining to intellectual property.” Courts, including in Perfect 10, Inc. v. CCBill LLC, have read this to mean that Section 230 offers no shield against federal intellectual property claims, leaving such companies exposed to liability for user copyright infringement unless they qualify for the DMCA safe harbors. This gap in the protective armor of Section 230 is a great concern to such companies, and it is why they react so strongly to copyright complaints.

What is To Be Done?

Arguably, fixing the issues around copyright on social media is far beyond the capacity of current legal mechanisms. With billions of posts made each day across various sites, comprehensive policing by copyright holders and platforms alike is far beyond reason. It will take serious reform in the socio-cultural, technological, and legal arenas before a true balance of liberty and justice can be established. Perhaps we can start with an understanding by copyright holders not to overreact when their property is posted online. Popularity is key to success in business, so shouldn’t you value the free marketing that comes with your copyrighted property being shared honestly within the cultural sphere of social media? Social media sites can also expand their DMCA case management teams or build tools that let influencers and content creators credit, and even share revenue with, copyright holders. Finally, congressional action is desperately needed, as we have entered a new era that requires new laws. That being said, a balance between the free exchange of ideas and creations and the rights of copyright holders must be the cornerstone of the government’s approach to socio-cultural expression on social media. That is the only way we can progress as an ever more online society.

 

Image: pikisuperstar on Freepik.com, https://www.freepik.com/free-vector/flat-design-intellectual-property-concept-with-woman-laptop_10491685.htm

Shadow Banning Does(n’t) Exist

#mushroom

Recent posts from #mushroom are currently hidden because the community has reported some content that may not meet Instagram’s community guidelines.

 

Dear Instagram, get your mind outta the gutter! Mushrooms are probably one of the most searched hashtags in my Instagram history. It all started when I found my first batch of wild chicken-of-the-woods mushrooms. I wanted to learn more about mushroom foraging, so I consulted Instagram. I knew there were tons of foragers sharing photos, videos, and tips about finding different species. But imagine not being able to find content related to your hobby.

What if you loved eggplant varieties, but nothing came up in the search bar? Perhaps you’re an heirloom eggplant farmer trying to sell your product on social media, yet you’ve only gotten two likes, even though you added #eggplantman to your post. Shadow banned? I think yes.

The deep void of shadow banning is a social media user’s worst nightmare, especially for influencers whose careers depend on engagement. Shadow banning comes with many uncertainties, but there are a few factors most users agree on:

      1. Certain posts and videos remain hidden from other users
      2. It hurts user engagement
      3. It DOES exist

#Shadowbanning

Shadow banning is the act of restricting or censoring a user’s content on social media without notifying the user. It usually occurs when a user posts content deemed inappropriate or in violation of the platform’s guidelines. If a user is shadow banned, the user’s content is visible only to the user and their followers.
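Platforms do not publish their ranking or filtering logic, so any model of shadow banning is speculative. Still, a toy sketch of the behavior just described, where posts stay visible to the author and existing followers but silently vanish from hashtag search, may help. All names and data structures below are hypothetical.

```python
def post_visible_to(viewer: str, author: str, followers: set[str],
                    author_shadow_banned: bool) -> bool:
    # Matches the definition above: a shadow-banned author's posts remain
    # visible only to the author and their existing followers.
    if not author_shadow_banned:
        return True
    return viewer == author or viewer in followers

def hashtag_search(posts: list[dict], shadow_banned: set[str]) -> list[dict]:
    # Posts by shadow-banned authors are silently dropped from results;
    # the author receives no notice, which is why it is so hard to detect.
    return [p for p in posts if p["author"] not in shadow_banned]

posts = [{"author": "forager_jo", "tag": "#mushroom"},
         {"author": "pole_pro", "tag": "#poledancing"}]
print(hashtag_search(posts, shadow_banned={"pole_pro"}))  # only forager_jo
print(post_visible_to("stranger", "pole_pro", {"fan1"}, True))  # False
print(post_visible_to("fan1", "pole_pro", {"fan1"}, True))      # True
```

The key feature of the sketch is that from the banned author’s own vantage point nothing changes, which is exactly what makes the practice so hard to prove.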

Influencers, artists, creators, and business owners are the most vulnerable victims of the shadow banning void, because they depend most on user engagement, growth, and reaching new audiences. As much as it hurts them, it also hurts other users searching for that specific content. There’s no clear way of telling whether you’ve been shadow banned: you don’t get a notice, and you can’t file an appeal to fix your lack of engagement. You will, however, see a decline in engagement, because no one can see your content in their feeds.

According to the head of Instagram, Adam Mosseri, “shadow banning is not a thing.” In an interview, Meta CEO Mark Zuckerberg stated that Facebook has “no policy that is shadow banning.” Even a Twitter blog post stated, “People are asking us if we shadow ban. We do not.” There is no official confirmation that shadow banning exists, but there is evidence it takes place on various social media platforms.

#Shadowbanningisacoverup?

Pole dancing on social media probably would have been deemed inappropriate 20 years ago. That isn’t the case today. Pole dancing is a growing sport and industry, and the stigma associating pole dancing with strippers is shifting as the sport gains popularity. Social media standards, however, may still be stuck in the early 2000s.

In 2019, user posts with hashtags including #poledancing, #polesportorg, and #poledancenation were hidden from Instagram’s Explore page. This affected the many users who connect with each other and share new pole dancing techniques. It also had a huge impact on businesses that rely on the pole community to promote their products and services: pole equipment, pole clothing, pole studios, pole sports competitions, pole photographers, and more.

After a drastic decrease in user engagement, a petition demanding that Instagram stop censoring pole dancing content circulated worldwide. Is pole dancing so controversial that it can’t be shared on social media? I think not. There is so much to learn from sharing information virtually, and Section 230 of the Communications Decency Act supports this.

Section 230, passed in 1996, provides websites limited federal immunity from lawsuits over unlawful content posted by their users. This means that if User X posts illegal content on Twitter, Twitter cannot be sued over User X’s post. Section 230 does not protect the user who posted the content, so User X can still be held accountable.

It is clear that Section 230 embraces the importance of sharing knowledge: Section 230(a)(1) is a congressional finding celebrating the internet as an “extraordinary advance in the availability of educational and informational resources.” So why would Instagram want to shadow ban pole dancers who are simply sharing new tricks and techniques?

The short answer is: It’s inappropriate.

But users want to know: what makes it inappropriate?

Is it the pole? A metal pole itself does not seem so.

Is it the person on the pole? Would visibility change depending on gender?

Is it the tight clothing? Well, I don’t see how it is any different from the 17 bikini photos on my personal profile.

Section 230 also contains a carve-out for sex trafficking content, and this is where the line between appropriate and inappropriate content is drawn: sex trafficking is illegal, but pole dancing is not. Instagram’s own community guidelines support this distinction; under the guidelines, sharing pole dancing content is not a violation. Shadow banning clearly seeks to suppress certain content, and in this case, the pole dancing community was a target.

Cultural expression also collides with shadow banning. In 2020, Instagram shadow banned Caribbean Carnival content. Caribbean Carnival is an elaborate celebration commemorating the abolition of slavery in the West Indies, showcasing ensembles that represent different cultures and countries.

User posts with hashtags including #stluciacarnival, #fuzionmas, and #trinidadcarnival2020 could not be found or viewed by other users. Some people saw this as suppressing culture and harming tourism. Additionally, Facebook and Instagram shadow banned #sikh for almost three months. After extensive user feedback the hashtag was restored, but Instagram never explained how or why it was blocked.

In March 2020, The Intercept obtained internal TikTok documents alluding to shadow banning methods. The documents revealed that moderators were to suppress content depicting users with “‘abnormal body shape,’ ‘ugly facial looks,’ dwarfism, and ‘obvious beer belly,’ ‘too many wrinkles,’ ‘eye disorders[.]’” While this is only a short excerpt of a longer list, it shows that shadow banning may be no coincidence at all.

Does shadow banning exist? What are the pros and cons of shadow banning?

Corporate Use of Social Media: A Fine Line Between What Could-, Would-, and Should-be Posted

 

Introduction

In recent years, social media has taken hold of nearly every aspect of human interaction and turned the way we communicate on its head. Social media apps’ ability to disseminate information instantaneously has changed how many sectors of business operate. From entertainment and social causes to environmental, educational, and financial matters, social media has bewildered the in-house legal departments of companies across all industries. Additionally, the generational gap between the person actually posting for an account and their supervisor has only exacerbated the potential for communications to miss their mark and cause controversy or adverse effects.

These days, most companies have social media accounts, but not all accounts are created equal, and they certainly are not all monitored the same way. In most cases, these accounts are not regulated at all, except by their own internal managers and #CancelCulture. Depending on the product or company, social media managers have done their best to stay abreast of changes in popular hashtags, trends and challenges, and the overall shift from a corporate tone of voice to one of relatability (more Gen-Z-esque, if you will). But with this shift, the rights and implications of corporate speech on social media have been put to the test.

Changes in Corporate Speech on Social Media 

In the last 20 years, corporate use of social media has become a battle for relevance. With the decline of print media, social media apps have emerged as a marketing necessity. Early social media use was geared predominantly towards social purposes; the origins of Facebook, Myspace, and Twitter make clear that these apps were intended for casual, not corporate, communications. That changed with the introduction of LinkedIn, which sparked a dynamic shift towards business and professional use of social media.

Today, social media is used to report on almost every aspect of our lives, from disaster preparation and emergency response, to political updates, to dating and relationships, to customer service; social media truly covers it all. It is also more common nowadays to get backlash for staying silent on social media after a major social or political movement occurs. Social media is also increasingly used for research drawing on geolocation technology, for organizing demonstrations and political unrest, and, in the business context, for sales, marketing, networking, and hiring or recruiting.

These changes are starting to prompt significant conversations in the business world about company speech, regulated disclosures, and First Amendment rights. For example, there is so far minimal research on how financial firms disseminate communications to investor news outlets via social media and how those communications are received. And while some view social media as an opportunity to further this kind of investor outreach, others have expressed concern that disseminating communications in this manner could cause a company to lose control over those communications entirely.

The viral nature of social media lets companies connect easily not just with investors but also with individuals who do not follow the company directly, and who are therefore far less likely to be informed about the company’s prior financial communications and the significance of any changes. This creates risk for a company communicating with investors via social media: messages can spread to uninformed individuals, which could in turn produce adverse consequences for the company in terms of reliance and misleading information.

Corporate Use, Regulations, and Topics of Interest on Social Media 

With the rise of social media coverage of various societal issues, these apps have become a platform for news coverage, political movements, and social concerns, and, for some generations, a platform that replaces traditional news media almost entirely. Specifically, when it comes to the growing interest in ESG-related matters and sustainable business practices, social media serves as a powerful communication tool. For example, the latest Epsilon Icarus Analytics Panel on ESG Sustainability recently reported that the Spanish company Acciona has the highest-resonating ESG content in Spain across its social networks. Acciona demonstrates how a company can lead and fundamentally shape digital communications on ESG-related topics. Its content strategy focuses on brand values, specifically, for Acciona, strong climate-change values, female leadership, diversity, and other cultural and societal changes, underscoring this new age of social media as a business marketing necessity.

Consequently, this shift in the use of social media and in the way we treat corporate speech on these platforms has left room for emerging regulation. Commercial or corporate speech is generally permissible under constitutional free speech rights, so long as the corporation is not making false or misleading statements. Section 230, meanwhile, broadly protects internet content providers from accountability for information disseminated on their platforms; in most contexts, a social media platform will not be held accountable for the consequences of a bad user’s speech. For example, a recent lawsuit against TikTok and its parent company was dismissed, after a young girl died participating in a trending challenge that went awry, because the platform was immune from liability under § 230.

In essence, when it comes to ESG-related topics, the way a company handles its social media, and the actual posts it puts out, can greatly affect its success and reputation, since ESG-focused perspectives touch many aspects of the operation of the business. The type of communication, and the coverage a company gives various issues, can affect its performance over both short-term and long-term horizons, and can drive change in corporate environmental practices, governance, labor and employment standards, human resource management, and more.

With ESG trending, investors, shareholders, and regulators now face serious risk management concerns. Companies must address news about their social responsibilities publicly and frequently as ESG concerns continue to rise. The SEC mandates annual 10-K filings and disclosures of public company activities, including corporate social responsibility reporting, with ESG disclosures arriving as well thanks to recent SEC rulemaking. These disclosures are designed to hold companies accountable to their stakeholders’ expectations and to improve environmental, social, and economic performance.

Conclusion

In conclusion, social media platforms have created an entirely new arena in which corporate speech can be implicated. Companies should proceed cautiously when covering social, political, environmental, and related concerns, and should weigh their methods of information dissemination along with the possible effects their posts may have on business performance and reputation overall.
