Privacy Please: Privacy Law, Social Media Regulation and the Evolving Privacy Landscape in the US

Social media regulation is a touchy subject in the United States. Congress and the White House have proposed, advocated, and voted on various bills aimed at protecting people from data misuse and misappropriation, misinformation, harms suffered by children, and the implications of vast data collection. Some of the most potent concerns about social media stem from the platforms' use and misuse of information, from the method of collection to notice of collection and use of the collected information. Efforts to pass a bill regulating social media have been frustrated, primarily by the First Amendment right to free speech, and Congress has thus far failed to enact meaningful regulation of social media platforms.

The way forward may well be through privacy law. Privacy laws give people some right to control their own personhood, including their data, their right to be left alone, and how and when others see and view them. Privacy law originated in its current form in the late 1800s, with the impetus being freedom from constant surveillance by paparazzi and reporters and the right to control one's own personal information. As technology evolved, our understanding of privacy rights grew to encompass rights in our likeness, our reputation, and our data. Current US privacy laws do not directly address social media, and a struggle is currently playing out among the platforms' vast data collection practices, platform immunity under Section 230, and users' private rights of privacy.

There is very little federal privacy law, and that which does exist is narrowly tailored to specific purposes and circumstances in the form of specific bills. Some states have enacted their own privacy law schemes, with California at the forefront and Virginia, Colorado, Connecticut, and Utah following in its footsteps. In the absence of a comprehensive federal scheme, privacy law is often judge-made, and it offers several private rights of action for a person whose right to be left alone has been invaded in some way. These are tort actions available for one person to bring against another for a violation of their right to privacy.

Privacy Law Introduction

Privacy law policy in the United States is premised on three fundamental personal rights to privacy:

  1. Physical right to privacy – the right to control your own information
  2. Privacy of decisions – such as decisions about sexuality, health, and child-rearing. These are the constitutional rights to privacy. They are typically not about information, but about an act that flows from the decision.
  3. Proprietary privacy – the ability to protect your information from being misused by others in a proprietary sense.

Privacy Torts

Privacy law, as it concerns the individual, gives rise to four separate tort causes of action for invasion of privacy:

  1. Intrusion upon Seclusion – One intentionally intrudes, physically or otherwise, upon the reasonable expectation of seclusion of another, and the intrusion is objectively highly offensive.
  2. Publication of Private Facts – One gives publicity to a matter concerning the private life of another that is not of legitimate concern to the public, and the matter publicized would be objectively highly offensive. The First Amendment provides a strong defense for publication of truthful matters when they are considered newsworthy.
  3. False Light – One gives publicity to a matter concerning another that places the other before the public in a false light, where the false light would be objectively highly offensive and the actor had knowledge of, or acted in reckless disregard as to, the falsity of the publicized matter and the false light in which the other would be placed.
  4. Appropriation of Name and Likeness – One appropriates another's name or likeness to one's own use or benefit. There is no appropriation when a person's picture is used to illustrate a non-commercial, newsworthy article. The appropriation is usually commercial in nature but need not be, and it may be of "identity" more broadly: not the name itself, but the reputation, prestige, social or commercial standing, public interest, or other value in the plaintiff's likeness.

These private rights of action are currently unavailable against social media platforms because of Section 230 of the Communications Decency Act, which provides broad immunity to online providers for content posted on their platforms by others. Section 230 prevents any of the privacy torts from being raised against social media platforms.

The Federal Trade Commission (FTC) and Social Media

Privacy law can implicate social media platforms when their practices become unfair or deceptive to the public, through investigation by the Federal Trade Commission (FTC). The FTC is the only federal agency with both consumer protection and competition jurisdiction across broad sectors of the economy, and it investigates business practices that are unfair or deceptive. Section 5 of the FTC Act, 15 U.S.C. § 45, prohibits "unfair or deceptive acts or practices in or affecting commerce" and grants the FTC broad jurisdiction over the privacy practices of businesses. A trade practice is unfair if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers themselves and is not outweighed by countervailing benefits to consumers or competition. A deceptive act or practice is a material representation, omission, or practice that is likely to mislead a consumer acting reasonably in the circumstances, to the consumer's detriment.

Critically, there is no private right of action under the FTC Act. The FTC cannot impose fines for Section 5 violations but can obtain injunctive relief. By design, the FTC has very limited rulemaking authority, and it looks to consent decrees and procedural, long-lasting relief as an ideal remedy. The FTC pursues several types of misleading or deceptive policies and practices that implicate social media platforms: notice and choice paradigms, broken promises, retroactive policy changes, inadequate notice, and inadequate security measures. Its primary objective is to negotiate a settlement in which the company submits to certain measures of control or oversight by the FTC for a certain period of time. Violations of these agreements can yield additional consequences, including steep fines and vulnerability to class action lawsuits.

Relating to social media platforms, the FTC has investigated misleading terms and conditions and violations of platforms' own policies. In In re Snapchat, the platform claimed that users' posted information disappeared completely after a certain period of time; in fact, through third-party apps and manipulation of users' posts off the platform, posts could be retained. The FTC and Snapchat settled through a consent decree subjecting Snapchat to FTC oversight for 20 years.

The FTC has also investigated Facebook for violating its own privacy policy. To settle FTC charges that it violated a 2012 agreement with the agency, Facebook was ordered to pay a $5 billion penalty and to submit to new restrictions and a modified corporate structure that holds the company accountable for the decisions it makes about its users' privacy.

Unfortunately, none of these measures directly gives individuals more power over their own privacy. Nor do these policies and processes give individuals any right to hold platforms responsible for misleading them through algorithms that use their data, or for intruding on their privacy by collecting data without allowing an opt-out.

Some of the most harmful social media practices today relate to personal privacy: the collection of personal data, the selling and dissemination of data through algorithms designed to subtly manipulate our pocketbooks and tastes, the collection and use of children's data, and the design of social media sites to be ever more addictive, all in service of the commercialization of data.

No comprehensive federal privacy scheme exists. Previous privacy bills have been few and narrowly tailored to relatively specific circumstances and topics: healthcare and medical data protection under HIPAA, protection of video rental data under the Video Privacy Protection Act, and narrow protection for children's data under the Children's Online Privacy Protection Act. All of these schemes are outdated and fall short of meeting the immediate need for broad protection of the widely collected and broadly utilized data flowing from social media.

Current Bills on Privacy

Prompted by requests from some of the biggest platforms, outcry from the public, and the White House's call for federal privacy regulation, Congress appears poised to act. The 118th Congress has made privacy law a priority this term by introducing several bills related to social media privacy. At least ten bills are currently pending between the House and the Senate, addressing issues ranging from children's data privacy to a minimum age for use and the designation of a new agency to monitor some aspects of privacy.

S. 744 – The Data Care Act of 2023 aims to protect social media users' data privacy by imposing fiduciary duties on the platforms. The original iteration of the bill was introduced in 2021 and failed to receive a vote; it was re-introduced in March of 2023 and is currently pending. Under the act, social media platforms would have duties to reasonably secure users' data from access, to refrain from using the data in a way that could foreseeably "benefit the online service provider to the detriment of the end user," and to prevent disclosure of users' data unless the receiving party is also bound by these duties. The bill authorizes the FTC and certain state officials to take enforcement action upon breach of those duties: states could bring their own legal actions against companies for privacy violations, and the FTC could intervene in those enforcement efforts by imposing fines for violations.

H.R. 2701 – Perhaps the most comprehensive piece of legislation on the House floor is the Online Privacy Act. In 2023, the bill was reintroduced by Democrat Anna Eshoo after an earlier version of the bill failed to receive a vote and died in Congress. The Online Privacy Act aims to protect users by providing individuals rights relating to the privacy of their personal information, and it would impose privacy and security requirements on the treatment of personal information. To accomplish this, the bill would establish a new agency, the Digital Privacy Agency, responsible for enforcing those rights and requirements. The new individual privacy rights are broad and include rights of access, correction, and deletion, human review of automated decisions, individual autonomy, the right to be informed, and the right to impermanence, among others. This would be the most comprehensive plan to date. The establishment of an agency tasked specifically with administering and enforcing privacy laws would be incredibly powerful, and its creation would be valuable irrespective of whether this bill is passed.

H.R. 821 – The Social Media Child Protection Act is a sister bill to one by a similar name that originated in the Senate. This bill aims to protect children from the harms of social media by limiting children's access to it. Under the bill, social media platforms would be required to verify the age of every user before granting access, via submission of a valid identity document or another reasonable verification method, and would be prohibited from allowing users under the age of 16 onto the platform. The bill also requires platforms to establish and maintain reasonable procedures to protect the personal data they collect from users. It provides a private right of action as well as state and FTC enforcement.

S. 1291 – The Protecting Kids on Social Media Act is similar to its counterpart in the House, with slightly less tenacity. It likewise aims to protect children from social media's harms. Under the bill, platforms must verify users' ages, must not allow a user onto the service until their age has been verified, and must limit access to the platform for young children. The bill also prohibits retention and use of information collected during the age verification process. Platforms must take reasonable steps to require affirmative consent from the parent or guardian of a minor who is at least 13 years old before a minor account is created, and must reasonably allow the parent to later revoke that consent. The bill also prohibits the use of data collected from minors for algorithmic recommendations, and it would require the Department of Commerce to establish a voluntary program for secure digital age verification for social media platforms. Enforcement would be through the FTC or state action.

S. 1409 – The Kids Online Safety Act, proposed by Senator Blumenthal of Connecticut, also aims to protect minors from online harms. Like the Data Care Act, it establishes fiduciary duties for social media platforms regarding children using their sites. The bill requires that platforms act in the best interests of minors using their services, including by mitigating harms that may arise from use, sweeping in online bullying and sexual exploitation. Social media sites would be required to establish, and provide access to, safeguards such as settings that restrict access to minors' personal data, and to grant parents tools to supervise and monitor minors' use of the platforms. Critically, the bill establishes a duty for social media platforms to create and maintain research portals for non-commercial purposes to study the effect that corporations like the platforms have on society.

Overall, these bills reflect Congress's creative thinking and commitment to broad privacy protection for users against social media harms. I believe the establishment of a separate governing body, apart from the FTC, which lacks the powers needed to compel compliance, is a necessary step. Recourse for violations on par with the EU's new regulatory scheme, chiefly fines in the billions, could also help.

Many of the bills, toward myriad aims, establish new fiduciary duties for the platforms in preventing unauthorized use and harms to children. There is real promise in this scheme: establishing duties of loyalty, diligence, and care owed by one party to another has a sound basis in many areas of law and would be readily understood in implementation.

The notion that platforms would need to be vigilant in knowing their content, studying its effects, and reporting those effects may do the most to create a stable future for social media.

Making platforms legally responsible for policing and enforcing their own policies and terms and conditions is another opportunity to further incentivize them. The FTC already investigates policies that are misleading or unfair, sweeping in the social media sites, but there is an opportunity to make the platforms legally responsible for enforcing their own policies on, for example, age, hate, and inappropriate content.

What would you like to see considered in Privacy law innovation for social media regulation?

Sharing is NOT Always Caring

Where There’s Good, There’s Bad

Social media's vast growth over the past several years has attracted millions of users who use these platforms to share content, connect with others, conduct business, and spread news and information. However, social media is a double-edged sword: while it creates communities and bands people together, it erodes privacy in the process. All of the convenient aspects of social media that we know and love lead to significant exposure of personal information and related privacy risks. Social media companies retain massive amounts of sensitive information about users' online behavior, including their interests, daily activities, and political views. Algorithms are embedded within these functions to promote the companies' goals, such as user engagement and targeted advertising, and the means of achieving those goals conflict with consumers' privacy interests.

Common Issues

In 2022, several U.S. state and federal agencies banned their employees from using TikTok on government-subsidized devices, fearful that foreign governments could acquire confidential information. While much of the information collected through these platforms is voluntarily shared by users, much of it is also tracked using "cookies," and you can't have these with a glass of milk! Tracking cookies allow information about users' online browsing activity to be stored and used to target specific interests and personalize content tailored to those likings. Signing up for a social account and agreeing to the platform's terms permits companies to collect all of this data.

Social media users leave a "digital footprint" on the internet when they create and use their accounts. Unfortunately, enabling a "private" account does not solve the problem, because data is still retrieved in other ways. For example, engagement through likes, shares, comments, buying history, and status updates all increases the likelihood that privacy will be intruded upon.

Two of the most notorious issues related to privacy on social media are data breaches and data mining. Data breaches occur when individuals with unauthorized access steal private or confidential information from a network or computer system. Data mining on social media is the process by which user information is analyzed to identify specific tendencies, which are subsequently used to inform research and other advertising functions.

Other privacy issues arise from loopholes around the preventive measures already in place. For example, if an individual with a private account shares something with a friend, others connected to that friend can view the post. Moreover, a person's location can still be discovered even when location settings are turned off: public Wi-Fi networks and websites can track users' locations by other means.

Taking all of these prevailing issues into account, only a small amount of information is actually protected under federal law. Financial and healthcare transactions, as well as details regarding children, are among the classes of information that receive heightened protection. Most other data gathered through social media can be collected, stored, and used. Social media platforms remain largely unregulated with respect to data privacy and consumer data protection; the United States has a few laws in place to safeguard privacy on social media, but more stringent ones exist abroad.

Social media platforms are required to implement certain procedures to comply with privacy laws, including obtaining user consent, protecting and securing data, honoring user rights and transparency, and providing data breach notifications. Platforms typically ask users to agree to their Terms and Conditions to obtain consent and authorization for processing personal data. However, most users accept without actually reading these terms so that they can quickly get to using the app.

Share & Beware: The Law

Privacy laws are put in place to regulate how social media companies can act on all of the information users share, or don’t share. These laws aim to ensure that users’ privacy rights are protected.

There are two prominent social media laws in the United States. The first is the Communications Decency Act (CDA), which regulates indecency that occurs through computer networks. However, Section 230 of the CDA provides broad immunity against any cause of action that would hold internet providers, including social media platforms, legally liable for information posted by other users. Accountability for common issues on social media, like data breaches and data misuse, is therefore limited under the CDA.

The second is the Children's Online Privacy Protection Act (COPPA). COPPA protects privacy on websites and other online services for children under the age of thirteen. The law prevents social media sites from gathering personal information without first providing written notice of disclosure practices and obtaining parental consent. The challenge remains in actually knowing whether a user is underage, because it is so easy to misrepresent oneself when signing up for an account.

The European Union, by contrast, has the General Data Protection Regulation (GDPR), which grants users certain control over when and how their data is processed. The GDPR contains a set of guidelines that restrict personal data from being disseminated on social media platforms, and it gives internet users a long list of rights where their data is shared and processed, including the ability to withdraw previously given consent, to access the information collected from them, and to delete or restrict personal data in certain situations.

The most similar domestic law to the GDPR is the California Consumer Privacy Act (CCPA), which took effect in 2020. The CCPA regulates what kinds of information social media companies can collect, giving platforms like Google and Facebook much less freedom in harvesting user data. The goal of the CCPA is to make data collection transparent and understandable to users.

Laws at the state level are lacking, and many lawsuits have resulted from this deficiency. A class action was brought over the collection of users' information by Nick.com; the users, all children under the age of thirteen, sued Viacom and Google for violating privacy laws, arguing that the data collected by the website, together with the data Google stored about its users, constituted personally identifiable information. A separate lawsuit was brought against Facebook for tracking users when they visited third-party websites. The plaintiffs claimed that Facebook was able to personally identify and track them through shares and likes when they visited certain healthcare websites, collecting sensitive healthcare information as they browsed, without their consent. However, the court held that users did consent to these practices when they agreed to Facebook's data tracking and data collection policies. The court also held that this data was not subject to the stricter requirements the plaintiffs claimed applied, because it was all available on publicly accessible websites. In other words, public information is fair game for Facebook and many other social media platforms when it comes to third-party sites.

In contrast to these two failed lawsuits, TikTok agreed earlier this year to pay a $92 million settlement resolving twenty-one consolidated lawsuits over privacy violations. The claims were substantial, including allegations that the app analyzed users' faces and collected private data from users' devices without their permission.

We are living in a new social media era, one so advanced that it is difficult to fully comprehend. Data privacy is a major concern for users who spend large amounts of time sharing personal information, whether they realize it or not. Laws are put in place to regulate content and protect users; however, keeping up with the growing presence of social media is not an easy task. Sharing is inevitable, and so are privacy risks.

To share or not to share? That is the question. Will you think twice before using social media?

Destroying Defamation

The explosion of Fake News across social media sites is destroying plaintiffs' ability to succeed in defamation actions. The recent proliferation of rushed journalism, online conspiracy theories, and the belief that most stories are, in fact, "Fake News" has created a desert of veracity. Widespread public skepticism about even the most mainstream social media reporting means plaintiffs struggle to convince jurors that third parties believed any reported statement to be true. Such proof is necessary for a plaintiff to prove the elements of defamation.

Fake News Today

Fake News is any journalistic story that knowingly and intentionally includes untrue factual statements. Today, many speak of Fake News as a noun. There is no shortage of examples of Fake News and its impact.

      • Pizzagate: During the 2016 presidential election, Edgar Madison Welch, 28, read a story on Facebook claiming that Hillary Clinton was running a child trafficking ring out of the basement of a pizzeria. Welch, a self-described vigilante, shot open a locked door of the pizzeria with his AR-15.
      • A study by three MIT scholars found that false news stories spread faster on Twitter than true stories, with the former being 70% more likely to be retweeted than the latter.
      • During the defamation trial of Amber Heard and Johnny Depp, a considerable number of "Fake News" reports circulated across social media platforms, particularly TikTok, Twitter, and YouTube, attacking Ms. Heard at a disproportionately higher rate than Mr. Depp.


What is Defamation?

To establish defamation, a plaintiff must show the defendant published a false assertion of fact that damages the plaintiff’s reputation. Hyperbolic language or other indications that a statement was not meant to be taken seriously are not actionable. Today’s understanding that everything on the Internet is susceptible to manipulation destroys defamation.

Because the factuality of a statement is a question of law, a plaintiff must first convince a judge that the offending statement is fact and not opinion. Courts often find that Internet and social media statements are hyperbole or opinion. If a plaintiff succeeds in persuading the judge, the question of whether the statement defamed the plaintiff goes to the jury, which must determine whether the statement of fact harmed the plaintiff's reputation or livelihood to the extent that it caused the plaintiff to incur damages. The prevalence of Fake News creates another layer of difficulty for the Internet plaintiff, who must convince the jury that third parties took the statement to be true.

Defamation’s Slow and Steady Erosion

Since the 1960s, the judiciary has limited plaintiffs' ability to succeed on defamation claims. The decisions in New York Times v. Sullivan and Gertz v. Robert Welch, Inc. made it harder for public figures, and those with limited public figure status, to succeed by requiring them to prove the defendant acted with actual malice, a standard higher than the mere negligence standard that applies to individuals who are not of community interest.

The rise of Internet use, mainly social media, presents plaintiffs with yet another hurdle. Plaintiffs can only succeed if the challenged statement is fact, not opinion, yet judges routinely find that statements made on the Internet are opinions, not facts. The combined effect of Supreme Court limitations on proof and the growing belief that social media posts are mostly opinion has limited plaintiffs' ability to succeed on defamation claims.

Destroying Defamation

If the Supreme Court and social media have eroded defamation, Fake News has destroyed it. Today, convincing a jury that a false statement purporting to be fact has defamed a plaintiff is difficult, given the dual problems of society's mistrust of the media and the understanding that information on the Internet is generally opinion, not fact. Fake News sows confusion and makes it almost impossible for jurors to believe any statement has the credibility necessary to cause harm.

To be clear, in some instances Fake News is so intolerable that a jury will find for the plaintiff. A Connecticut jury found conspiracy theorist Alex Jones liable for defamation based on his assertion that the government had faked the Sandy Hook shootings.

But often, plaintiffs are unsuccessful where the challenged language is conflated with untruths. Fox News successfully defended a lawsuit claiming it had aired false and deceptive content about the coronavirus, even though its reporting was, in fact, untrue.

Similarly, a federal judge dismissed a defamation case against Fox News over Tucker Carlson's report that the plaintiff had extorted then-President Donald Trump. In reaching this conclusion, the judge observed that Carlson's comments were rhetorical hyperbole and that the reasonable viewer "arrive[s] with the appropriate amount of skepticism." Reports of media success in defending against defamation claims further fuel media mistrust.

The current polarization caused by identity politics is furthering the tendency for Americans to mistrust the media. Sarah Palin announced that the goal of her recent defamation case against The New York Times was to reveal that the “lamestream media” publishes “fake news.”

If jurors believe that no reasonable person could credit a challenged statement as accurate, they cannot find that the allegedly defamatory statement caused harm; an essential element of defamation is that the defendant's remarks damaged the plaintiff's reputation. The large number of people who believe the news is fake, the media's rush to publish, and external attacks on credible journalism have made truth itself a problem for society. The potential for defamatory harm is minimal when every news story is questionable. Ultimately, Fake News is a blight on the tort of defamation and, like the credibility of present-day news organizations, will erode it to the point of irrelevance.

Is there any hope for a world without Fake News?


Corporate Use of Social Media: A Fine Line Between What Could-, Would-, and Should-be Posted


Introduction

In recent years, social media has taken hold of nearly every aspect of human interaction and turned the way we communicate on its head. Social media apps' capability to disseminate information instantaneously has affected the way many sectors of business operate. Across industries (entertainment, social, environmental, educational, and financial), social media has bewildered the legal departments of in-house general counsel. Additionally, the generational gap between the person actually posting for an account and their supervisor has only exacerbated the potential for communications to miss their mark and cause controversy or adverse effects.

These days, most companies have social media accounts, but not all accounts are created equal, and they certainly are not all monitored the same. In most cases, these accounts are not regulated at all except by their own internal managers and #CancelCulture. Depending on the product or company, social media managers have done their best to stay abreast of changes in popular hashtags, trends, and challenges, and of the overall shift from a corporate tone of voice to one of relatability (more Gen-Z-esque, if you will). But with this shift, the rights and implications of corporate speech on social media have been put to the test.

Changes in Corporate Speech on Social Media 

In the last 20 years, corporate use of social media has become a battle for relevance. With the decline of print media, social media and its apps have emerged as a marketing necessity. Early social media use was predominantly geared toward social purposes: the origins of Facebook, Myspace, and Twitter make clear that these apps were intended for superficial uses, not corporate communications. This all changed with the introduction of LinkedIn, which sparked a dynamic shift toward business and professional use of social media.

Today social media is used to report on almost every aspect of our lives: disaster preparation and emergency response, political updates, dating and relationships, and customer-service tasks. Social media truly covers it all. It is also more common nowadays to face backlash for not speaking out on social media after a major social or political movement occurs. Social media is also increasingly being used for research with geolocation technology, for organizing demonstrations and political unrest, and, in the business context, for sales, marketing, networking, and hiring or recruiting.

These changes are starting to prompt significant conversations in the business world about company speech, regulated disclosures, and First Amendment rights. For example, there is so far minimal research on how financial firms disseminate communications to investor news outlets via social media and how those communications are received. And while some view social media as an opportunity to expand this kind of investor outreach, others have expressed concern that disseminating communications in this manner could cause a company to lose control over them entirely.

The viral nature of social media allows companies to connect more easily not just with investors but also with individuals who do not directly follow the company and are therefore far less likely to be informed about the company's prior financial communications and the significance of any changes. This creates risk for a company's investor communications via social media: posts can spread and reach uninformed individuals, which could in turn produce adverse consequences for the company with respect to reliance and misleading information.

Corporate Use, Regulations, and Topics of Interest on Social Media 

With the rise of social media coverage of various societal issues, these apps have become a platform for news coverage, political movements, and social concerns and, for some generations, a platform that replaces traditional news media almost entirely. Specifically, as interest in ESG-related matters and sustainable business practices grows, social media serves as a powerful tool for communicating information. For example, the latest Epsilon Icarus Analytics Panel on ESG Sustainability recently reported that the Spanish company Acciona has Spain's highest-resonating ESG content across its social networks. Acciona demonstrates how a company can lead and fundamentally shape digital communications on ESG-related topics. Its content strategy focuses on brand values, and for Acciona specifically, strong climate-change values, female leadership, diversity, and other cultural and societal changes, demonstrating this new age of social media as a business marketing necessity.

Consequently, this shift in the use of social media and the way we treat corporate speech on these platforms has left room for emerging regulation. Commercial or corporate speech is generally protected under the First Amendment, so long as the corporation is not making false or misleading statements. Section 230, meanwhile, provides broad protection to internet content providers against accountability for information disseminated on their platforms. In most contexts, social media platforms will not be held accountable for the consequences of a bad user's speech. For example, a recent lawsuit against TikTok and its parent company was dismissed after a young girl died participating in a trending challenge that went awry, because under § 230 the platform was immune from liability.

In essence, when it comes to ESG-related topics, the way a company handles its social media and the posts it actually puts out can greatly affect the company's success and reputation, as ESG-focused perspectives often touch many aspects of the business's operations. The type of communication, and the coverage of various issues, can impact a company's performance in both the short and long term, with the capability to effectuate change in corporate environmental practices, governance, labor and employment standards, human resource management, and more.

With ESG trending, investors, shareholders, and regulators now face serious risk-management concerns. Companies must now address news concerning their social responsibilities more publicly and more frequently as ESG concerns continue to rise. The SEC mandates disclosure of public company activities in annual 10-K filings and, thanks to a recent rule promulgation, ESG disclosures as well. These disclosures are designed to hold companies accountable and improve their environmental, social, and economic performance with respect to their stakeholders' expectations.

Conclusion

In conclusion, social media platforms have created an entirely new arena in which corporate speech is implicated. Companies should proceed cautiously when covering social, political, environmental, and related concerns, considering both their methods of information dissemination and the possible effects their posts may have on business performance and reputation overall.

A Uniquely Bipartisan Push to Amend/Repeal CDA 230

Last month, I wrote a blog post about the history and importance of the Communications Decency Act, section 230 (CDA 230). I ended that blog post by acknowledging the recent push to amend or repeal section 230 of the CDA. In this blog post, I delve deeper into the politics behind the push to amend or repeal this legislation.

“THE 26 WORDS THAT SHAPED THE INTERNET”

If you are unfamiliar with CDA 230, it is the primary legislation governing the internet world. Also known as "the 26 words that shaped the internet," the act specifically articulates Congress's view that the internet has been able to flourish thanks to a "minimum of government regulation." This language has resulted in an unregulated internet, ultimately leading to problems concerning misinformation.

Additionally, CDA 230(c)(1) limits civil liability for posts that social media companies publish. This has caused problems because social media companies lack motivation to filter and censor posts that contain misinformation.

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230).

Section 230's liability shield has been extended far beyond Congress's original intent, which was to protect social media companies against defamation claims. These features of the legislation have resulted in a growing call to update Section 230.

In this day and age, an idea or movement rarely gains bipartisan support. Interestingly, though, amending or repealing Section 230 has gained recent bipartisan support. As expected, however, each party has differing reasons why the law should be changed.

BIPARTISAN OPPOSITION

Although the two political parties agree that the legislation should be amended, their reasoning stems from different places. Republicans tend to criticize CDA 230 for allowing social media companies to selectively censor conservative actors and posts. In contrast, Democrats criticize the law for allowing social media companies to disseminate false and deceptive information.

DEMOCRATIC OPPOSITION

On the Democratic side of the aisle, President Joe Biden has repeatedly called for Congress to repeal the law. In an interview with The New York Times, President Biden was asked about his personal view of CDA 230, to which he replied…

“it should be revoked. It should be revoked because it is not merely an internet company. It is propagating falsehoods they know to be false, and we should be setting standards not unlike the Europeans are doing relative to privacy. You guys still have editors. I’m sitting with them. Not a joke. There is no editorial impact at all on Facebook. None. None whatsoever. It’s irresponsible. It’s totally irresponsible.”

House Speaker Nancy Pelosi has also voiced opposition, calling CDA 230 “a gift” to the tech industry that could be taken away.

The left has often credited the law with fueling misinformation campaigns, like Trump's voter-fraud theory and false COVID information. In response, social media platforms began marking certain posts as unreliable. That response, in turn, fueled Republicans' opposition to Section 230.

REPUBLICAN OPPOSITION

Former President Trump voiced his opposition to CDA 230 numerous times. He first called for repeal of the legislation in May 2020, after Twitter flagged two of his tweets regarding mail-in voting with a warning label that read "Get the facts about mail-in ballots." In December 2020, while still in office, Trump threatened to veto the National Defense Authorization Act, the annual defense funding bill, if CDA 230 was not revoked. The former president's opposition was so strong that he issued an Executive Order in May 2020 urging the government to revisit CDA 230. Within the order, he wrote…

“Section 230 was not intended to allow a handful of companies to grow into titans controlling vital avenues for our national discourse under the guise of promoting open forums for debate, and then to provide those behemoths blanket immunity when they use their power to censor …”

The executive order also asked the Federal Communications Commission to write regulations that would remove protections for companies that "censored" speech online. Although the order didn't technically affect CDA 230 and was later revoked by President Biden, it drew increased attention to this archaic legislation.

LONE SUPPORTERS

Support for the law has not completely vanished, however. As expected, many social media giants support leaving CDA 230 untouched. The Internet Association, an industry group representing some of the largest tech companies like Google, Facebook, Amazon, and Microsoft, recently announced that the “best of the internet would disappear” without section 230, warning that it would lead to numerous companies being subject to an array of lawsuits.

In a Senate Judiciary hearing in October 2020, Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey warned that revoking Section 230 could…

“collapse how we communicate on the Internet.”

However, Mark Zuckerberg took a more moderate position as the hearing continued, telling Congress that he thought lawmakers should update the law.

Facebook has taken a more moderate approach by acknowledging that Section 230 should be updated, likely in response to public pressure born of increased awareness. Regardless, it signals a real chance that Section 230 will be updated in the future, since Facebook is one of the largest social media companies protected by the law. A complete repeal would have such major impacts, however, that that scenario seems unlikely. Nevertheless, growing calls for change and a Democratic-controlled Congress point to a likelihood of future revision of the section.

DIFFERING OPINIONS

Although both sides of Washington, and even some social media companies, agree the law should be amended, the two sides differ greatly on how to change it.

As mentioned before, President Biden has voiced his support for repealing CDA 230 altogether. Alternatively, senior members of his party, like Nancy Pelosi have suggested simply revising or updating the section.

Republican Senator Josh Hawley recently introduced legislation to amend Section 230. The proposed legislation would require companies to prove a "duty of good faith" when moderating their sites in order to receive Section 230 immunity. The legislation also included a $5,000 fee for companies that do not comply.

Adding to the confusion of the section 230 debate, many fear the possible implications of repealing or amending the law.

FEAR OF CHANGE

Because CDA 230 has been referred to as "the First Amendment of the internet," many people fear that repealing the section altogether would limit free speech online. Although President Biden has voiced support for this approach, it seems unlikely to happen, given the massive implications it would carry.

One major implication of repealing or amending CDA 230 is that it could open the door to numerous lawsuits against social media companies. Not only would major platforms be affected; even smaller companies like Slice could become the subject of defamation litigation simply for allowing reviews to be posted on their websites. This could mean fewer social media platforms, as some would not be able to afford the legal fees. Many fear these companies would further censor online posts for fear of being sued, which could also raise costs for the platforms. In contrast, companies could react by allowing anything and everything to be posted, creating an unwelcoming online environment. That would stand in stark contrast to Congress's original intent in creating the CDA: to protect children from seeing indecent posts on the internet.

FUTURE CHANGE?

Because of the intricacy of the internet and the archaic nature of CDA 230, there are many differing opinions on how to successfully fix the problems the section creates, and many fears about the consequences of getting rid of the legislation. Are there any revisions you can think of that could successfully address the Republicans' main concern, censorship? Can you think of any solutions for the Democrats' concern of limiting the spread of misinformation? Do you think there is any chance that Section 230 will be repealed altogether? If the legislation were repealed, would new legislation need to be created to replace CDA 230?

 

Should Social Media Be Used as a Sentencing Tool?

Mass Incarceration in the US – A Costly Issue

The United States has a costly over-incarceration issue. As of May 2021, the United States has the highest rate of incarceration in the world with 639 prisoners per 100,000 of the national population. New York State alone has more prisoners than the entire country of Canada. In 2016, the US Government spent over $88 billion on prisons, jails, parole, and probation systems. Not to mention the social cost of incarcerating nearly 1% of our entire adult population. Alternative sentences can provide a substitute for costly incarceration.


What Are Alternative Sentences?

Typically, punishment for a crime is imprisonment. Alternative sentences are sentences other than imprisonment, such as:

  • community service,
  • drug rehabilitation programs,
  • probation, and
  • mental health programs.

While many generalizations about alternative sentences cannot be made, as the results vary by program and location, alternative sentences can and do keep people out of the overcrowded, problematic prison system in the US.

Could Social Media Play a Part in Alternative Sentencing?

In June 2021, a tourist in Hawaii posted a video of herself on TikTok touching a monk seal. The video went viral, and copycats hopped on the trend of poking wildlife for views. Hawaiian people, outraged, called for enforcement action and local media outlets echoed their call. Eventually, the Hawaii Governor released a statement that people who messed with local wildlife would be “prosecuted to the fullest extent of the law.”


There are essentially three avenues of prosecution for interfering with wildlife: in federal court, state court, or civil court through the National Oceanic and Atmospheric Administration. Disturbing wildlife is a misdemeanor under federal law, but it’s a felony under state law, with a maximum penalty of five years in prison and a $10,000 fine. However, enforcement is unlikely, even after the Governor’s proclamation. Additionally, when enforcement does take place, it often happens out of the public eye. This imbalance of highly publicized crime and underpublicized enforcement led to a suggestion by Kauai Prosecuting Attorney Justin Kollar.

Kollar suggested sentencing criminals like the Hawaiian tourist to community service that would be posted on social media. Kollar looked to Hawaii’s environmental court as a potential model. Established in 2014 for the purpose of adjudicating environmental and natural resource violations, the environmental court has more sentencing tools at its disposal. For example, the court can sentence people to work with groups that do habitat restoration.

According to Kollar, requiring criminal tourists to take time out from their vacation to work with an environmental group — and possibly publicizing the consequence on social media — would not only be a more productive and just penalty, it would also create a positive and contrite image to spread across the internet. The violators would have an opportunity to become more educated and understand the harm they caused. Kollar wants people to learn from their mistakes, address the harm they caused, and take responsibility for their actions.

In an age when many crimes are visible on social media, what would be the pros and cons of using social media as a sentencing tool?

Some Pros and Cons of Using Social Media as a Sentencing Tool

In law school, we’re taught the theories of punishment, but not the consequences of punishment. While it’s important to think about the motivation for punishment, it’s equally, if not more, important to think about what happens because of punishment. In the case of using social media as a sentencing tool, there would likely be pros and cons.

One pro of using social media to publicize enforcement would be a rebalancing of the scale of crime v. enforcement publicity. This rebalance could help prevent vigilante justice from occurring when there is too big of a perceived gap between crime and enforcement. For example, when the TikToker posted her crime, she began to receive death threats. Many Hawaiians are fed up with their environment being exploited for financial profits. The non-enforcement and bold display of a wildlife crime led them to want to take matters into their own hands. In a situation like this, society does not benefit, the criminal does not learn from or take responsibility for their actions, and the victim is not helped.

An alternative sentence of wildlife-related community service publicized on social media could have benefited society because there is justice being done in a publicly known way that does not contribute to costly mass incarceration; helped the criminal learn from and take responsibility for their actions without being incarcerated; and, helped the victim, the environment, via the actual work done.

Additionally, this type of sentence falls into the category of restorative justice. Restorative Justice (RJ) is “a system of criminal justice which focuses on the rehabilitation of offenders through reconciliation with victims and the community at large.” The social media addition to an alternative sentence could provide the reconciliation with the “community at large” piece of the RJ puzzle. This would be a large pro, as RJ has been shown to lower recidivism rates and help victims.

While these pros are appealing, it is important to keep in mind that social media is a powerful tool that can facilitate far-reaching and lasting stigmatization of people. Before the age of social media and Google, a person's criminal record could only be found in state-sponsored documents or small write-ups in a newspaper. As social scientists Sarah Lageson and Shadd Maruna put it, "although these records were 'public,' they often remained in practical obscurity due to access limitations." Today, any indiscretion, or presumed and unproven indiscretion in the case of online mug shots and police use of social media, can be readily found with a quick search. This can increase recidivism rates and make it harder for people with a criminal record to build relationships, find housing, and gain employment. Because stigmatization is part of punishment, the ready availability of criminal records results in punishments that do not fit many crimes. Using social media as a sentencing tool could make the stigmatization situation worse, a huge con.

Perhaps there is a middle ground. To protect people from long-term stigmatization, faces and other identifying features could be blurred prior to publication. Similarly, identifying information, like names, could be excluded from the posts. By keeping the perpetrators anonymous, the scale of crime v. enforcement publicity could be rebalanced, the community aspect of RJ could be accomplished, and harmful stigmatization could be avoided. To completely avoid the possibility of stigmatization via social media postings, the program coordinators could post adjacent content. For example, they could post a before and after of the service project, completely leaving out the violators, while still publicizing enforcement.

Because it is a new idea, any iteration of using social media as a sentencing tool should be studied intensely for its consequences for society, the criminal, and the victim.

 

Do you think social media should be used as a sentencing tool?

The Alarming Side of YouTube

Social media has become an integral part of an individual's life. From Facebook to Twitter, Instagram, and Snapchat to the latest addition, TikTok, social media has made its way into people's lives and now occupies the same place as eating, sleeping, and exercising. There is no denying the dopamine hit you get from posting on Instagram or scrolling endlessly, liking, sharing, commenting, and re-sharing. From checking your notifications and convincing yourself, "Right, just five minutes, I am going to check my notifications," to spending hours on social media, it is a mixed bag. While I find that being on social media is, to an extent, a way to relax and alleviate stress, I also believe social media and its influence on people's lives should not cross a certain threshold.

We all like a good laugh. We get one from people doing funny things on purpose or from people pranking other people. Most individuals nowadays use some sort of social media platform to watch or make content. YouTube is one such platform. After Google, YouTube is the most visited website on the internet. Every day, about a billion hours of video are watched by people all over the world. I myself contribute to those billion hours.

Now imagine you are on YouTube. You start watching a famous YouTuber's video, then realize it is not only disturbing but also very offensive. You stop watching. That's it, you think; that is a horrible video, and you think no more of it. Yet some videos on YouTube have caused mass controversy across the internet since the platform's birth in 2005. Let us now explore the dark side of YouTube.

There is an industry that centers on pranks done to members of the public, one less about humor and more about shock value. There is nothing wrong with a harmless prank, but when doing one, you must be considerate of how your actions are perceived by others; one wrong move and you could end up facing charges or a conviction.

Across social media platforms there are many creators of such prank videos, and not all have been well received by the public or by the creators' fans. In one such incident, YouTube content creators Alan and Alex Stokes, known for their gag videos, pleaded guilty to charges stemming from fake bank robberies they staged.

The twins wore black clothes and ski masks and carried cash-filled duffel bags for a video in which they pretended to have robbed a bank. They then ordered an Uber; the driver, unaware of the prank, refused to drive them. An onlooker called the police, believing the twins had robbed a bank and were attempting to carjack the vehicle. Police arrived at the scene and held the driver at gunpoint until it was determined to be a prank. The brothers were not charged and were let off with a warning. However, they pulled the same stunt at a university some four hours later and were arrested.

They were charged with one felony count of false imprisonment by violence, menace, fraud, or deceit and one misdemeanor count of falsely reporting an emergency. The charges carry a maximum penalty of five years in prison. "These were not pranks. These are crimes that could have resulted in someone getting seriously injured or even killed," said Todd Spitzer, Orange County district attorney.

The brothers accepted a bargain from the judge: in return for a guilty plea, the felony count would be reduced to a misdemeanor, resulting in one year of probation, 160 hours of community service, and compensation. The plea was entered despite the prosecution's position that tougher charges were necessary. The judge also warned the brothers, who have over 5 million YouTube subscribers, not to make such videos again.

Analyzing the scenario above, I agree with the district attorney. Making prank videos and racking up views should not come at the cost of inciting fear and panic in the community. The situation with the police could have escalated severely, which might have led to a more gruesome outcome. The twins were very lucky; the man filming a prank video in Tennessee, in the next incident, was not.

While filming a YouTube prank video, 20-year-old Timothy Wilks was shot dead in the parking lot of an Urban Air indoor trampoline park. David Starnes Jr. admitted to shooting Wilks after Wilks and an unnamed individual approached him and his group wielding butcher knives and lunged at them. Starnes told the police that he shot in defense of himself and others.

Wilks's friend said they were filming a robbery-prank video for their YouTube channel, one meant to capture the terrified reactions of their prank victims. Starnes was unaware of the prank and pulled out his gun to protect himself and others. No one has yet been charged in connection with the incident.

The above incident is an example of how pranks can go horribly wrong and result in irreparable damage. This poses the question: whom do you blame, the 20-year-old staging a very dangerous prank video, or the 23-year-old who fired his gun in response?

Monalisa Perez, a YouTuber from Minnesota, fatally shot her boyfriend, Pedro Ruiz, while attempting to film a stunt of firing a gun at him from 30 cm away while he held only a 1.5-inch-thick book for protection. Perez pleaded guilty to second-degree manslaughter and was sentenced to six months' imprisonment.

Perez and Ruiz would document their everyday lives in Minnesota by posting prank videos on YouTube to gain views. Before the fatal stunt, Perez tweeted, "Me and Pedro are probably going to shoot one of the most dangerous videos ever. His idea, not mine."

Perez had experimented beforehand and thought that the hardback encyclopedia would be enough to stop the bullet. She fired a .50-caliber Desert Eagle, an extremely powerful handgun, which pierced the encyclopedia and fatally wounded Ruiz.

Perez will serve a 180-day jail term and 10 years of supervised probation, is banned for life from owning firearms, and may make no financial gain from the case. The sentence is below the minimum guidelines, but it was allowed on the grounds that the stunt was mostly Ruiz's idea.

Dangerous pranks such as this one have left a man dead and a mother of two grieving for fatally shooting her partner.

In response to growing concerns over the filming of such trends and videos, YouTube has updated its policies on "harmful and dangerous" content and explicitly banned pranks and challenges that may cause immediate or lasting physical or emotional harm. The policies page lists three types of videos that are now prohibited: 1) challenges that encourage acts with an inherent risk of severe harm; 2) pranks that make victims believe they are in physical danger; and 3) pranks that cause emotional distress to children.

Prank videos may depict the dark side of how content creation can go wrong, but they are not the only example. In 2017, YouTuber Logan Paul became the source of controversy after posting a video of himself in Aokigahara, a Japanese forest near the base of Mount Fuji. Aokigahara is dense with lush trees and greenery, but it is infamous as the "suicide forest": a frequent site of suicides, it is also considered haunted.

Upon entering the forest, the YouTuber came across a dead body hanging from a tree. Paul's actions and depictions around the body are what caused controversy and outrage. The video has since been taken down from YouTube. Paul posted an apology video defending his actions, which did nothing to quell the anger on the internet. He then came out with a second video in which he could be seen tearing up on camera. Addressing the video, YouTube expressed condolences and stated that it prohibits content that is shocking or disrespectful. Paul lost the ability to make money on his videos through advertisements, a penalty known as demonetization. He was also removed from the Google Preferred program, through which brands sell advertising to content creators on YouTube.

The consequences of Logan Paul's actions did not end there. A production company is suing the YouTuber, claiming that the Aokigahara video cost the company a multimillion-dollar licensing agreement with Google. The video caused Google to end its relationship with Planeless Pictures, the production company, and not pay the $3.5 million. Planeless Pictures is now suing Paul, claiming he should pay that amount as well as additional damages and legal fees.

That is not all. YouTube has been filled with controversies that have resulted in lawsuits.

A YouTuber by the name of Kanghua Ren was fined $22,300 and sentenced to 15 months' imprisonment for filming himself giving a homeless man an Oreo filled with toothpaste. He gave the man 20 euros and Oreo cookies whose cream filling had been replaced with toothpaste. The video depicts the homeless man vomiting after eating a cookie. In the video, Ren stated that although he had gone a bit far, the act would help clean the homeless man's teeth. The court did not take this lightly and sentenced him, the judge stating that this was not an isolated act and that Ren had shown cruel behavior toward vulnerable victims.

These are some of the pranks and videos that have gained online notoriety. There are many other videos portraying child abuse, following trends like eating Tide Pods, or sharing anti-Semitic videos and racist remarks. The most disturbing thing about these videos is that they are viewed not only by adults but also by children. In my opinion, these videos could be construed as having some influence on young individuals.

YouTube is a diverse platform, home to millions of content creators. Since its inception it has served as a mode of entertainment and a means of income for many individuals. From posting cat videos to making intricate, detailed, and well-directed short films, YouTube has revolutionized video and content creation. As an avid viewer of many YouTube channels, I find that incidents like these give YouTube a bad name. Proper policies and guidelines should be enacted and enforced, and, if necessary, government supervision may also be exercised.

Facebook Posts Can Land You In Jail!

Did you know that a single Facebook post can land you in jail? It’s true: an acting judge in Westchester, NY recently ruled that a ‘tag’ notification on Facebook violated a protective order. The result of the violation: second-degree contempt, punishable by up to a year in jail. In January, a judge issued a restraining order against Maria Gonzalez, prohibiting her from communicating with her former sister-in-law, Maribel Calderon. Restraining orders are issued to prevent a person from making contact with protected individuals. Traditionally, courts interpreted contact to mean direct communication in person or by mail, email, phone, voicemail, or even text. Facebook tags, however, present a slightly different form of contact.

Unlike a Facebook message, tagging someone identifies the tagged person on the poster’s Facebook page. The tag, however, has the concurrent effect of linking to the identified person’s profile, thereby notifying them of the post. Ms. Gonzalez tagged Calderon in a post on her own timeline calling Calderon stupid and writing “you have a sad family.” Gonzalez argued the post did not violate the protective order since there was no contact aimed directly at Calderon. Acting Westchester County (NY) Supreme Court Justice Susan Capeci felt otherwise, writing that a restraining order bars “contacting the protected party by electronic or other means.” Other means, it seems, includes personal posts put out on social media.

And social media posts aren’t just evidence of protective order violations; they are also grounds for issuing restraining orders in the first place. In 2013, a court granted actress Ashley Tisdale an order of protection against an alleged stalker. Tisdale’s lawyers presented evidence of over 19,000 tweets the alleged stalker had posted about the actress (an average of 100 tweets per day).

The bottom line: naming another person in a social media post, even one directed to the twittersphere or Facebook community rather than toward a particular individual, is sufficient contact to support a restraining order or a finding that one was violated. We should all keep our posts positive, even more so if we have been told to stay away!

From Twitter to Terrorism

A teen was arrested for tweeting a terrorist threat at an airline. A 14-year-old Dutch girl named Sarah, tweeting under the handle @QueenDemetriax, sent American Airlines the following: “@AmericanAir hello my name’s Ibrahim and I’m from Afghanistan. I’m part of Al Qaida and on June 1st I’m gonna do something really big bye.”

American Airlines responded from its official Twitter account: “we take these threats very seriously. Your IP address and details will be forwarded to security and the FBI.” Moments later, Sarah replied that she was “just a girl” and that her initial tweet was simply a joke her friend had written. She also posted a tweet apologizing to American Airlines and saying she was now scared.

Sarah turned herself in to Dutch police, who stated that they were taking her tweet seriously as an alarming threat. She was charged with “posting a false or alarming announcement” under Dutch law. It is unconfirmed whether the FBI was involved, but Sarah gained thousands of Twitter followers as a result of the incident. Could this be a new trend for gaining popularity or recognition? Should Sarah be punished, and if so, how?

Update:

Others are now sending similar tweets to @AmericanAir and other airlines. Kale tweeted @SouthwestAir: “I bake really good pies and my friends call me ‘the bomb’ am I still allowed to fly?” Donnie Cyrus tweeted @SouthwestAir: “@WesleyWalrus is gonna bomb your next few flights.” ArmyJacket tweeted @AmericanAir: “I have a bomb under the next plane to take off.” There are many other tweets with similar language, all aimed at airlines.

There are no reports yet of any of these follow-up Twitter threats being referred to the authorities. Are these tweeters going too far? Could these tweets be construed as legitimate threats, or do they fall within the realm of protected free speech?