Can Social Media Be Regulated?

In 1996, Congress passed what is known as Section 230 of the Communications Decency Act (CDA), which provides immunity to website publishers for third-party content posted on their websites. The CDA holds that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The Act was created in a different era, one that could hardly envision how fast the internet would grow in the coming years. In 1996, social media consisted of a little-known website called Bolt, and the idea of a global World Wide Web was still very much in its infancy. The internet was still largely based on dial-up technology, and the government was looking to expand its reach. This Act laid the foundation for the explosion of social media, e-commerce, and a society that has grown tethered to the internet.

The advent of smartphones in the late 2000s, coupled with the CDA, set the stage for a society that is constantly tethered to the internet and has allowed companies like Facebook, Twitter, YouTube, and Amazon to carve out niches within our now globally integrated society. Facebook alone averaged over 1.9 billion daily users in the second quarter of 2021.

Recent studies conducted by the Pew Research Center show that “[m]ore than eight in ten Americans get news from digital services.”

[Pew Research Center chart: “Large majority of Americans get news on digital devices”]

While older Americans still turn mainly to news websites, the younger generation, namely those 18 to 29 years of age, is more likely to receive its news via social media.

[Pew Research Center chart: “Online, most turn to news websites except for the youngest, who are more likely to use social media”]

The role social media plays in the lives of the younger generation needs to be recognized. Social media has grown at a far greater rate than anyone could have imagined. Currently, social media operates on its own terms, largely free of government interference, owing to its classification as a private entity and its protection under Section 230.

Throughout the 20th century, when television news media dominated the scene, laws were put into effect to ensure that television and radio broadcasters would be monitored by both the courts and government regulatory commissions. For example, “[t]o maintain a license, stations are required to meet a number of criteria. The equal-time rule, for instance, states that registered candidates running for office must be given equal opportunities for airtime and advertisements at non-cable television and radio stations beginning forty-five days before a primary election and sixty days before a general election.”

These laws and regulations were put in place to ensure that the public interest in broadcasting was protected. To give substance to the public interest standard, Congress has from time to time enacted requirements for what constitutes the public interest in broadcasting. But Congress also gave the FCC broad discretion to formulate and revise the meaning of broadcasters’ public interest obligations as circumstances changed.

The Federal Communications Commission’s (FCC) authority is constrained by the First Amendment; the agency acts as an intermediary that can intervene to correct perceived inadequacies in overall industry performance, but it cannot trample on the broad editorial discretion of licensees. The Supreme Court has continuously upheld the public trustee model of broadcast regulation as constitutional. Criticisms of regulating social media center on the notion that these companies are purely private entities that do not fall under the purview of the government, and yet the same issues presented themselves in the precedent-setting case of Red Lion Broadcasting Co. v. Federal Communications Commission (1969). In that case, the Court held that the “rights of the listeners to information should prevail over those of the broadcasters.” The Court’s holding centered on the public’s right to information over the right of a broadcast company to choose what it will share. That is exactly what is at issue today when we look at companies such as Facebook, Twitter, and Snapchat censoring political figures who post views that the companies feel may incite anger or violence.

In essence, what these organizations are doing is keeping information and views from the attention of the present-day viewer. The vessel for the information has changed; it is no longer television or radio but primarily social media. Currently, television and broadcast media are restricted by Section 315(a) of the Communications Act and Section 73.1941 of the Commission’s rules, which “require that if a station allows a legally qualified candidate for any public office to use its facilities (i.e., make a positive identifiable appearance on the air for at least four seconds), it must give equal opportunities to all other candidates for that office to also use the station.” No such restriction exists for social media organizations.

This is not meant to argue for one side or the other but merely to point out that there is a political discourse being stifled by these social media entities, which have shrouded themselves in the veil of a private entity. What these companies fail to mention, however, is just how political they truly are. For instance, Facebook proclaims itself to be an unbiased source for all parties, yet it fails to mention that it currently employs one of the largest lobbying groups in Washington, D.C. Four of Facebook’s lobbyists have worked directly in the office of House Speaker Pelosi. Pelosi herself has a very direct connection to Facebook: she and her husband own between $550,000 and over $1,000,000 in Facebook stock. None of this is illegal, but it raises the question of just how unbiased Facebook really is.

If the largest source of news for the coming generation is not television, radio, or news publications, but rather social media platforms such as Facebook, then how much power should they be allowed to wield without some form of regulation? The question being presented here is not a new one, but rather the same question asked in 1969, simply phrased differently. How much information is a citizen entitled to, and at what point does access to that information outweigh the right of the organization to exercise its editorial discretion? I believe that the answer to that question is the same now as it was in 1969, and that the government ought to take steps similar to those taken with radio and television. What this looks like is ensuring that, through social media, the public has access to a significant amount of information on public issues so that its members can make rational political decisions. At the end of the day, that is what is at stake: the public’s ability to make rational political decisions.

Large social media conglomerates such as Facebook and Twitter have long outgrown their place as private entities; they have grown into a public medium that has tethered itself to the realities of billions of people. Certain aspects of that medium need to be regulated, mainly those that interfere with the public interest, and there are ways to do so without infringing the First Amendment right of free speech for all Americans. Where social media blends being a private forum for all people to express their ideas under firmly stated “terms and conditions” with being an entity that strays into the political field, whether by censoring heads of state or by hiring over $50,000,000 worth of lobbyists in Washington, D.C., there need to be regulations that draw the line and ensure the public still maintains the ability to make rational political decisions, decisions that are not influenced by any one organization. The time to address this issue is now, while there is still a middle ground in how people receive their news and formulate opinions.

What Evidence is Real in a World of Digitally Altered Material?

Imagine you are prosecuting a child pornography case and have incriminating chats made through Facebook showing the Defendant coercing and soliciting sexually explicit material from minors.  Knowing that you will submit these chats as evidence in trial, you acquire a certificate from Facebook’s records custodian authenticating the documents.  The custodian provides information that confirms the times, accounts and users.  That should be enough, right?

Wrong.  Your strategy relies on the legal theory that chats made through a third-party provider fall into a hearsay exception known as the “business records exception.”  Federal Rule of Evidence 902(11) provides that “self-authenticating” “records of a regularly conducted activity” that fall into the hearsay exception under Rule 803(6)—more commonly known as the “business records exception”—may be authenticated by way of a certificate from the records custodian.  (Fed. R. Evid. 902(11); United States v. Browne, 834 F.3d 403 (3d Cir. 2016)).

Why does this certification fail to actually show authenticity?  The Third Circuit answers: there must be additional, outside (extrinsic) evidence establishing the relevance of the evidence.  (United States v. Browne, 834 F.3d 403 (3d Cir. 2016)).

Relevance is another legal concept: evidence is relevant if “its existence simply has some ‘tendency to make the existence of any fact that is of consequence to the determination of the action more probable or less probable than it would be without the evidence.’”  (United States v. Jones, 566 F.3d 353, 364 (3d Cir. 2009) (quoting Fed. R. Evid. 401)).  Put simply, the existence of this evidence has a material effect on the evaluation of an action.

In Browne, the Third Circuit explains that the business records exception is not enough because Facebook chats are fundamentally different from business records.  Business records are “supplied by systematic checking, by regularity and continuity which produce habits of precision, by actual experience of business in relying upon them, or by a duty to make an accurate record as part of a continuing job or occupation,” which results in records that can be relied upon as legitimate.

The issue here deals with authenticating the entirety of the chat – not just the timestamps or cached information.  The court delineates this distinction, saying “If the Government here had sought to authenticate only the timestamps on the Facebook chats, the fact that the chats took place between particular Facebook accounts, and similarly technical information verified by Facebook ‘in the course of a regularly conducted activity,’ the records might be more readily analogized to bank records or phone records conventionally authenticated and admitted under Rules 902(11) and 803(6).”

In contrast, Facebook chats are not authenticated based on confirmation of their substance, but instead on the user linked to that account.  Moreover, in this case, the Facebook records certification showed “alleged” activity between user accounts but not the actual identification of the person communicating, which the court found is not conclusive in determining authorship.

The policy concern is that information is easily falsified – accounts may be created with a fake name and email address, or a person’s account may be hacked into and operated by another.  As a result of the ruling in Browne, submitting chat logs made through a third party such as Facebook into evidence requires more than verification of technical data.  The Browne court describes the second step for evidence to be successfully admitted – there must be extrinsic, or additional outside, evidence presented to show that the chat logs really occurred between certain people and that the content is consistent with the allegations.  (United States v. Browne, 834 F.3d 403 (3d Cir. 2016))

When there is enough extrinsic evidence, the “authentication challenge collapses under the veritable mountain of evidence linking [Defendant] and the incriminating chats.”  In the Browne case, there was enough of this outside evidence that the court found there was “abundant evidence linking [Defendant] and the testifying victims to the chats conducted… [and the] Facebook records were thus duly authenticated” under Federal Rule of Evidence 901(b)(1) in a traditional analysis.

The idea that extrinsic evidence must support authentication of evidence collected from third-party platforms is echoed in the Seventh Circuit decision United States v. Barber, 937 F.3d 965 (7th Cir. 2019).  Here, “this court has relied on evidence such as the presence of a nickname, date of birth, address, email address, and photos on someone’s Facebook page as circumstantial evidence that a page might belong to that person.”

The requirement for extrinsic evidence represents a shift in thinking from the original requirement that the government carry the burden only of “‘produc[ing] evidence sufficient to support a finding’ that the account belonged to [Defendant] and the linked messages were actually sent and received by him.”  United States v. Barber, 937 F.3d 965 (7th Cir. 2019), citing Fed. R. Evid. 901(a); United States v. Lewisbey, 843 F.3d 653, 658 (7th Cir. 2016).  Now, “Facebook records must be authenticated through the ‘traditional standard’ of Rule 901.”  United States v. Frazier, 443 F. Supp. 3d 885 (M.D. Tenn. 2020).

The bottom line is that Facebook cannot attest to the accuracy of the content of its chats and can only provide specific technical data.  This difference is further supported by a District Court ruling mandating traditional analysis under Rule 901 and not allowing a business hearsay exception, saying “Rule 803(6) is designed to capture records that are likely accurate and reliable in content, as demonstrated by the trustworthiness of the underlying sources of information and the process by which and purposes for which that information is recorded… This is no more sufficient to confirm the accuracy or reliability of the contents of the Facebook chats than a postal receipt would be to attest to the accuracy or reliability of the contents of the enclosed mailed letter.”  (United States v. Browne, 834 F.3d 403, 410 (3rd Cir. 2016), United States v. Frazier, 443 F. Supp. 3d 885 (M.D. Tenn. 2020)).

Evidence from social media is allowed under the business records exception in a select few circumstances.  For example, United States v. El Gammal, 831 F. App’x 539 (2d Cir. 2020) presents a case that does find authentication of Facebook’s message logs based on testimony from a records custodian.  However, there is an important distinction here – the logs admitted came directly from a “deleted” output, where Facebook itself, rather than a person, created the record.  Similarly, the Tenth Circuit agreed that “spreadsheets fell under the business records exception and, alternatively, appeared to be machine-generated non-hearsay.”  United States v. Channon, 881 F.3d 806 (10th Cir. 2018).

What about photographs – are pictures taken from social media dealt with in the same way as chats when it comes to authentication?  Reviewing a lower court decision, the Sixth Circuit in United States v. Farrad, 895 F.3d 859 (6th Cir. 2018) found that “it was an error for the district court to deem the photographs self-authenticating business records.”  Here, there is a bar on using the business exception that is similar to that found in the authentication of chats, where photographs must also be supported by extrinsic evidence.

While not using the business exception to do so, the court in Farrad nevertheless found that social media photographs were admissible because it would be logically inconsistent to allow “physical photos that police stumble across lying on a sidewalk” while barring “electronic photos that police stumble across on Facebook.”  It is notable that the court does not address the ease with which photographs may be altered digitally, given that was a major concern voiced by the Browne court regarding alteration of digital text.

United States v. Vazquez-Soto, 939 F.3d 365 (1st Cir. 2019) further supports the idea that photographs found through social media need to be authenticated traditionally.  Here, the court explains the authentication process, saying “The standard [the court] must apply in evaluating a[n] [item]’s authenticity is whether there is enough support in the record to warrant a reasonable person in determining that the evidence is what it purports to be.”  United States v. Vazquez-Soto, 939 F.3d 365 (1st Cir. 2019), quoting United States v. Blanchard, 867 F.3d 1, 6 (1st Cir. 2017) (internal quotation marks omitted); Fed. R. Evid. 901(a).  In other words, based on the totality of the evidence, including extrinsic evidence, do you believe the photograph is real?  Here, “what is at issue is only the authenticity of the photographs, not the Facebook page” – it does not necessarily matter who posted the photo, only what was depicted.

Against the backdrop of an alterable digital world, courts seek to emplace guards against falsified information.  The cases here represent the beginning of a foray into what measures can be realistically taken to protect ourselves from digital fabrications.

 

https://www.rulesofevidence.org/article-ix/rule-902/

https://www.rulesofevidence.org/article-viii/rule-803/

https://casetext.com/case/united-states-v-browne-12

https://www.courtlistener.com/opinion/1469601/united-states-v-jones/?order_by=dateFiled+desc&page=4

https://www.rulesofevidence.org/article-iv/rule-401/

https://www.rulesofevidence.org/article-ix/rule-901/

https://casetext.com/case/united-states-v-barber-103

https://casetext.com/case/united-states-v-lewisbey-4

https://casetext.com/case/united-states-v-frazier-175

https://casetext.com/case/united-states-v-el-gammal

https://casetext.com/case/united-states-v-channon-8

https://casetext.com/case/united-states-v-farrad

https://casetext.com/case/united-states-v-vazquez-soto-1?q=United%20States%20v.%20Vazquez-Soto,%20939%20F.3d%20365%20(1st%20Cir.%202019)&PHONE_NUMBER_GROUP=P&sort=relevance&p=1&type=case&tab=keyword&jxs=

Say Bye to Health Misinformation on Social Media?

A study from the Center for Countering Digital Hate found that social media platforms failed to act on 95% of coronavirus-related disinformation reported to them.

Over the past few weeks, social media companies have been in the hot seat over their lack of action to limit the amount of fake news and misinformation on their platforms, especially information regarding COVID-19 and the vaccine. Even President Biden remarked on social media platforms, stating that Facebook and other companies were “killing people” by serving as platforms for misinformation about the COVID-19 vaccine. Later, Biden clarified his earlier statement, saying that he wasn’t accusing Facebook of killing people, but that he wanted the companies to do something about the misinformation, the outrageous information about the vaccine.

A few weeks later, Senator Amy Klobuchar introduced the Health Misinformation Act, which would create an exemption to Section 230 of the Communications Decency Act. Section 230 has always shielded social media companies from liability for almost any of the content posted on their platforms. Under the Health Misinformation Act, however, social media companies would be liable for the spread of health-related misinformation. Further, the bill would apply only to social media platforms that use an algorithm to promote health misinformation (which most platforms do) and only to health misinformation spread during a health crisis. Additionally, if the bill were to pass, the Department of Health and Human Services would be authorized to define “health misinformation.” Finally, because the proposed bill would apply only during a national public health crisis, such as COVID-19, the exemption would not apply during “normal” times, when there is no public health crisis.

Senator Amy Klobuchar and some of her peers believe the time has come to create an exemption to Section 230 because “for far too long, online platforms have not done enough to protect the health of Americans.” Further, Klobuchar believes that the misinformation spread about COVID-19 and the vaccine shows that social media companies have no desire to address the problem: the misinformation drives more activity on their platforms, and Section 230 means the companies cannot be held liable for it.
Instead, these social media companies use the misinformation to their advantage, building features that incentivize users to share it and to collect likes, comments, and other engagement, a system that “rewards engagement rather than accuracy.” Furthermore, a study conducted by MIT found that false news stories are 70% more likely to be retweeted than true stories. Therefore, social media platforms have no reason to limit this information, especially when the activity it generates benefits the platform.

What are the concerns with the Health Misinformation Act?

How will the Department of Health and Human Services define “health misinformation”? It seems very difficult to craft a definition that the majority will agree upon. I also believe there will be a huge amount of criticism from the social media companies about this act. For instance, I can imagine the companies asking how they would implement the definition of “health misinformation” in their algorithms. What if the information on the health crisis changes? Will the social media company have to constantly change its algorithms with every change in health information? For example, at the beginning of the pandemic the guidance on masks changed, from masks not being necessary to masking being crucial to ensuring the health and safety of yourself and others.

Will the Bill Pass?

With that being said, I do like the concept of the Health Misinformation Act, because it seeks to hold social media companies accountable for their inaction while trying to protect the public so that people receive accurate health-related information. However, I do not believe this bill will pass, for a few reasons. First, it may violate the First Amendment right to freedom of speech; while it isn’t right, it is not illegal for individuals to post their opinions, or even misinformation, on social media. Second, as stated earlier, how would social media companies implement these new policies and keep up with the changing definition of “health misinformation,” and how would federal agencies regulate the companies?

What should be done?

“These are some of the biggest, richest companies in the world and they must do more to prevent the spread of deadly vaccine misinformation.”

I believe we need to create more regulations and more exemptions to Section 230, especially because Section 230 was created in 1996 and our world looks and operates very differently than it did then. Social media is an essential part of our business and cultural world.
Overall, I believe more regulations need to be put into place to oversee social media companies. We need to create transparency with these companies, so the world can understand what is going on behind their closed doors. Transparency will allow agencies to fully understand the algorithms and craft proper regulations.

To conclude, social media companies function like a monopoly: even though there are a lot of them, only a handful hold most of the popularity and power. Other major businesses and monopolies must follow strict regulations from the government, yet social media companies seem exempt from these types of strict regulations.

While there has been a push over the past few years to repeal or make changes to Section 230, do you think this bill can pass? If not, what can be done to create more regulations?

Getting Away with Murder

It’s probably not best to “joke” around with someone seeking legal advice about how to get away with murder. Even less so on social media, where tone, infamously, is not always easily construed. Alas, that is what was at issue in In re Sitton, a January 2021 decision out of Tennessee.

Let’s lay out the facts of the case first. Mr. Sitton is an attorney who has been practicing for almost 25 years. He has a Facebook page on which he identifies himself as an attorney. A Facebook “friend” of his named Lauren Houston had posted a publicly viewable question asking about the legality of carrying a gun in her car in the state of Tennessee. The reason for the inquiry was that she had been involved in a toxic relationship with her ex-boyfriend, who was also the father of her child. Because Mr. Sitton had become aware of her allegations of abuse, harassment, violations of the child custody arrangement, and requests for orders of protection against the ex, he decided to comment on the post and offer some advice to Ms. Houston. The following was Mr. Sitton’s response to her question:

“I have a carry permit Lauren. The problem is that if you pull your gun, you must use it. I am afraid that, with your volatile relationship with your baby’s daddy, you will kill your ex – your son’s father. Better to get a taser or a canister of tear gas. Effective but not deadly. If you get a shot gun, fill the first couple rounds with rock salt, the second couple with bird shot, then load for bear.

If you want to kill him, then lure him into your house and claim he broke in with intent to do you bodily harm and that you feared for your life. Even with the new stand your ground law, the castle doctrine is a far safer basis for use of deadly force.”

 

Ms. Houston then replied to Mr. Sitton, “I wish he would try.” Mr. Sitton then replied again, “As a lawyer, I advise you to keep mum about this if you are remotely serious. Delete this thread and keep quiet. Your defense is that you are afraid for your life – revenge or premeditation of any sort will be used against you at trial.” Ms. Houston then deleted the post, following Mr. Sitton’s advice.

Ms. Houston’s ex-boyfriend eventually found out about the post, including Mr. Sitton’s comments, and passed screenshots of it to the Attorney General of Shelby County, who then sent them to Tennessee’s Board of Professional Responsibility (“Board”). In August 2018, the Board filed a petition for discipline against Mr. Sitton. The petition alleged that he violated the Rules of Professional Conduct by “counseling Ms. Houston about how to engage in criminal conduct in a manner that would minimize the likelihood of arrest or conviction.”

Mr. Sitton admitted most of the basic facts but claimed his comments were taken out of context. Among the things Mr. Sitton admitted during the Board’s hearing on the matter were that he identified himself as a lawyer in his Facebook posts and that he intended to give Ms. Houston legal advice and information. He noted that Ms. Houston engaged with him on Facebook about his legal advice, and he felt she “appreciated that he was helping her understand the laws of the State of Tennessee.” Mr. Sitton went on to claim that his only intent in posting the Facebook comments was to convince Ms. Houston not to carry a gun in her car. He maintained that his Facebook posts about using the protection of the “castle doctrine” to lure Mr. Henderson, the ex-boyfriend, into Ms. Houston’s home to kill him were “sarcasm” or “dark humor.”

The hearing panel found Mr. Sitton’s claim that his “castle doctrine” comments were “sarcasm” or “dark humor” unpersuasive, noting that this depiction was contradicted by his own testimony and Ms. Houston’s posts. The panel instead determined that Mr. Sitton intended to give Ms. Houston legal advice about a legally “safer basis for use of deadly force.” Pointing out that the Facebook comments were made in a “publicly posted conversation,” the hearing panel found that “a reasonable person reading these comments certainly would not and could not perceive them to be ‘sarcasm’ or ‘dark humor.’” The panel also noted that Mr. Sitton lacked any remorse for his actions. It acknowledged that he conceded his Facebook posts were “intemperate” and “foolish,” but it also pointed out that he maintained, “I don’t think what I told her was wrong.”

The Board decided to suspend Mr. Sitton for only 60 days. However, the Supreme Court of Tennessee reviews all such punishments once the Board submits a proposed order of enforcement against an attorney, to ensure the punishment is fair and uniform with punishments for similar circumstances throughout the state. The Supreme Court found the 60-day suspension insufficient and increased Mr. Sitton’s punishment to a one-year active suspension followed by three years of probation.

Really? While I’m certainly glad the Tennessee Supreme Court increased his suspension, I still think one year is dramatically too short. How do you allow an attorney who has been practicing for almost 25 years to serve only a one-year suspension for instructing someone on how to get away with murder? Especially when both the court and the hearing panel found no mitigating factors and found that a reasonable person would not interpret his comments as dark humor but rather as real legal advice? What’s even more mind-boggling is that the court found Mr. Sitton violated ABA Standards 5.1 (Failure to Maintain Personal Integrity) and 6.1 (False Statements, Fraud, and Misrepresentation), but then essentially said there was no category within those two standards into which Mr. Sitton’s actions neatly fall, and that this is why it imposed only a one-year suspension. The thing is, that is simply inaccurate: under the sanction guidelines for violations of 5.1 and 6.1 (which the court included in its opinion), it is abundantly obvious that Mr. Sitton’s actions do fall within them, so it is a mystery how the court found otherwise.

 

If you were the judge ruling on this disciplinary case, what sanction would you have handed down?

If I were to sue “Gossip Girl.”

If you grew up in New York and were a teenager in the early 2000s, you probably know the top-rated show “Gossip Girl.” “Gossip Girl” is the alias of an anonymous blogger who creates chaos by making public the very intimate and personal lives of upper-class high school students. The show is very scandalous due to the nature of these teenagers’ activities, but what stands out is the influence Gossip Girl had on these young teenagers. And it makes one think: what could I do if Gossip Girl came after me?

 

Anonymity

When bringing a claim for internet defamation against an anonymous blogger, the trickiest part is getting past the anonymity. In Cohen v. Google, Inc., 887 N.Y.S.2d 424 (N.Y. Sup. Ct. 2009), a New York state trial court granted the plaintiff, model Liskula Cohen, pre-suit discovery from Google to reveal the identity of the anonymous publisher of the “Skanks in NYC” blog. Cohen alleged that the blog author defamed her by calling her a “skank” and a “ho” and posting photographs of her in provocative positions with sexually suggestive captions, all creating the false impression that she is sexually promiscuous. The court analyzed the discovery request under New York CPLR § 3102(c), which allows for discovery “to aid in bringing an action.” The court ruled that, under CPLR § 3102(c), a party seeking pre-action discovery must make a prima facie showing of a meritorious cause of action before obtaining the identity of an anonymous defendant. The court acknowledged the First Amendment issues at stake and, citing Dendrite, opined that New York law’s requirement of a prima facie showing appears to address the constitutional concerns raised in the context of the case. The court held that Cohen adequately made this prima facie showing of defamation, finding that the “skank” and “ho” statements, along with the sexually suggestive photographs and captions, conveyed a factual assertion that Cohen was sexually promiscuous rather than an expression of protected opinion.

In Cohen, the court decided that Liskula Cohen was entitled to pre-suit discovery under CPLR § 3102(c). To legally obtain “Gossip Girl’s” true identity under this statute, we would have to show that the statement posted on her blog about us is on its face defamatory and not simply an expression of protected opinion.

 

Defamation

Now that we may have uncovered our anonymous blogger, “Gossip Girl,” a.k.a. Dan Humphrey, we may dive into the defamation issue. There are two types of defamation: 1) libel, the written form of defamation, and 2) slander, the oral form of defamation. Because Gossip Girl’s medium of choice is a written blog, our case would fall under libel. But does our claim meet the legal elements of defamation?

In New York, there are four elements that the alleged defamation must meet:

  1. A false statement;
  2. Published to a third-party without privilege or authorization;
  3. With fault amounting to at least negligence;
  4. That caused special harm or ‘defamation per se.’

Dillon v. City of New York, 261 AD2d 34, 38, 704 NYS2d 1 (1999)

Furthermore, our defamation claim for the plaintiff must “set forth the particular words allegedly constituting defamation and it must also allege the time when, place where, and the manner in which the false statement was made, and specify to whom it was made.” Epifani v. Johnson, 65 A.D.3d 224, 233, 882 N.Y.S.2d 234 (2d Dept. 2009). The court simply means that we must provide details such as: What specific words were used? Was the plaintiff labeled a “ho” or “skank” like in Cohen, or did they simply call you “ugly”? When? The time the words were spoken, written, or published. Where? The place (or platform) where they were spoken, written, or published. How? The manner in which they were spoken, written, or published. Lastly, to whom? The party or source to whom the statement was made.

The plaintiff’s status determines the burden of proof in defamation lawsuits in New York. Is the plaintiff considered a “public” figure or a “private” citizen? To determine this status, New York State courts use the “vortex notion”: a person who would generally qualify as a “private” citizen is considered a “public” figure if they draw public attention to themselves, like jumping right into a tornado vortex. A “public” figure bears a higher burden of proof in a defamation lawsuit: the plaintiff must prove that the defendant acted with actual malice (reckless disregard for the truth or falsity of the statement). For defamation of a “private” citizen, New York courts apply a negligence standard of fault unless the statements relate to a matter of legitimate public concern.

When the plaintiff is a private figure and the allegedly defamatory statements relate to a matter of legitimate public concern, the plaintiff must prove that the defendant acted “in a grossly irresponsible manner without due consideration for the standards of information gathering and dissemination ordinarily followed by responsible parties.” Chapadeau v. Utica Observer-Dispatch, 38 N.Y.2d 196, 199 (1975). This standard focuses on an objective evaluation of the defendant’s actions rather than on the defendant’s state of mind at the time of publication.

If the defamatory nature of the statements Gossip Girl published is apparent on their face, we may explore defamation per se. There are four categories of defamation per se in New York:

  1. Statements charging a plaintiff with a serious crime;
  2. Statements that tend to injure another in his or her trade, business, or profession;
  3. Statements imputing a loathsome disease on a plaintiff; and
  4. Statements imputing unchastity on a woman.

Liberman v. Gelstein, 80 NY2d 429, 435, 605 NE2d 344, 590 NYS2d 857 (1992). If a statement falls into one of these categories, the court may find the statement so inherently injurious that damages to the plaintiff are presumed. Another option to consider is defamation per quod, which requires the plaintiff to provide extrinsic, supporting evidence to prove the defamatory nature of the alleged statement(s) when it is not apparent on its face.

 

Privileges and Defenses

After concluding that Gossip Girl defamed the plaintiff, we must ensure that the defamatory statement is not protected under any privileges. New York courts recognize several privileges and defenses in the context of defamation actions, including the fair report privilege (a defamation lawsuit cannot be sustained against any person making a “fair and true report of any judicial proceeding, legislative proceeding or other official proceeding,” N.Y. Civ. Rights § 74), the opinion and fair comment privileges, substantial truth (the maker cannot be held liable for saying things that are actually true), and the wire service defense. There is also Section 230 of the Communications Decency Act, which may protect media platforms or publishers if a third party, not acting under their direction, posts something defamatory on their blog or website. If a statement is privileged or a defense applies, the maker of that statement may be immune from any lawsuit arising from it.

 

Statute of Limitations

A New York plaintiff must start an action within one (1) year of the date the defamatory material was published or communicated to a third party. CPLR § 215(3). New York has also adopted a rule directed explicitly at internet posts. Under the “single publication” rule, a party that causes the mass publication of defamatory content may only be sued once for its initial publication of that content. For example, suppose a blog publishes a defamatory article that is circulated to thousands of people; the blog may only be sued once. The statute of limitations begins to run at the time of first publication. “Republication” of the allegedly defamatory content will restart the statute of limitations. A republication occurs when there is “a separate aggregate publication from the original, on a different occasion, which is not merely a ‘delayed circulation of the original edition.’” Firth v. State, 775 N.E.2d 463, 466 (N.Y. 2002). Courts examine whether the republication was intended to and actually reached new audiences. Altering the allegedly defamatory content or moving web content to a different web address may trigger republication.

 

Damages

Damages in defamation claims are proportionate to the harm suffered by the plaintiff. If a plaintiff is awarded damages, they may be in the form of compensatory, nominal, or punitive damages. There are two types of compensatory damages: 1) special damages and 2) general damages. Special damages are based on economic harm and must be identified with a specific amount. General damages are more challenging to assess; the jury has the discretion to determine the award amount after weighing all the facts. Nominal damages are small monetary sums awarded to vindicate the plaintiff’s name. Punitive damages are intended to punish the defendant and to deter the defendant from repeating the defamatory conduct.

 

When Gossip Girl first aired, the idea of a blog holding cultural relevance was not yet mainstream. Gossip Girl’s unchecked power kept many characters from living their lives freely and without scrutiny. After Gossip Girl aired, an anonymous blog, “Socialite Rank,” emerged and damaged the reputation of its targeted victim, Olivia Palermo, who eventually dropped the suit she had started against the blog. The blog “Skanks in NYC” painted a false image of who Liskula Cohen was and caused her to lose potential jobs. In the series finale, after the identity of Gossip Girl is revealed, the characters laugh. Still, one of the characters exclaims, “why do you all think that this is funny? Gossip Girl ruined our lives!” Defamation can ruin lives. As technology advances, the law should as well. New York has adapted its defamation laws to ensure that a person cannot hide behind anonymity to ruin another person’s life.

 

Do you feel protected against online defamation?

XOXO

Who Pays When Your Amazon Purchase Catches Fire?

As technology develops, one of the most debated issues remains: how much responsibility should internet service providers bear with respect to third-party content published through their websites?  Is Section 230 of the Communications Decency Act a relic of primitive internet usage?  When products are sold through the internet, does responsibility shift to the marketplace provider?
To shed light on the issue, a parallel arises between how consumer law and internet usage is developing.
Take the Texas case where a third party sold a remote control through Amazon.com.  The remote was purchased and delivered to a customer with no issues.  However, the customer’s nineteen-month-old child later ingested the remote’s battery, which resulted in permanent esophagus damage.  Who is responsible for the damages – Amazon or the third-party seller?  (Amazon.com, Inc. v. McMillan, No. 20-0979, 2021 WL 2605885 (Tex. June 25, 2021))
The customer, Ms. McMillan, brought a lawsuit against both.  Ultimately, the Supreme Court of Texas found that legal liability for the personal injury did not lie with Amazon but remained with the third-party seller.  The decision turned on who was the “Seller” under Tex. Civ. Prac. & Rem. Code Ann. § 82.001 and on the legal framework behind placing items into a stream of commerce.  The dispositive factor was whether, at any point during the “chain of distribution,” title to the remote had been transferred to Amazon.  In other words, who owned the remote?
The court found that unless Amazon held and relinquished title, or the “legal right to control and dispose of property” (TITLE, Black’s Law Dictionary (11th ed. 2019)), they could not be considered an actual “Seller” under the law and therefore were not liable for injury.  Even though throughout use of the marketplace Amazon “controlled the process of the transaction and the delivery of the product,” the third-party seller retained title and was thus the liable “Seller.”
These ideas run parallel to those behind Section 230, under which internet service providers are not liable for content published through their services.  Under this section, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” (47 U.S.C. § 230, subd. (c)(1)).  In the McMillan case, the court notes the “significant potential consequences of holding online marketplaces responsible for third-party sellers’ faulty products.”
In a case originating in Arizona, the deciding factor in a strict liability consumer law case again came down to ownership.  Here, personal injury was caused by a defective battery in a hoverboard, sold through Amazon.com, that caught fire.  The court ruled in favor of Amazon after applying state law in a “contextual analysis [that] balanced multiple factors to determine whether a company ‘participate[d] significantly in a stream of commerce.’”  (State Farm Fire & Cas. Co. v. Amazon.com, Inc., 835 F. App’x 213 (9th Cir. 2020))
Under Arizona law, the “realities of the marketplace” bear on the outcome and are evaluated using a seven-part test to determine whether a party can be held strictly liable.  Courts consider whether parties:
“(1) provide a warranty for the product’s quality; (2) are responsible for the product during transit; (3) exercise enough control over the product to inspect or examine it; (4) take title or ownership over the product; (5) derive an economic benefit from the transaction; (6) have the capacity to influence a product’s design and manufacture; or (7) foster consumer reliance through their involvement.”
In these factors we again see the idea of ownership being used to help draw the line.  Although Amazon facilitated the sale of the defective product through its website, it was not involved enough with the product to actually be liable for its deficiencies.
This trend is not limited to Amazon.com.  Take the example of the consumer in Indiana who purchased a 3D printer through Walmart.com that later caught fire and damaged property.  The consumers, the Terrys, brought a lawsuit for merchantability issues under Indiana Code § 26-1-2-314, which provides that the “warranty that the goods shall be merchantable is implied in a contract for their sale if the seller is a merchant with respect to goods of that kind.”  (Indiana Farm Bureau Ins. v. Shenzhen Anet Tech. Co., No. 419CV00168TWPDML, 2020 WL 7711346 (S.D. Ind. Dec. 29, 2020))
Here, again, the court determined that Walmart.com was not a “Seller” or “Manufacturer” under the Indiana law and could not be held liable for the damages caused by defective third-party products.
However, law developing out of California reaches a contradictory conclusion.  Here, in a case where an Amazon.com-sold laptop battery caught fire, the court ruled that in regard to strict liability the Communications Decency Act did not offer immunity to internet marketplaces.  The court supported the finding by determining that Amazon played an “’integral part of the overall producing and marketing enterprise.’”  Here, Amazon’s role providing speech, which is immunized, is differentiated from its “role in the chain of production and distribution of an allegedly defective product.”  Bolger v. Amazon.com, LLC, 53 Cal. App. 5th 431, 267 Cal. Rptr. 3d 601 (2020), review denied (Nov. 18, 2020)
The convergence between consumer law and Section 230 helps develop an understanding of how we think about the responsibility of internet service providers when the content or products they facilitate cause damage.  Ultimately, the emerging trend is that a party must in some way own the content or product in question before liability attaches.

Dear loyal followers: will you endorse this product for me?

Hi, lambs. It’s me, your Gram celeb. Who wouldn’t like to start their day with a bowl of oatmeal with these freshly picked berries? Remember that I’m always with you, lambs. #yummy #healthybreakfast #mylifestyle #protein #Thanks XYZ.

Up to this point, you may think that your Instagram celebrity wants to share a healthy breakfast with you. You may even feel pleased to see that your celebrity took the time to post such a personal picture, until you read further:

Thank you, XYZ Company, for your healthy breakfast delivery. Love, XOXO.

While some people feel nonchalant about promoted products, other followers may feel betrayed to learn that their celebrities are only using their accounts to earn money. You feel tired of seeing these sponsored posts. Are there any legal actions against the Instagram celebrity that could melt your deceived heart?

What’s so disturbing and tricky about Instagram influencer marketing is that users cannot always detect whether they are being exposed to digital advertising.

According to Instagram’s internal data, approximately 130 million accounts tap on shopping posts every month. There are a plethora of guides on digital marketing for rising influencers, and one of the most frequently noted tips is to advertise a product through storytelling. The main theme is to advertise as naturally as possible so consumers feel engaged—and subsequently, make a purchase.

In Jianming Jyu v. Ruhn Holdings Ltd., the court held that social media has become so influential that being a social media influencer is now recognized as a profession. The court defined social media influencers as “individuals who create content on social media platforms such as Facebook, YouTube, Tik Tok, and Instagram with the hope of garnering a large public following [and] who are paid to promote, market and advertise products and services to their fans and followers.” Id.

Take this as another example: your cherished, ever-so-harmless Instagram mega-celebrity wore a beautiful Gucci belt. The celebrity mentioned that the same belt was available on Amazon for less than a quarter of the actual price at Gucci. You immediately purchased the belt, thanking your celebrity and congratulating yourself for following them. Upon the belt’s arrival, you realized that it was conspicuously fake, branded “Pucci.” On November 12, 2020, Amazon sued 13 individuals and businesses (collectively, the “defendants”) for advertising, promoting, and facilitating the sale of counterfeit luxury goods on Amazon. The defendants used their Instagram and other social media accounts to promote their knockoff goods being sold on Amazon. Amazon stated that it is seeking damages and an injunction against the influencers to bar them from using Amazon. As of July 4, 2021, the case is still pending.

Okay, we get it. But that’s something Instagram celebrities have to resolve. What about us, the innocent lambs?

Are there any legal actions on digital marketing? Yes!

A digital advertising claim may be brought in state or federal court, or in an action brought by a federal administrative agency such as the Federal Trade Commission (FTC). Generally, Instagram advertising is considered online advertising, which the FTC regulates. The FTC Act prohibits deceptive advertising in any medium. That is, “advertising must tell the truth and not mislead consumers.” A claim can be misleading if relevant information is left out or if the claim implies something that is not true. So, if an influencer promotes a protein bar that claims to have 20 grams of protein when it actually has 10 grams, the claim is misleading.

Furthermore, the FTC Act states that all advertising claims must be “substantiated,” particularly when they concern “health, safety, or performance.” If the influencer quoted the protein bar company’s statement that research showed its protein bar lowered blood pressure, the FTC Act requires a certain level of support for that claim. Thus, online influencers are liable for the products they endorse on social media platforms.

Wait, there is one more. Due to the growing number of fraudulent activities on Instagram, the FTC has released guidance targeted at Instagram influencers. Under 16 CFR § 255.5, the FTC requires that an influencer “clearly and conspicuously disclose either the payment or promise of compensation prior to and in exchange for the endorsement or the fact that the endorser knew or had reason to know or to believe that if the endorsement favored the advertised product some benefit . . . would be extended to the endorser.” In sum, an influencer must disclose that the post is sponsored. The FTC has noted that hashtags like “partner,” “sp,” and “thanks [Brand]” are not considered adequate disclosures. Failure to disclose is a violation subject to penalty.

Simply putting the hashtag “ad” is not an option

The marketing agency Mediakix issued a report on top celebrities’ Federal Trade Commission compliance for sponsored posts and found that 93% of the top Instagram endorsements did not meet the FTC’s guidelines.

Going back to the oatmeal example, using the hashtag “#Thanks XYZ” is not sufficient to show that the post is sponsored, and the celebrity is subject to penalty.

As a rule of thumb, all Instagram sponsorships must be disclosed no matter what, and the disclosures must be clear about the sponsorship. Playing hide-and-seek with hashtags is never an option.

What is your opinion on digital marketing? If you were a legislator, what should the regulation on digital marketing be?

How One Teenager’s Snapchat Shaped Students’ Off-Campus Free Speech Rights

Did you ever not make your high school sports team or get a bad grade on an exam? What did you do to blow off steam? Did you talk to your friends or parents about it, or write in your journal? When I was in high school, some of my classmates would use Twitter or Snapchat to express themselves. However, rates of smartphone and social media use were much lower than they are today. Today, high school students use their smartphones and social media at an incredibly high rate compared to when I was in high school almost ten years ago. In fact, according to Pew Research Center, 95% of teenagers have access to smartphones and 69% of teenagers use Snapchat. This is exactly why the recent Supreme Court decision in Mahanoy Area School District v. B.L. is more important than ever, as it pertains to students’ free speech rights and how much power schools have in controlling their students’ off-campus speech. Further, this decision is all the more significant because the Supreme Court’s landmark ruling on student free speech, Tinker v. Des Moines, came over fifty years ago, well before anyone had smartphones or social media. Therefore, the latest decision by the Supreme Court will shape the power of school districts and the First Amendment rights of students for maybe the next fifty years.

 

The main issue in Mahanoy Area School District v. B.L. is whether public schools can discipline students over something they said off campus. The facts of the case arose when Levy was a sophomore in the Mahanoy Area School District. Levy didn’t make the varsity cheerleading team; naturally, she was upset and frustrated about the situation. That weekend, Levy was at a convenience store in town with a friend. Levy and the friend took a Snapchat with their middle fingers raised and the caption “F- School, F-Softball, F-Cheerleading, F-Everything” and sent it to her Snapchat friends. The picture was then screenshotted and shown to the cheerleading coach, which led to Levy being suspended from the cheerleading team for one year.

 

Levy and her parents did not agree with the suspension or the school’s involvement in Levy’s off-campus speech, so they filed a lawsuit claiming the suspension violated Levy’s First Amendment free speech rights. Levy sued the school under 42 U.S.C. § 1983, alleging (1) that her suspension from the team violated the First Amendment; (2) that the school and team rules were overbroad and viewpoint discriminatory; and (3) that those rules were unconstitutionally vague. The district court granted summary judgment in favor of Levy, holding that the school had violated her First Amendment rights. The U.S. Court of Appeals for the Third Circuit affirmed the district court’s decision, and the Mahanoy Area School District petitioned for a writ of certiorari.

 

In an 8-1 decision, the Supreme Court ruled in favor of Levy, holding that the Mahanoy Area School District violated Levy’s First Amendment rights by punishing her for using vulgar language that criticized the school on social media. The Court gave numerous reasons for ruling in favor of Levy. At the same time, the Court noted the importance of schools monitoring and punishing some off-campus speech, such as speech and behavior that constitutes “serious or severe bullying or harassment targeting particular individuals; threats aimed at teachers or other students.” This is more necessary than ever before due to the increase in online bullying and harassment, which can impact the day-to-day activities of the school and the development of minors.

 

While it’s important in some circumstances for schools to monitor and address off-campus speech, the Supreme Court noted three reasons that limit schools from interfering with students’ off-campus speech. First, with respect to off-campus speech, a school will rarely stand in loco parentis; schools do not have more authority than parents, especially over off-campus speech. The parent is the authority figure and will decide whether to discipline in most areas of their child’s life, especially what happens outside of school. This is important because parents have the authority to raise and discipline their children according to their own beliefs, not the school district’s.

 

Second, “from the student perspective, regulations of off-campus speech, when coupled with regulations of on-campus speech, include all the speech a student utters during the full 24-hour day.” There would be no boundaries or limitations on what the school district would be allowed to discipline its students for. For instance, what if a group of students decided to make a TikTok on a Saturday night and, during the TikTok, the students cursed and used vulgar language; would they be in trouble? If there were no limits on what the school could punish as off-campus speech, then those students could be in trouble for their TikTok video. Therefore, it’s important that the Supreme Court made this distinction to protect students’ First Amendment rights.

 

Third, the school itself has "an interest in protecting a student's unpopular expression, especially when the expression takes place off-campus." As the Supreme Court explained, if schools did not protect their students' unpopular opinions, they would stifle students' ability to express themselves, even though schools are meant to be places where students learn and form their own opinions, including opinions that differ from the school's. Failing to protect that expression would undermine students' ability to think for themselves and to respect opinions that differ from their own.

 

Overall, I agree with the Supreme Court's decision in this case. I believe it is essential to separate in-school speech from off-campus speech; the only time off-campus speech should be monitored and addressed by the school is when it involves bullying, harassing, or threatening language directed at the school or at groups or individuals within it. The Supreme Court noted three very important reasons why public schools cannot have full control over students' off-campus speech, and all three are fair and justifiable safeguards against schools exerting excessive control over parents and students. That said, there are still many questions and much uncertainty, especially as technology rapidly advances and new social media platforms emerge. I am curious whether the Supreme Court will rule on a similar case within the next fifty years, and how this decision will affect schools in the next few years.

 

Do you agree with the Supreme Court's decision, and how do you see this ruling impacting public schools over the next few years?

Is social media promoting or curbing Asian hate?

The COVID-19 pandemic has caused our lives to twist and turn in many unexpected ways. Because the virus was first identified in China, the Asian community has borne much of the blame, which has fueled a significant increase in hate crimes against Asians both in the real world and in the cyber world. With an almost uncountable number of internet users, the impact created online, as well as offline, is massive. Social media can create bias, and social media has the power to remedy bias. The question becomes: which side of the scale is it currently tipping toward? Is the internet making social network users more vulnerable to manipulation? Are hatred and bias "contagious" through cyber means? Or, on the contrary, is social media remedying the bias that people have created through the internet?

Section 230 of the Communications Decency Act governs the cyber world. It essentially provides legal immunity to interactive computer services such as TikTok, Facebook, Instagram, and Snapchat for content posted by their users. The Act states: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." In other words, posts and comments that appear on these platforms carry no legal ramifications for the tech companies that host them. So do these companies have any incentive to regulate what is posted on their websites? With the current wave of Asian hate, will the problem snowball if social media platforms fail to step in? And if these companies do elect to step in, to what extent can they regulate or supervise?

The hatred and bias sparked by the pandemic have not been limited to the real world. Asian Americans have reported the largest increase in serious incidents of online hate and harassment during this crisis. Many have been verbally attacked or insulted with racist and xenophobic slurs merely because they have Asian last names or look Asian. According to a new survey shared exclusively with USA TODAY, there was an 11% increase over the previous year in sexual harassment, stalking, physical threats, and other incidents reported by Asian Americans, many of which occurred on social media platforms. According to findings by the Center for the Study of Hate and Extremism at California State University, hate crimes against Asian Americans rose 149% from 2019 to 2020. That is a 149% increase in a single year. In addition, L1ght, an organization that uses AI to detect online abuse, reported a 900% increase in hateful speech on Twitter since the start of the pandemic. And this may be just the tip of the iceberg, as many hate crime incidents go unreported. As you may recall, former President Trump publicly referred to the COVID-19 coronavirus as the "Chinese Virus," which led to a record-breaking level of brutal online harassment against Asian Americans and gave rise to other similar remarks such as "Kung Flu" and "Wuhan Virus." Social media users began using hashtags of the same kind; the hashtag "#ChineseVirus" alone has been used over 68,000 times on Instagram.

We must not forget that the real world and the cyber world are interconnected. Ideas consumed online can have a significant impact on our offline actions, and that impact can lead to violence. Last week, I had the privilege of interviewing New York Police Department Lieutenant Mike Wang, who is in charge of the NYPD's Asian Hate Crimes Task Force in Brooklyn. He expressed his concerns about the Asian community being attacked, seniors in particular. "It's just emotionally difficult and heartbreaking," Lieutenant Wang said during the interview. "The New York Police Department is definitely taking unprecedented measures to combat these crimes. These incidents cannot be overlooked." Most of these attacks were unprovoked. Examples include an elderly Thai immigrant who died after being shoved to the ground, a Filipino American who was slashed in the face with a box cutter and left with a large permanent scar, a Chinese woman who was slapped and then set on fire, and six Asian Americans who were shot to death at spas in a single night. Wang noted that crimes against Asian Americans are nothing new; they have existed for quite some time, but the rage and frustration of the COVID-19 pandemic has fueled this fire to an uncontrollable level. He encourages citizens to be more vocal and to report crimes in general, not just hate crimes. You can read more about hate crimes and bias on the city's website.

From verbal harassment to physical assaults, there have been thousands of reported cases since the pandemic started. These are typically hate crimes, committed by offenders who believe the Asian population is to blame for the spread of the virus. People's daily interactions online likely play an important role here. Almost everyone in this country uses some sort of social network, and the more hatred and bias people see online, the more likely they are to act violently in real life, because widespread behavior starts to look acceptable. Accountability is rarely a concern, especially on social channels: at most, a user's post is removed or the account is suspended. It is therefore questionable whether the tech companies are doing enough to address these issues.

What are the policies of the social media giants for dealing with hateful behavior online? Twitter, for instance, has implemented a hate speech policy that prohibits accounts whose primary purpose is to incite harm toward others, and it reserves the discretion to remove inappropriate content or suspend users who violate the policy. You can read more about its Hateful Conduct Policy on its website. Other platforms such as Facebook, TikTok, and YouTube have similar policies in place to address hateful behavior, violent threats, and harassment; however, are they sufficient? According to the CEO of the Anti-Defamation League, online users continue to encounter strongly hateful comments despite the social network companies' claims that they are taking these issues seriously. Facebook and YouTube still allow users to use the racially insensitive term "Kung Flu," while TikTok has prohibited it. The comics artist Ethan Van Sciver joked about killing Chinese people in one of his videos and later claimed it was "facetious sarcasm"; YouTube merely removed the video, stating that it violated its hate speech policy. As I mentioned earlier, accountability on these social networks is minimal.

Social networks have certainly helped spread the news, keeping everyone in the country informed about the horrible incidents happening on a regular basis. Beyond spreading the virus of hatred and bias, social networks also raise awareness and promote positivity. As Asian hate crimes spike, public figures and celebrities are taking part in this fight. Allure magazine's editor-in-chief Michelle Lee and designer Phillip Lim are among them: they have posted videos on Instagram sharing their own experiences of racism in an effort to raise awareness, using the hashtag #StopAsianHate in their posts. On March 20, 2021, "Killing Eve" star Sandra Oh joined a "Stop Asian Hate" protest in Pittsburgh, saying she is "proud to be Asian" in a powerful speech urging people to fight racism and hatred toward the Asian community. The video of her speech went viral within a day and has since been viewed more than ninety-three thousand times on YouTube.

I have to say that our generation is not afraid to speak up about the hate and injustice we face in our society today. This generation is taking it upon itself to confront racism instead of relying on authorities to recognize the threats and implement policy changes. This is how #StopAAPIHate came about. The hashtag stands for "Stop Asian American and Pacific Islander Hate." Stop AAPI Hate is a nonprofit organization that tracks incidents of hate and discrimination against Asian Americans and Pacific Islanders in the United States. It was recently created to bring awareness, education, and resources to the Asian community and its allies, and it has used social networks like Instagram to organize support groups, provide aid, and pressure those in power to act. Influential members of the AAPI community who are vocalizing their concerns and beliefs include Christine Chiu, "The Bling Empire" star, producer, and entrepreneur; Chriselle Lim, a digital influencer, content creator, and entrepreneur; Tina Craig, founder and CEO of U Beauty; Daniel Martin, makeup artist and global director of Artistry & Education at Tatcha; Yu Tsai, celebrity and fashion photographer and host; Sarah Lee and Christine Chang, co-founders and co-CEOs of Glow Recipe; Aimee Song, entrepreneur and digital influencer; Samuel Hyun, chairman of the Massachusetts Asian American Commission; Daniel Nguyen, actor; Mai Quynh, celebrity makeup artist; Ann McFerran, founder and CEO of Glamnetic; Nadya Okamoto, founder of August; Sharon Pak, founder of INH; Sonja Rasula, founder of Unique Markets; and Candice Kumai, writer, journalist, director, and best-selling author. The list could go on, but the point these influential speakers make is that taking things to social media is not just about holding people or companies accountable; it is about creating meaningful change in our society.

The internet is more powerful than we think. It is dangerous to allow individuals to attack or harass others, even through a screen. I understand that social media platforms cannot blatantly censor whatever content they deem inappropriate, as users may view this as a violation of their free speech rights; however, there has to be more they can do, perhaps by creating more rigorous policies to combat hate speech. If users' accounts were tied to their real-life identities, it might curb the tendencies of potential or repeat offenders. The question is: how do we draw the line between freedom of speech and social order?

 
