Social Media: A Pedophile’s Digital Playground

A few years ago, as I got lost in the wormhole that is YouTube, I stumbled upon a family channel, “The ACE Family.” They had posted a video in which the mom pranked her boyfriend by dressing their infant daughter in a tiny crochet bikini. I didn’t think much of it at the time because it seemed innocent and cute, but then I pondered it. I had stumbled on this video without any ill intent, but how easy would it be for someone to find content like this with far more disgusting intent?

When you Google “social media child pornography,” you get many articles from 2019. That year, a YouTuber using the name “MattsWhatItIs” posted a video titled “YouTube is Facilitating the Sexual Exploitation of Children, and it’s Being Monetized (2019)”; the video has 4,305,097 views to date and has not been removed from the platform. In it, he discusses a potential child pornography ring on YouTube that was being facilitated by a glitch in the recommendation algorithm. He demonstrates how, with a brand-new account on a VPN, it takes only two clicks to end up in this ring. He starts with a search for “bikini haul.” After two clicks in the recommended-videos section, he lands on an innocent-looking homemade video. The video itself looks harmless, but the comments expose the dark side: multiple random accounts post timestamps, and those timestamps link to moments in the video where the children are in compromising, implicitly sexual positions. The most disturbing part is that once you enter this wormhole, the algorithm keeps recommending similar videos, so you get stuck on this “child pornography” content. Following the vast attention the video received, YouTube said it had built an algorithm to catch this predatory behavior; at the time the video was posted, it didn’t seem to be doing much.
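To make the timestamp-comment pattern concrete, here is a minimal, hypothetical heuristic for flagging it. This is an illustration only, not YouTube’s actual system: it flags comments whose text consists mostly of bare timestamps, the behavior described above.

```python
import re

# Matches bare video timestamps such as "0:45", "12:03", or "1:02:33".
TIMESTAMP = re.compile(r"\b\d{1,2}:\d{2}(?::\d{2})?\b")

def is_suspicious(comment: str, threshold: float = 0.5) -> bool:
    """Flag comments that are mostly bare timestamps.

    A comment is flagged when at least `threshold` of its
    non-whitespace characters belong to timestamps -- i.e. the
    comment is little more than a list of jump points.
    """
    stripped = "".join(comment.split())
    if not stripped:
        return False
    without = "".join(TIMESTAMP.sub("", comment).split())
    return (len(stripped) - len(without)) / len(stripped) >= threshold

# A comment that is nothing but timestamps gets flagged; a normal
# comment that merely mentions one timestamp does not.
flags = [
    is_suspicious("0:45 2:13 5:37"),
    is_suspicious("Great editing at 3:10, loved the whole video!"),
]
```

A real moderation pipeline would combine many more signals (account age, comment clustering across videos, report history), but the core idea of scoring comments against a pattern is the same.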

YouTube has since implemented a “Child Safety Policy,” which details the kinds of content the platform aims to restrict in order to protect children. It also includes recommended steps for parents or agents posting content in which children are the focus. “To protect minors on YouTube, content that doesn’t violate our policies but features children may have some features disabled at both the channel and video level. These features may include:

  • Comments
  • Live chat
  • Live streaming
  • Video recommendations (how and when your video is recommended)
  • Community posts”

Today, when you look up news on this topic, you don’t find much. There are, however, forums exposing the many methods these predators use to get around the detection algorithms the platforms have set up. Many predators leave links to child pornography in the comments sections of specific videos. Others use generic terms with the initials “C.P.,” a common abbreviation for “child pornography,” and codes like “caldo de pollo,” which means “chicken soup” in Spanish. Many dedicated and concerned parents have formed online communities that scour the Internet for this disgusting content and report it to the platforms, but why haven’t the platforms themselves created departments for this issue? Most technology companies use automated tools to detect images and videos that law enforcement has already categorized as child sexual exploitation material. Still, they struggle to identify new, previously unknown material and rely heavily on user reports.

The Child Rescue Coalition has created the “Child Protection System” software. This tool has a growing database of more than a million hashed images and videos, which it uses to find computers that have downloaded them. The software can track I.P. addresses, which may be shared by people connected to the same Wi-Fi network, as well as individual devices. According to the Child Rescue Coalition, the system can follow devices even if their owners move or use virtual private networks (VPNs) to mask their I.P. addresses. Last year the organization expressed interest in partnering with social media platforms to combine resources and crack down on child pornography. Unfortunately, some oppose this because it would give social media companies access to an unregulated database of suspicious I.P. addresses. Thankfully, many law enforcement departments have partnered with the coalition and used this software; as the president of the Child Rescue Coalition said: “Our system is not open-and-shut evidence of a case. It’s for probable cause.”
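The core mechanism behind such a hash database can be sketched in a few lines. This is a simplified illustration with made-up data: it uses an exact cryptographic hash (SHA-256), which only catches byte-identical copies, whereas production systems such as PhotoDNA use perceptual hashes that also survive resizing and re-encoding.

```python
import hashlib

# Hypothetical database of hex digests of known illegal files,
# stored as a set for O(1) membership tests. In a real system this
# would be the law-enforcement-maintained hash list.
KNOWN_HASHES = {
    hashlib.sha256(b"known-bad-file-bytes").hexdigest(),
}

def matches_known_material(file_bytes: bytes) -> bool:
    """Return True if the file's SHA-256 digest is in the database."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_HASHES
```

Because only digests are stored and compared, partners can check uploads against the list without ever exchanging the underlying images, which is part of why such databases can be shared across organizations.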

The United States Department of Justice has published a “Citizen’s Guide to U.S. Federal Law on Child Pornography.” The first line on that page reads, “Images of child pornography are not protected under First Amendment rights, and are illegal contraband under federal law.” Federal jurisdiction commonly applies when the child pornography offense occurred in interstate or foreign commerce, and in today’s digital era, federal law almost always applies when the Internet is used to commit such offenses. The United States has implemented multiple laws that define child pornography and what constitutes a crime related to it.

Whose job is it to protect children from these predators? Should social media have to regulate this? Should parents be held responsible for contributing to the distribution of these media?

 

“Unfortunately, we’ve also seen a historic rise in the distribution of child pornography, in the number of images being shared online, and in the level of violence associated with child exploitation and sexual abuse crimes. Tragically, the only place we’ve seen a decrease is in the age of victims.

 This is – quite simply – unacceptable.”

-Attorney General Eric Holder Jr., speaking at the National Strategy Conference on Combating Child Exploitation in San Jose, California, May 19, 2011.

Should Social Media Be Used as a Sentencing Tool?

Mass Incarceration in the US – A Costly Issue

The United States has a costly over-incarceration issue. As of May 2021, the United States has the highest rate of incarceration in the world with 639 prisoners per 100,000 of the national population. New York State alone has more prisoners than the entire country of Canada. In 2016, the US Government spent over $88 billion on prisons, jails, parole, and probation systems. Not to mention the social cost of incarcerating nearly 1% of our entire adult population. Alternative sentences can provide a substitute for costly incarceration.
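The figures above can be sanity-checked with quick arithmetic. The population numbers below are rough assumptions for illustration, not official counts:

```python
# Back-of-the-envelope check of the incarceration figures above.
RATE_PER_100K = 639                # incarceration rate cited in the text
US_POPULATION = 331_000_000        # assumed total US population, ~2021
US_ADULTS = 258_000_000            # assumed adult (18+) population

prisoners = RATE_PER_100K / 100_000 * US_POPULATION
adult_share = prisoners / US_ADULTS

# prisoners comes out to roughly 2.1 million, and adult_share to
# roughly 0.8% of adults -- consistent with "nearly 1%".
```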


What Are Alternative Sentences?

Typically, punishment for a crime is imprisonment. Alternative sentences are sentences other than imprisonment, such as:

  • community service,
  • drug rehabilitation programs,
  • probation, and
  • mental health programs.

While many generalizations about alternative sentences cannot be made, as the results vary by program and location, alternative sentences can and do keep people out of the overcrowded, problematic prison system in the US.

Could Social Media Play a Part in Alternative Sentencing?

In June 2021, a tourist in Hawaii posted a video of herself on TikTok touching a monk seal. The video went viral, and copycats hopped on the trend of poking wildlife for views. Hawaiian people, outraged, called for enforcement action and local media outlets echoed their call. Eventually, the Hawaii Governor released a statement that people who messed with local wildlife would be “prosecuted to the fullest extent of the law.”


There are essentially three avenues of prosecution for interfering with wildlife: in federal court, state court, or civil court through the National Oceanic and Atmospheric Administration. Disturbing wildlife is a misdemeanor under federal law, but it’s a felony under state law, with a maximum penalty of five years in prison and a $10,000 fine. However, enforcement is unlikely, even after the Governor’s proclamation. Additionally, when enforcement does take place, it often happens out of the public eye. This imbalance of highly publicized crime and underpublicized enforcement led to a suggestion by Kauai Prosecuting Attorney Justin Kollar.

Kollar suggested sentencing criminals like the Hawaiian tourist to community service that would be posted on social media. Kollar looked to Hawaii’s environmental court as a potential model. Established in 2014 for the purpose of adjudicating environmental and natural resource violations, the environmental court has more sentencing tools at its disposal. For example, the court can sentence people to work with groups that do habitat restoration.

According to Kollar, requiring criminal tourists to take time out from their vacation to work with an environmental group — and possibly publicizing the consequence on social media — would not only be a more productive and just penalty, it would also create a positive and contrite image to spread across the internet. The violators would have an opportunity to become more educated and understand the harm they caused. Kollar wants people to learn from their mistakes, address the harm they caused, and take responsibility for their actions.

In an age when many crimes are visible on social media, what would be the pros and cons of using social media as a sentencing tool?

Some Pros and Cons of Using Social Media as a Sentencing Tool

In law school, we’re taught the theories of punishment, but not the consequences of punishment. While it’s important to think about the motivation for punishment, it’s equally, if not more, important to think about what happens because of punishment. In the case of using social media as a sentencing tool, there would likely be pros and cons.

One pro of using social media to publicize enforcement would be a rebalancing of the scale of crime v. enforcement publicity. This rebalance could help prevent vigilante justice from occurring when there is too big of a perceived gap between crime and enforcement. For example, when the TikToker posted her crime, she began to receive death threats. Many Hawaiians are fed up with their environment being exploited for financial profits. The non-enforcement and bold display of a wildlife crime led them to want to take matters into their own hands. In a situation like this, society does not benefit, the criminal does not learn from or take responsibility for their actions, and the victim is not helped.

An alternative sentence of wildlife-related community service publicized on social media could have benefited society because there is justice being done in a publicly known way that does not contribute to costly mass incarceration; helped the criminal learn from and take responsibility for their actions without being incarcerated; and, helped the victim, the environment, via the actual work done.

Additionally, this type of sentence falls into the category of restorative justice. Restorative Justice (RJ) is “a system of criminal justice which focuses on the rehabilitation of offenders through reconciliation with victims and the community at large.” The social media addition to an alternative sentence could provide the reconciliation with the “community at large” piece of the RJ puzzle. This would be a large pro, as RJ has been shown to lower recidivism rates and help victims.

While these pros are appealing, it is important to keep in mind that social media is a powerful tool that can facilitate far-reaching and lasting stigmatization. Before the age of social media and Google, a person’s criminal record could only be found in state-sponsored documents or small write-ups in a newspaper. As social scientists Sarah Lageson and Shadd Maruna put it, “although these records were ‘public,’ they often remained in practical obscurity due to access limitations.” Today, any indiscretion (or presumed and unproven indiscretion, in the case of online mug shots and police use of social media) can be readily found with a quick search. This can increase recidivism rates and make it harder for people with a criminal record to build relationships, find housing, and gain employment. Because stigmatization is part of punishment, a readily available criminal record produces punishments that do not fit many crimes. Using social media as a sentencing tool could make this stigmatization worse, a huge con.

Perhaps there is a middle ground. To protect people from long-term stigmatization, faces and other identifying features could be blurred prior to publication. Similarly, identifying information, like names, could be excluded from the posts. By keeping the perpetrators anonymous, the scale of crime v. enforcement publicity could be rebalanced, the community aspect of RJ could be accomplished, and harmful stigmatization could be avoided. To completely avoid the possibility of stigmatization via social media postings, the program coordinators could post adjacent content. For example, they could post a before and after of the service project, completely leaving out the violators, while still publicizing enforcement.

Any iteration of the idea to use social media as a sentencing tool should be studied intensely regarding its consequences related to society, the criminal, and the victim, as it is a new idea.

 

Do you think social media should be used as a sentencing tool?

The Alarming Side of YouTube

Social media has become an integrated part of daily life. From Facebook to Twitter, Instagram, and Snapchat to the latest addition, TikTok, social media has made its way into people’s lives and now holds a place alongside eating, sleeping, and exercising. There is no denying the dopamine hit you get from posting on Instagram or from endlessly scrolling, liking, sharing, and commenting. From checking your notifications while telling yourself, “Right, just five minutes,” to spending hours on social media, it is a mixed bag. While I find that being on social media is, to an extent, a way to relax and alleviate stress, I also believe social media’s influence on people’s lives should not cross a certain threshold.

We all like a good laugh, whether from people doing funny things on purpose or from people pranking others. Most individuals nowadays use some sort of social media platform to watch or make content. YouTube is one such platform. After Google, YouTube is the most visited website on the Internet: every day, about a billion hours of video are watched by people all over the world. I myself contribute to those billion hours.

Now imagine you are on YouTube. You start watching a famous YouTuber’s videos and realize a video is not only disturbing but also very offensive. You stop watching it, think it was horrible, and think no more of it. Some videos, however, have caused mass controversy across the Internet since the platform’s birth in 2005. Let us now explore the dark side of YouTube.

There is an industry that centers on pranks played on members of the public, and it is less about humor and more about shock value. There is nothing wrong with a harmless prank, but a prankster must consider how their actions are perceived by others; one wrong move and you could end up facing charges or a conviction.

Across the platform there are many creators of such prank videos, and not all of them have been well received by the public or by the creators’ fans. In one such incident, YouTube content creators Alan and Alex Stokes, who are known for their gag videos, pleaded guilty to charges stemming from fake bank robberies they staged.

The twins wore black clothes and ski masks and carried cash-filled duffel bags for a video in which they pretended to have robbed a bank. They then ordered an Uber; the driver, unaware of the prank, refused to drive them. An onlooker, believing the twins had robbed a bank and were attempting to carjack the vehicle, called the police. Officers arrived at the scene and held the driver at gunpoint until they determined it was a prank. The brothers were not charged and were let off with a warning. However, they pulled the same stunt at a university some four hours later and were arrested.

They were charged with one felony count of false imprisonment by violence, menace, fraud, or deceit and one misdemeanor count of falsely reporting an emergency. The charges carry a maximum penalty of five years in prison. “These were not pranks. These are crimes that could have resulted in someone getting seriously injured or even killed,” said Todd Spitzer, Orange County district attorney.

The brothers accepted a plea bargain from the judge: in return for a guilty plea, the felony count would be reduced to a misdemeanor, resulting in one year of probation, 160 hours of community service, and compensation. The plea was entered despite the prosecution’s position that tougher charges were warranted. The judge also warned the brothers, who have over 5 million YouTube subscribers, not to make such videos again.

Analyzing the scenario above, I agree with the district attorney. Making prank videos and racking up views should not come at the cost of inciting fear and panic in the community. The situation with the police could have escalated severely and led to a far more gruesome outcome. The twins were very lucky; the man filming a prank video in Tennessee in the next incident was not.

While filming a YouTube prank video, 20-year-old Timothy Wilks was shot dead in the parking lot of an Urban Air indoor trampoline park. David Starnes Jr. admitted to shooting Wilks when Wilks and an unnamed individual, wielding butcher knives, approached him and a group of people and lunged at them. David told the police that he shot in defense of himself and others.

Wilks’s friend said they were filming a robbery prank for their YouTube channel; the video was supposed to capture the terrified reactions of their prank victims. David was unaware of the prank and pulled out his gun to protect himself and others. No one has yet been charged in the incident.

The above incident is an example of how pranks can go horribly wrong and cause irreparable damage. It poses the question: whom do you blame, the 20-year-old man staging a very dangerous prank video, or the 23-year-old who fired his gun in response?

Monalisa Perez, a YouTuber from Minnesota, fatally shot her boyfriend, Pedro Ruiz, while attempting to film a stunt in which she fired a gun at him from 30 cm away while he held only a 1.5-inch-thick book for protection. Perez pleaded guilty to second-degree manslaughter and was sentenced to six months’ imprisonment.

Perez and Ruiz documented their everyday lives in Minnesota by posting prank videos on YouTube to gain views. Before the fatal stunt, Perez tweeted, “Me and Pedro are probably going to shoot one of the most dangerous videos ever. His idea, not mine.”

Perez had experimented beforehand and thought the hardcover encyclopedia would be enough to stop the bullet. She fired a .50-caliber Desert Eagle, an extremely powerful handgun; the bullet pierced the encyclopedia and fatally wounded Ruiz.

Perez was ordered to serve a 180-day jail term and 10 years of supervised probation, was banned for life from owning firearms, and may make no financial gain from the case. The sentence is below the minimum guidelines, but it was allowed on the grounds that the stunt was mostly Ruiz’s idea.

Dangerous pranks such as this one have left a man dead and a mother of two grieving for fatally shooting her partner.

In response to growing concerns over such trends and videos, YouTube has updated its policies on “harmful and dangerous” content and explicitly banned pranks and challenges that may cause immediate or lasting physical or emotional harm. The policies page lists three types of videos that are now prohibited:

  • challenges that encourage acts with an inherent risk of severe harm,
  • pranks that make victims believe they are in physical danger, and
  • pranks that cause emotional distress to children.

Prank videos may depict the dark side of how content creation can go wrong, but they are not the only ones. In 2017, YouTuber Logan Paul became a source of controversy after posting a video of himself in Aokigahara, a Japanese forest near the base of Mount Fuji. Aokigahara is dense, with lush trees and greenery, but it is infamous as the “suicide forest”: it is a frequent site of suicides and is also considered haunted.

Upon entering the forest, Paul came across a dead body hanging from a tree. His actions and the way he filmed the body are what caused controversy and outrage. The video has since been taken down from YouTube. Paul posted an apology video defending his actions, which did nothing to quell the anger online, and then a second video in which he could be seen tearing up on camera. Addressing the controversy, YouTube expressed condolences and stated that it prohibits shocking or disrespectful content. Paul lost the ability to make money on his videos through advertisements (a penalty known as demonetization) and was removed from the Google Preferred program, through which brands sell advertising to content creators on YouTube.

The consequences of Logan Paul’s actions did not end there. A production company, Planeless Pictures, is suing him, claiming that the Aokigahara video caused Google to end its relationship with the company and withhold a $3.5 million payment under a multimillion-dollar licensing agreement. Planeless Pictures is seeking that amount as well as additional damages and legal fees.

That is not all. YouTube has been filled with controversies that have resulted in lawsuits.

A YouTuber by the name of Kanghua Ren was fined $22,300 and sentenced to 15 months’ imprisonment for filming himself giving a homeless man an Oreo filled with toothpaste. He gave the man 20 euros and Oreo cookies whose cream filling had been replaced with toothpaste; the video shows the man vomiting after eating a cookie. In the video, Ren stated that although he had gone a bit far, the act would help clean the homeless person’s teeth. The court did not take this lightly; the judge stated that this was not an isolated act and that Ren had shown cruel behavior toward vulnerable victims.

These are some of the pranks and videos that have gained online notoriety. Many other videos have portrayed child abuse, followed the trend of eating Tide Pods, or shared anti-Semitic content and racist remarks. The most disturbing thing about these videos is that they are viewed not only by adults but also by children, and in my opinion they could influence young individuals.

YouTube is a diverse platform, home to millions of content creators. Since its inception it has served as a mode of entertainment and a source of income for many. From cat videos to intricate, detailed, and well-directed short films, YouTube has revolutionized video and content creation. As an avid viewer of many YouTube channels, I find that incidents like these give the platform a bad name. Proper policies and guidelines should be enacted and enforced, and, if necessary, government supervision should be exercised.

What Evidence is Real in a World of Digitally Altered Material?

Imagine you are prosecuting a child pornography case and have incriminating chats made through Facebook showing the Defendant coercing and soliciting sexually explicit material from minors.  Knowing that you will submit these chats as evidence in trial, you acquire a certificate from Facebook’s records custodian authenticating the documents.  The custodian provides information that confirms the times, accounts and users.  That should be enough, right?

Wrong.  Your strategy relies on the legal theory that chats made through a third-party provider fall into a hearsay exception known as the “business records exception.”  Federal Rule of Evidence 902(11) provides that “records of a regularly conducted activity” that fall into the hearsay exception under Rule 803(6)—more commonly known as the “business records exception”—are self-authenticating and may be authenticated by way of a certificate from the records custodian.  (Fed. R. Evid. 902(11); United States v. Browne, 834 F.3d 403 (3d Cir. 2016)).

Why does this certification fail to establish authenticity?  The Third Circuit answers that there must be additional outside (extrinsic) evidence establishing the relevance of the evidence.  (United States v. Browne, 834 F.3d 403 (3d Cir. 2016)).

Relevance is another legal concept where “its existence simply has some ‘tendency to make the existence of any fact that is of consequence to the determination of the action more probable or less probable than it would be without the evidence.’”  (United States v. Jones, 566 F.3d 353, 364 (3d Cir. 2009) (quoting Fed. R. Evid. 401)).  Put simply, the existence of this evidence has a material effect on the evaluation of an action.

In Browne, the Third Circuit says the business records exception is not enough because Facebook chats are fundamentally different from business records.  Business records are “supplied by systematic checking, by regularity and continuity which produce habits of precision, by actual experience of business in relying upon them, or by a duty to make an accurate record as part of a continuing job or occupation,” which results in records that can be relied upon as legitimate.

The issue here deals with authenticating the entirety of the chat – not just the timestamps or cached information.  The court delineates this distinction, saying “If the Government here had sought to authenticate only the timestamps on the Facebook chats, the fact that the chats took place between particular Facebook accounts, and similarly technical information verified by Facebook ‘in the course of a regularly conducted activity,’ the records might be more readily analogized to bank records or phone records conventionally authenticated and admitted under Rules 902(11) and 803(6).”

In contrast, Facebook chats are not authenticated based on confirmation of their substance, but instead on the user linked to that account.  Moreover, in this case, the Facebook records certification showed “alleged” activity between user accounts but not the actual identification of the person communicating, which the court found is not conclusive in determining authorship.

The policy concern is that such information is easily falsified: accounts may be created with a fake name and email address, or a person’s account may be hacked and operated by someone else.  As a result of the ruling in Browne, submitting chat logs made through a third party such as Facebook into evidence requires more than verification of technical data.  The Browne court describes the second step for evidence to be successfully admitted: there must be extrinsic (additional outside) evidence presented to show that the chat logs really occurred between certain people and that the content is consistent with the allegations.  (United States v. Browne, 834 F.3d 403 (3d Cir. 2016))

When there is enough extrinsic evidence, the “authentication challenge collapses under the veritable mountain of evidence linking [Defendant] and the incriminating chats.”  In the Browne case, there was enough of this outside evidence that the court found there was “abundant evidence linking [Defendant] and the testifying victims to the chats conducted… [and the] Facebook records were thus duly authenticated” under Federal Rule of Evidence 901(b)(1) in a traditional analysis.

The idea that extrinsic evidence must support authentication of evidence collected from third-party platforms is echoed in the Seventh Circuit decision United States v. Barber, 937 F.3d 965 (7th Cir. 2019).  Here, “this court has relied on evidence such as the presence of a nickname, date of birth, address, email address, and photos on someone’s Facebook page as circumstantial evidence that a page might belong to that person.”

The requirement for extrinsic evidence represents a shift in thinking from the original requirement that the government carries the burden of only ‘“produc[ing] evidence sufficient to support a finding’ that the account belonged to [Defendant] and the linked messages were actually sent and received by him.”  United States v. Barber, 937 F.3d 965 (7th Cir. 2019) citing Fed. R. Evid. 901(a), United States v. Lewisbey, 843 F.3d 653, 658 (7th Cir. 2016).  Here, “Facebook records must be authenticated through the ‘traditional standard’ of Rule 901.” United States v. Frazier, 443 F. Supp. 3d 885 (M.D. Tenn. 2020).

The bottom line is that Facebook cannot attest to the accuracy of the content of its chats and can only provide specific technical data.  This difference is further supported by a District Court ruling mandating traditional analysis under Rule 901 and not allowing a business hearsay exception, saying “Rule 803(6) is designed to capture records that are likely accurate and reliable in content, as demonstrated by the trustworthiness of the underlying sources of information and the process by which and purposes for which that information is recorded… This is no more sufficient to confirm the accuracy or reliability of the contents of the Facebook chats than a postal receipt would be to attest to the accuracy or reliability of the contents of the enclosed mailed letter.”  (United States v. Browne, 834 F.3d 403, 410 (3rd Cir. 2016), United States v. Frazier, 443 F. Supp. 3d 885 (M.D. Tenn. 2020)).

Evidence from social media is allowed under the business records exception in a select few circumstances.  For example, United States v. El Gammal, 831 F. App’x 539 (2d Cir. 2020) presents a case that does find authentication of Facebook’s message logs based on testimony from a records custodian.  However, there is an important distinction: the logs admitted came directly from a “deleted” output, where Facebook itself, rather than a person, created the record.  Similarly, the Tenth Circuit agreed that “spreadsheets fell under the business records exception and, alternatively, appeared to be machine-generated non-hearsay.”  United States v. Channon, 881 F.3d 806 (10th Cir. 2018).

What about photographs – are pictures taken from social media dealt with in the same way as chats when it comes to authentication?  Reviewing a lower court decision, the Sixth Circuit in United States v. Farrad, 895 F.3d 859 (6th Cir. 2018) found that “it was an error for the district court to deem the photographs self-authenticating business records.”  Here, there is a bar on using the business exception that is similar to that found in the authentication of chats, where photographs must also be supported by extrinsic evidence.

While not using the business exception to do so, the court in Farrad nevertheless found that social media photographs were admissible because it would be logically inconsistent to allow “physical photos that police stumble across lying on a sidewalk” while barring “electronic photos that police stumble across on Facebook.”  It is notable that the court does not address the ease with which photographs may be altered digitally, given that was a major concern voiced by the Browne court regarding alteration of digital text.

United States v. Vazquez-Soto, 939 F.3d 365 (1st Cir. 2019) further supports the idea that photographs found on social media must be authenticated traditionally.  Here, the court explains the authentication process: “The standard [the court] must apply in evaluating a[n] [item]’s authenticity is whether there is enough support in the record to warrant a reasonable person in determining that the evidence is what it purports to be.”  United States v. Vazquez-Soto, 939 F.3d 365 (1st Cir. 2019), quoting United States v. Blanchard, 867 F.3d 1, 6 (1st Cir. 2017) (internal quotation marks omitted); Fed. R. Evid. 901(a).  In other words, based on the totality of the evidence, including extrinsic evidence, do you believe the photograph is real?  Here, “what is at issue is only the authenticity of the photographs, not the Facebook page”: it does not necessarily matter who posted the photo, only what was depicted.

Against the backdrop of an alterable digital world, courts seek to put guardrails in place against falsified information.  The cases here represent the beginning of a foray into what measures can realistically be taken to protect ourselves from digital fabrications.

 

https://www.rulesofevidence.org/article-ix/rule-902/

https://www.rulesofevidence.org/article-viii/rule-803/

https://casetext.com/case/united-states-v-browne-12

https://www.courtlistener.com/opinion/1469601/united-states-v-jones/?order_by=dateFiled+desc&page=4

https://www.rulesofevidence.org/article-iv/rule-401/

https://www.rulesofevidence.org/article-ix/rule-901/

https://casetext.com/case/united-states-v-barber-103

https://casetext.com/case/united-states-v-lewisbey-4

https://casetext.com/case/united-states-v-frazier-175

https://casetext.com/case/united-states-v-el-gammal

https://casetext.com/case/united-states-v-channon-8

https://casetext.com/case/united-states-v-farrad

https://casetext.com/case/united-states-v-vazquez-soto-1?q=United%20States%20v.%20Vazquez-Soto,%20939%20F.3d%20365%20(1st%20Cir.%202019)&PHONE_NUMBER_GROUP=P&sort=relevance&p=1&type=case&tab=keyword&jxs=

Don’t Throw Out the Digital Baby with the Cyber Bathwater: The Rest of the Story

This article is in response to “Is Cyberbullying the Newest Form of Police Brutality?” which discussed law enforcement’s use of social media to apprehend people. The article raised a provocative topic, as the number of comments it drew shows.

I believe that discussion is healthy for society; people are entitled to their feelings and to express their beliefs. Each person has their own unique life experiences that provide a basis for their beliefs and perspectives on issues. I enjoy discussing a topic with someone because I learn about their experiences and new facts that broaden my knowledge. Developing new relationships and connections is so important. Relationships and new knowledge may change perspectives or at least add to understanding each other better. So, I ask readers to join the discussion.

My perspectives were shaped in many ways. I grew up hearing Paul Harvey’s radio broadcast “The Rest of the Story.” His radio segment provided more information on a topic than the brief news headline may have provided. He did not imply that the original story was inaccurate, just that other aspects were not covered. In his memory, I will attempt to do the same by providing you with more information on law enforcement’s use of social media. 

“Is Cyberbullying the Newest Form of Police Brutality?”

The article title served its purpose by grabbing our attention. Neither cyberbullying nor police brutality is acceptable. Cyberbullying is typically envisioned as teenage bullying taking place over the internet. The U.S. Department of Health and Human Services states that “Cyberbullying includes sending, posting, or sharing negative, harmful, false, or mean content about someone else. It can include sharing personal or private information about someone else causing embarrassment or humiliation.” Similarly, police brutality occurs when law enforcement (“LE”) officers use illegal and excessive force in a situation that is unreasonable, potentially resulting in a civil rights violation or a criminal prosecution.

While the article is accurate that 76% of the surveyed police departments use social media for crime-solving tips, the rest of the story is that even more departments use social media for other purposes: 91% use it to notify the public of safety concerns, 89% for community outreach and citizen engagement, and 86% for public relations and reputation management. Broad restrictions should not be implemented; they would negate all of these positive community interactions that increase transparency.

Transparency 

In an era when the public is demanding more transparency from LE agencies across the country, how is the disclosure of information about the public that is held by the government considered “cyberbullying” or “police brutality”? Local, state, and federal governments are subject to Freedom of Information Act laws requiring agencies to provide information to the public on their websites or to release documents within days of a request, or else face civil liability.

New Jersey Open Public Records

While the New Jersey Supreme Court has not decided whether arrest photographs are public, the New Jersey Government Records Council (“GRC”) decided in Melton v. City of Camden, GRC 2011-233 (2013) that arrest photographs are not public records under the NJ Open Public Records Act (“OPRA”) because of Governor Whitman’s Executive Order 69, which exempts fingerprint cards, plates, photographs, and similar criminal investigation records from public disclosure. It should be noted that GRC decisions are not precedential and are therefore not binding on any court.

However, under OPRA, specifically 47:1A-3 Access to Records of Investigation in Progress, specific arrest information is public information and must be disclosed to the public within 24 hours of a request, including the:

  • Date, time, location, type of crime, and type of weapon;
  • Defendant’s name, age, residence, occupation, marital status, and similar background information;
  • Identity of the complaining party;
  • Text of any charges or indictment unless sealed;
  • Identity of the investigating and arresting officer and agency, and the length of the investigation;
  • Time, location, and circumstances of the arrest (resistance, pursuit, use of weapons); and
  • Bail information.

For years, even before Melton, I believed that an arrestee’s photograph should not be released to the public. As a police chief, I refused numerous media requests for arrestee photographs, protecting arrestees’ rights and honoring the presumption of innocence until proven guilty. Even though a person has been arrested, he or she has not yet received due process in court.

New York’s Open Public Records

In New York, under the Freedom of Information Law (“FOIL”), Public Officers Law, Article 6, §89(2)(b)(viii) (General provisions relating to access to records; certain cases), the disclosure of LE arrest photographs constitutes an unwarranted invasion of an individual’s personal privacy unless the public release would serve a specific LE purpose and the disclosure is not prohibited by law.

California’s Open Public Records

Under the California Public Records Act (“CPRA”), a person has the statutory right to be provided or inspect public records, unless a record is exempt from disclosure. Arrest photographs are included in arrest records along with other personal information, including the suspect’s full name, date of birth, sex, physical characteristics, occupation, time of arrest, charges, bail information, any outstanding warrants, and parole or probation holds.

Therefore, under New York and California law, the blanket posting of arrest photographs is already prohibited.

Safety and Public Information

Recently, in Ams. for Prosperity Found. v. Bonta, the compelled donor disclosure case, the Court invalidated the law on First Amendment grounds, and Justice Alito’s concurring opinion briefly addressed the parties’ personal safety concerns: supporters had been subjected to bomb threats, protests, stalking, and physical violence. He cited Doe v. Reed, which upheld disclosures containing home addresses under Washington’s Public Records Act despite the growing risk posed by anyone accessing the information with a computer.

Satisfied Warrant

I am not condoning the Manhattan Beach Police Department’s error of posting information about a satisfied warrant, along with a photograph, in its “Wanted Wednesday” series in 2020. However, the disclosed information may have been public information under the CPRA then, and may be even now. On July 23, 2021, Governor Newsom signed a law amending Section 13665 of the California Penal Code to prohibit LE agencies from posting photographs of an arrestee accused of a non-violent crime on social media unless:

  • The suspect is a fugitive or an imminent threat, and disseminating the arrestee’s image will assist in the apprehension;
  • There is an exigent circumstance and an urgent LE interest; or
  • A judge orders the release or dissemination of the suspect’s image based on a finding that the release or dissemination is in furtherance of a legitimate LE interest.

The critical error was that the posting stated the warrant was active when it was not. A civil remedy exists and was used by the party to reach a settlement for damages. Additionally, it could be argued that the agency’s actions were not the proximate cause when vigilantes caused harm.

Scope of Influence

LE’s reliance on the public’s help did not start with social media or internet websites. The article pointed out that “Wanted Wednesday” had a mostly local following of 13,600. This raises the question of whether there is much difference between that and the famous “Wanted” posters of the Wild West, or the “Top 10 Most Wanted” posters the Federal Bureau of Investigation (“FBI”) used to distribute to post offices, police stations, and businesses to locate fugitives. It can be argued that such exposure was strictly localized. However, the weekly TV show America’s Most Wanted, made famous by John Walsh, aired from 1988 to 2013, highlighting fugitive cases nationally. The show claims it helped capture over 1,000 criminals through its tip line. National media publicity can be counterproductive, though, by generating so many false leads that credible ones are obscured.

The FBI website contains pages for Wanted People, Missing People, and Seeking Information on crimes. “CAPTURED” labels are added to photographs showing the results of the agency’s efforts. Local LE agencies should follow FBI practices. I would agree with the article that social media and websites should be updated; however, I don’t agree that the information must be removed because it is available elsewhere on the internet.

Time

Vernon Geberth, the leading police homicide investigation instructor, believes time is an investigator’s worst enemy.  Eighty-five percent of abducted children are killed within the first five hours; almost all are killed within the first twenty-four hours. Time is also critical because, for each hour that passes, the distance a suspect’s vehicle can travel expands by seventy-five miles in either direction. In five hours, the search area can become larger than 17,000 square miles. Like Amber Alerts, social media can be used to quickly transmit information to people across the country in time-sensitive cases.
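As a rough back-of-the-envelope check on those figures, the expanding search radius can be sketched as simple circle geometry. This is a simplifying assumption of my own (real road networks are not circular), not a calculation from the source:

```python
import math

SPEED_MPH = 75  # miles traveled "in either direction" per hour, per the figure above

def search_radius(hours: float) -> float:
    """Maximum distance a vehicle could be from the scene after `hours`."""
    return SPEED_MPH * hours

def search_area(hours: float) -> float:
    """Area of the circular region the vehicle could be anywhere within."""
    return math.pi * search_radius(hours) ** 2

for h in (1, 3, 5):
    print(f"{h} hour(s): radius {search_radius(h):>4.0f} mi, "
          f"area {search_area(h):>9,.0f} sq mi")
```

Under this simple model, a 75-mile radius already covers more than 17,000 square miles after a single hour, which only underscores the point that every hour matters.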

Live-Streaming Drunk Driving Leads to an Arrest

Whitney Beall, a Florida woman, used a live-streaming app to show herself drinking at a bar and then getting into her vehicle. Viewers dialed 911, and a tech-savvy officer opened the app, determined her location, and pulled her over. She was arrested after failing a field sobriety test. After pleading guilty to driving under the influence, she was sentenced to 10 days of weekend work release, 150 hours of community service, probation, and a license suspension. In 2019, 10,142 lives were lost in alcohol-impaired driving crashes.

Family Advocating

Social media is not limited to LE. It also provides a platform for victims’ families to keep attention on their cases. The father of a seventeen-year-old created a series of Facebook Live videos about a 2011 murder, resulting in the arrest of Charles Garron, who was sentenced to a fifty-year prison term.

Instagram Selfies with Drugs, Money and Stolen Guns 

Police in Palm Beach County charged a nineteen-year-old man with 142 felony charges, including possession of a weapon by a convicted felon, while investigating burglaries and jewel thefts in senior citizen communities. An officer found his Instagram account with incriminating photographs. A search warrant was executed, seizing stolen firearms and $250,000 in stolen property from over forty burglaries.

Bank Robbery Selfies


Police received a tip and located a 2015 social media posting by John E. Mogan II showing himself with wads of cash. He was charged with robbing an Ashville, Ohio bank, pled guilty, and was sentenced to three years in prison. According to news reports, Mogan had previously served prison time for another bank robbery.

Food Post Becomes the Smoking Gun

LE used Instagram to identify an identity thief who posted photographs of his dinner with a confidential informant (“CI”) at a high-end steakhouse. The man claimed he had 700,000 stolen identities and provided the CI a flash drive containing some of them. Agents linked the flash drive to a “Troy Maye,” whom the CI identified from Maye’s profile photograph. Authorities executed a search warrant on his residence and located flash drives containing the personal identifying information of thousands of identity theft victims. Nathaniel Troy Maye, a 44-year-old New York resident, was sentenced to sixty-six months in federal prison after pleading guilty to aggravated identity theft.

 

Wanted Man Turns Himself in After Facebook Challenge With Donuts

A person began trolling Redford Township Police during a Facebook Live community update. He turned out to be a 21-year-old wanted on a probation violation for leaving the scene of a DWI collision. When asked to turn himself in, he challenged the department: if the post got 1,000 shares, he would bring in donuts. The department took the challenge. The post went viral, reaching that mark within an hour and acquiring over 4,000 shares. He kept his word and appeared with a dozen donuts. He faced 39 days in jail and had other outstanding warrants.

The examples in this article were readily available on the internet and on multiple news websites, along with photographs.

Under state Freedom of Information laws, the public has a statutory right to know what enforcement actions LE is taking. Likewise, the media exercises its First Amendment rights daily across the country when publishing news, and cyber journalists are entitled to the same information when publishing on the internet and social media. Traditional news organizations have adapted to online news to keep their share of the news market, and LE agencies now live-stream press conferences to communicate directly with the communities they serve.

Therefore, the positive uses of social media by LE should not be thrown out like the bathwater; legal remedies exist when damages are caused.

“And now you know…the rest of the story.”

How Defamation and Minor Protection Laws Ultimately Shaped the Internet


The Communications Decency Act (CDA) was originally enacted with the intention of shielding minors from indecent and obscene online material. Despite those origins, Section 230 of the CDA is now commonly used as a broad legal safeguard for social media platforms against liability for content posted on their sites by third parties. Interestingly, the reasoning behind this safeguard arises both from defamation common law and from constitutional free speech law. As the internet has grown, however, this legal safeguard has drawn increasing criticism. But is the legislation actually undesirable? Many would argue it is not, for Section 230 contains “the 26 words that created the internet.”

 

Origin of the Communications Decency Act

The CDA was introduced and enacted as an attempt to shield minors from obscene or indecent content online. Although parts of the Act were later struck down for First Amendment free speech violations, the Court left Section 230 intact. The creation of Section 230 was influenced by two landmark defamation decisions.

The first case, decided in 1991, involved an internet service that hosted around 150 online forums. A claim was brought against the internet provider when a columnist on one of the forums posted a defamatory comment about his competitor. The competitor sued the online distributor for the published defamation. The court categorized the internet service provider as a distributor because it did not review any forum content before it was posted to the site. As a distributor, it bore no legal liability, and the case was dismissed.

 

Distributor Liability

Distributor liability refers to the limited legal consequences that a distributor is exposed to for defamation. A common example of a distributor is a bookstore or library. The theory behind distributor liability is that it would be impossible for distributors to moderate and censor every piece of content they disperse because of the sheer volume involved and the impossibility of knowing whether something is false.

The second case that influenced the creation of Section 230 was Stratton Oakmont, Inc. v. Prodigy Servs. Co., in which the court used publisher liability theory to find the internet provider liable for third-party defamatory postings published on its site.  The court deemed the website a publisher because it moderated and deleted certain posts, regardless of the fact that there were far too many postings a day to review each one.

 

Publisher Liability

Under common law principles, a person who publishes a third party’s defamatory statement bears the same legal responsibility as the creator of that statement. This liability is often referred to as “publisher liability,” and it is based on the theory that a publisher has the knowledge, opportunity, and ability to exercise control over the publication. For example, a newspaper publisher can face legal consequences for the content printed within its pages. The Stratton Oakmont decision was significant because it meant that if a website attempted to moderate any posts, it would be held liable for all posts.

 

Section 230’s Creation

In response to the Stratton Oakmont case and the ambiguous court decisions regarding internet service providers’ liability, members of Congress introduced an amendment to the CDA that later became Section 230. The amendment was specifically introduced and passed with the goal of encouraging the development of unregulated, free speech online by relieving internet providers of liability for third-party content.

 

Text of the Act- Subsection (c)(1) 

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

 Section 230 further provides that…

“No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.”

The language above shields providers of interactive computer services from legal consequences arising from content that others post on their forums. Courts have interpreted this subsection as providing broad immunity to online platforms from suits over the content of third parties. Because of this, Section 230 has become the principal legal safeguard against lawsuits over a site’s content.

 

The Good

  •  Section 230 can be viewed as being one of the most important pieces of legislation that protects free speech online. One of the unique aspects of this legislation is that it essentially extends free speech protection, applying it to private, non-governmental companies.
  • Without CDA 230, the internet would be a very different place. This section influenced some of the internet’s most distinctive characteristics. The internet promotes free speech and offers the ability for worldwide connectivity.
  • CDA 230 does not fully eliminate liability or court remedies for victims of online defamation. Rather, it makes only the creator of the content liable for their speech, instead of both the creator and the publisher.

 

 

The Bad

  •  Because of the legal protections section 230 provides, social media networks have less of an incentive to regulate false or deceptive posts. Deceptive online posts can have an enormous impact on society. False posts have the ability to alter election results, or lead to dangerous misinformation campaigns, like the QAnon conspiracy theory, and the anti-vaccination movement.
  • Section 230 is twenty-five years old, and has not been updated to match the internet’s extensive growth.
  • Big Tech companies have been left largely unregulated regarding their online marketplaces.

 

 The Future of 230

While Section 230 is still successfully invoked by social media platforms, concerns over the aging legislation have mounted. Just recently, Justice Thomas, famously the quietest member of the Court, wrote a concurring opinion articulating his view that the government should regulate content providers as common carriers, like utility companies. What implications could that have for the internet? With the growing criticism surrounding Section 230, will Congress finally attempt to fix the legislation? If not, will the Supreme Court be left to tackle the problem itself?

Advertising in the Cloud

Thanks to social media, advertising to a broad range of people across physical and man-made borders has never been easier. Social media has transformed how people and businesses interact throughout the world. In just a few moments, a marketer can create a post advertising their product halfway across the world and almost everywhere in between. Not only that, but Susan, a charming cat lady in west London, can send her friend Linda, who is visiting her son in Costa Rica, an advertisement she saw for sunglasses she thinks Linda might like. The data collected by social media sites allows marketers to target specific groups of people with their advertisements. For example, if Susan were part of a few Facebook cat groups, she would undoubtedly receive more cat tower and toy advertisements than the average person.

 

Advertising on social media also allows local stores and venues to reach their communities by targeting groups of people in the local area. New jobs are being created in this space: young entrepreneurs are selling their social media skills to help small business owners create an online presence. Social media has also transformed the way stores advertise; no longer must a store rely solely on a posterboard or a scripted advertisement. Individuals with a large enough following on social media are sought out by companies to “review” or test their products for free.

Social media has transformed and expanded the marketplace exponentially. Who we can reach in the world, who we can market to and sell to has expanded beyond physical barriers. With these changes, and newfound capabilities through technology, comes a new legal frontier.

Today, most major brands and companies have their own social media accounts. Building a store’s “online presence” and promoting brand awareness have become priorities for many marketing departments. According to the Internet Advertising Revenue Report: Full Year 2019 Results & Q1 2020 Revenues, the Interactive Advertising Bureau, an industry trade association, and the research firm eMarketer estimate that U.S. social media advertising revenue was roughly $36 billion in 2019, approximately 30% of all digital advertising revenue, and they expected it to increase to $43 billion in 2020.

The Pew Research Center estimated that in 2019, 72% of U.S. adults, or about 184 million people, used at least one social media site, based on the results of a series of surveys.

As companies and people increasingly utilize these tools, what are the legal implications?

This area of law is growing quickly. Advertisers can now reach their consumers directly in an instant, marketing their products at comparable prices. Federal agencies, including the Federal Trade Commission (FTC), have expanded their enforcement actions in this area. Some examples:

  • The Securities and Exchange Commission’s Regulation Fair Disclosure addresses “the selective disclosure of information by publicly traded companies and other issuers, and the SEC has clarified that disseminating information through social media outlets like Facebook and Twitter is allowed so long as investors have been alerted about which social media will be used to disseminate such information.”
  • The National Labor Relations Act: “While crafting an effective social media policy regarding who can post for a company or what is acceptable content to post relating to the company is important, companies need to ensure that the policy is not overly broad or can be interpreted as limiting employees’ rights related to protected concerted activity.”
  • The FDA: “Even on social media platforms, businesses running promotions or advertising online have to be careful not to run afoul of FDA disclosure requirements.”

According to the ABA there are two basic principles in advertising law which apply to any media: 

  1. Advertisers must have a reasonable basis to substantiate claims made; and
  2.  If disclosure is required to prevent an ad from being misleading, such disclosure must appear in a clear and conspicuous manner.

Advertisements directed at children are subject to more specific regulation under the Children’s Online Privacy Protection Act (COPPA). This act gives parents control over the information collected from their children and requires operators to obtain verifiable parental consent.

The Future legality of our Data 

Data brokers are companies that collect information about you and sell that data to other companies or individuals. This information can include everything from family birthdays, addresses, contacts, jobs, education, hobbies, interests, and life events to health conditions. Data brokers are currently legal in most states; California and Vermont have enacted laws requiring data brokers to register their operations with the state. Who owns your data? Should you? Should the sites you create the data on? Should companies be free to sell it? Will states take this issue in different directions? If so, what would the implications be for the companies and sites that must keep up?

Facebook’s market capitalization stands at $450 billion.

While there is uncertainty regarding this area of law, it is certain that it is new, expanding and will require much debate. 

According to Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media, “Collecting user data allows operators to offer different advertisements based on its potential relevance to different users.”  The data collected by social media companies enables them to build complex strategies and sell advertising “space” targeting specific user groups to companies, organizations, and political campaigns (How Does Facebook Make Money). The capabilities here seem endless: “Social media operators place ad spaces in a marketplace that runs an instantaneous auction with advertisers that can place automated bids.” With the ever-expanding possibilities of social media comes a growing legal frontier.

Removing Content 

 Section 230, a provision of the 1996 Communications Decency Act, states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230). This act shields social media companies from liability for content posted by their users and allows them to remove lawful but objectionable posts.

One legal issue arising here is that advertisements are being taken down by content-monitoring algorithms. According to a Congressional Research Service report, during the COVID-19 pandemic social media companies relied more heavily on automated systems to monitor content. These systems can review large volumes of content at a time; however, they mistakenly removed some content. “Facebook’s automated systems have reportedly removed ads from small businesses, mistakenly identifying them as content that violates its policies and causing the business to lose money during the appeals process” (Facebook’s AI Mistakenly Bans Ads for Struggling Businesses). This has affected a wide range of small businesses, according to Facebook’s community standards transparency enforcement report. According to that same report, “In 2019, Facebook restored 23% of the 76 million appeals it received, and restored an additional 284 million pieces of content without an appeal—about 2% of the content that it took action on for violating its policies.”
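To put those reported percentages in perspective, here is a quick back-of-the-envelope calculation using only the figures quoted from the transparency report; the implied totals are my own arithmetic, not numbers from the report itself:

```python
# Figures quoted from Facebook's 2019 transparency report, per the passage above.
appeals_received = 76_000_000
restored_via_appeal = 0.23 * appeals_received  # 23% of appeals restored

restored_without_appeal = 284_000_000  # restored proactively, without an appeal
# 284 million was said to be "about 2%" of all content actioned, which
# implies a rough total volume of enforcement actions:
implied_total_actions = restored_without_appeal / 0.02

print(f"Restored via appeal:     {restored_via_appeal:,.0f}")
print(f"Restored without appeal: {restored_without_appeal:,}")
print(f"Implied total actions:   {implied_total_actions:,.0f}")
```

Taken at face value, that is roughly 17.5 million posts restored on appeal and an implied enforcement volume in the low tens of billions of items, which gives a sense of why automated moderation errors touch so many small businesses.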

 

Is Cyberbullying the Newest Form of Police Brutality?

Police departments across the country are calling keyboard warriors into action to help them solve crimes…but at what cost?

In a survey of 539 police departments in the U.S., 76% of departments said that they used their social media accounts to solicit tips on crimes. Departments post “arrested” photos to celebrate arrests, surveillance footage for suspect identification, and some even post themed wanted posters, like the Harford County Sheriff’s Office.

The process for using social media as an investigative tool is dangerously simple and the consequences can be brutal. A detective thinks posting on social media might help an investigation, so the department posts a video or picture asking for information. The community, armed with full names, addresses, and other personal information, responds with some tips and a lot of judgmental, threatening, and bigoted comments. Most police departments have no policy for removing posts after information has been gathered or cases are closed, even if the highlighted person is found to be innocent. A majority of people who are arrested are not even convicted of a crime.

Law enforcement’s use of social media in this way threatens the presumption of innocence, creates a culture of public humiliation, and often results in a comment section of bigoted and threatening comments.

On February 26, 2020, the Manhattan Beach Police Department posted a mugshot of Matthew Jacques on its Facebook and Instagram pages for its “Wanted Wednesday” social media series. The pages have 4,500 and 13,600 followers, respectively, most of them local. The post equated Matthew to a fugitive, and commenters responded publicly with information about where he worked. Matthew tried to call off work out of fear of a citizen’s arrest. The fear turned out to be warranted when two strangers came to find him at his workplace. Matthew eventually lost his job because he was too afraid to return to work.

You may be thinking this is not a big deal. This guy was probably wanted for something really bad and the police needed help. After all, the post said the police had a warrant. Think again.

There was no active warrant for Matthew at the time, his only (already resolved) warrant came from taking too long to schedule remedial classes for a 2017 DUI. Matthew was publicly humiliated by the local police department. The department even refused to remove the social media posts after being notified of the truth. The result?

Matthew filed a complaint against the department for defamation (as well as libel per se and false light invasion of privacy). Typically, defamation requires the plaintiff to show:

1) a false statement purporting to be fact; 2) publication or communication of that statement to a third person; 3) fault amounting to at least negligence; and 4) damages, or some harm caused to the person or entity who is the subject of the statement.

Here, the department made a false statement – that there was a warrant. They published it on their social media, satisfying the second element. They did not check readily available public records that showed Matthew did not have a warrant. Finally, Matthew lived in fear and lost his job. Clearly, he was harmed.

The police department claimed its postings were protected by the California Constitution, governmental immunity, and the 1st Amendment. Fortunately, the court denied the department’s anti-SLAPP motion. Over a year after the posts went up, the department took them down and settled the lawsuit with Matthew.

Some may think that Matthew’s case is an anomaly and that, usually, the negative attention is warranted, perhaps even socially beneficial, because it further disincentivizes criminal activity through humiliation and social stigma. However, most arrests don’t result in convictions, so many of the police’s cyberbullying targets are likely innocent. Even for the guilty, leaving these posts up can raise the barrier to societal re-entry, which can increase recidivism rates: a negative digital record makes finding jobs and housing more difficult. Meanwhile, many commenters assume the highlighted individual’s guilt and take to their keyboards to shame them.

Here’s one example of a post and comment section from the Toledo Police Department Facebook page:

Unless departments change their social media use policies, they will continue to face defamation lawsuits and continue to further the degradation of the presumption of innocence.

Police departments should discontinue the use of social media in the humiliating ways described above. At the very least, they should consider using this tactic only for violent, felonious crimes. Some departments have already changed their policies.

The San Francisco Police Department has stopped posting mugshots for criminal suspects on social media. According to Criminal Defense Attorney Mark Reichel, “The decision was made in consultation with the San Francisco Public Defender’s Office who argued that the practice of posting mugshots online had the potential to taint criminal trials and follow accused individuals long after any debt to society is paid.” For a discussion of some of the issues social media presents to maintaining a fair trial, see Social Media, Venue and the Right to a Fair Trial.

Do you think police departments should reconsider their social media policies?

Is social media promoting or curbing Asian hate?

The COVID-19 pandemic has caused our lives to twist and turn in many unexpected ways. Because the virus was first identified in China, the Asian population has taken the hardest hit, with a significant increase in hate crimes against the Asian community in the real world as well as the cyber world. With billions of internet users, the impact created online, and felt offline, is massive. Social media can create bias, and social media has the power to remedy bias. The question becomes: which side of the scale is it currently tipping toward? Is the internet making social network users more vulnerable to manipulation? Are hatred and bias “contagious” through cyber means? Or, on the contrary, is social media remedying the bias that people have created through the internet?

Section 230 of the Communications Decency Act governs much of the cyber world. It essentially provides legal immunity to providers of interactive computer services such as TikTok, Facebook, Instagram, and Snapchat. The Act states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In other words, posts and comments that appear on these platforms carry no legal ramifications for the tech companies that host them. Hence, do these companies have any incentive to regulate what is posted on their websites? With the current wave of Asian hate, will it snowball into a far larger problem if social media platforms fail to step in? On the other hand, if these companies do elect to step in, to what extent can they regulate or supervise?

The hatred and bias sparked by the pandemic have not been limited to the real world. Asian Americans have reported the biggest increase in serious incidents of online hate and harassment throughout this crisis. Many were verbally attacked or insulted with racist and xenophobic slurs merely because they have Asian last names or look Asian. According to a new survey shared exclusively with USA TODAY, there was an 11% increase over the prior year in sexual harassment, stalking, physical threats, and other incidents reported by Asian Americans, many of them occurring on online social media platforms. According to findings by the Center for the Study of Hate and Extremism at California State University, hate crimes against Asian Americans rose 149% from 2019 to 2020. That is 149% in a single year. In addition, L1ght, an organization that uses AI to detect internet abuse, reported a 900% increase in hate speech on Twitter since the start of the pandemic. This may be just the tip of the iceberg, as many hate crime incidents likely go unreported. As you may recall, former President Trump publicly referred to the COVID-19 coronavirus as the “Chinese Virus,” which led to a record-breaking level of brutal online harassment against Asian Americans and gave rise to similar remarks such as “Kung Flu” and “Wuhan Virus.” Social media users began using hashtags of the like; the hashtag “#ChineseVirus” alone has been used over 68,000 times on Instagram.

We must not forget that the real world and the cyber world are interconnected. Ideas consumed online can significantly shape our offline actions, sometimes leading to violence. Last week, I had the privilege of interviewing New York Police Department Lieutenant Mike Wang, who is in charge of the NYPD’s Asian Hate Crimes Task Force in Brooklyn. He expressed his concerns about the Asian community being attacked, seniors in particular. Lieutenant Wang said during the interview: “It’s just emotionally difficult and heartbreaking. New York Police Department is definitely taking unprecedented measures to combat these crimes. These incidents cannot be overlooked.” Most of these incidents were unprovoked. Examples include an elderly Thai immigrant who died after being shoved to the ground, a Filipino-American slashed in the face with a box cutter and left with a large permanent scar, a Chinese woman slapped and then set on fire, and six Asian Americans brutally shot to death in a spa one night. Wang noted that crimes against Asian Americans are nothing new; they have existed for quite some time. However, the rage and frustration of the COVID-19 pandemic has fueled this fire to an uncontrollable level. Wang encourages citizens to report crimes in general, not just hate crimes, as we need to be more vocal. You can read more about hate crimes and bias on the city’s website.

From verbal harassment to physical assaults, thousands of cases have been reported since the pandemic started. These are typically hate crimes: offenders believe that the Asian population should be blamed for the spread of the virus. People’s daily interactions online may play an important role here. Almost everyone in our country uses some sort of social network, and the more hatred and bias people see online, the more likely they are to exhibit violence in real life. Why? Because people come to think such behavior is acceptable when many others are doing it. Accountability is rarely an issue through social channels; at most, the user’s post is removed or the account gets suspended. It is therefore questionable whether the tech companies are doing enough to address these issues. What are the social media giants’ policies for hateful behavior in the cyber world? Twitter, for instance, has implemented a policy on hate speech that prohibits accounts whose primary purpose is to incite harm toward others, and it reserves the discretion to remove inappropriate content or suspend users who violate the policy. You can read more about their Hateful Conduct Policy on their website. Other social media platforms such as Facebook, TikTok, and YouTube all have similar policies in place to address hateful behavior, violent threats, and harassment; however, are they sufficient? According to the CEO of the Anti-Defamation League, online users continue to experience strongly hateful comments despite the social network companies’ claims that they are taking things seriously. Facebook and YouTube still allow users to use the racially insensitive term “Kung Flu,” while TikTok has prohibited it. The comics artist Ethan Van Sciver joked about killing Chinese people in one of his videos, later claiming it was “facetious sarcasm”; YouTube merely removed the video, stating that it violated the platform’s hate speech policy. As I previously mentioned, accountability on these social networks is minimal.

Social networks have definitely helped spread the news, keeping everyone in the country informed about the horrible incidents that happen on a regular basis. While social networks can spread the virus of hatred and bias online, they can also raise awareness and promote positivity. As Asian hate crimes spike, public figures and celebrities are taking a stand in this battle. Allure magazine’s editor-in-chief Michelle Lee and designer Phillip Lim are among them: they have posted videos on Instagram sharing their own experiences of racism in an effort to raise awareness, using the hashtag #StopAsianHate in their posts. On March 20, 2021, “Killing Eve” star Sandra Oh joined a “Stop Asian Hate” protest in Pittsburgh. She said she is “proud to be Asian” while giving a powerful speech urging people to fight against racism and hatred toward the Asian community. The video of her speech went viral in a single day and has been viewed more than ninety-three thousand times on YouTube since. I have to say that our generation is not afraid to speak up about the hate and injustice we face in our society today. This generation is taking it upon itself to protest racism instead of relying on authorities to recognize the threats and implement policy changes. This is how #StopAAPIHate came about. The hashtag stands for “Stop Asian American and Pacific Islander Hate.” Stop AAPI Hate is a nonprofit organization that tracks incidents of hate and discrimination against Asian Americans and Pacific Islanders in the United States, using social media to bring awareness, education, and resources to the Asian community and its allies. Stop AAPI Hate has also utilized social networks like Instagram to organize support groups, provide aid, and pressure those in power to act.
The following influential members of the AAPI community are vocalizing their concerns and beliefs: Christine Chiu, “The Bling Empire” star, producer, and entrepreneur; Chriselle Lim, digital influencer, content creator, and entrepreneur; Tina Craig, founder and CEO of U Beauty; Daniel Martin, makeup artist and global director of Artistry & Education at Tatcha; Yu Tsai, celebrity and fashion photographer and host; Sarah Lee and Christine Chang, co-founders and co-CEOs of Glow Recipe; Aimee Song, entrepreneur and digital influencer; Samuel Hyun, chairman of the Massachusetts Asian American Commission; Daniel Nguyen, actor; Mai Quynh, celebrity makeup artist; Ann McFerran, founder and CEO of Glamnetic; Nadya Okamoto, founder of August; Sharon Pak, founder of INH; Sonja Rasula, founder of Unique Markets; and Candice Kumai, writer, journalist, director, and best-selling author. The list could go on, but the point these influential speakers make is that taking things to social media is not just about holding people or companies accountable; it is about creating meaningful change in our society.

The internet is more powerful than we think. It is dangerous to allow individuals to attack or harass others, even through a screen. I understand that social media platforms cannot blatantly censor whatever content they deem inappropriate, as doing so may raise First Amendment concerns for their users; however, there has to be more that they can do, perhaps by creating more rigorous policies to combat hate speech. If platforms could tie a user’s identity to his or her real-life credentials, it might curb the tendencies of potential or repeat offenders. The question is: how do you draw the line between freedom of speech and social order?