States are ready to challenge Section 230

On January 8, 2021, Twitter permanently suspended @realDonaldTrump. The decision followed an initial warning to the then-president and conformed to Twitter’s published standards as defined in its public interest framework. The day before, Facebook (now Meta) had restricted President Trump’s ability to post content on Facebook or Instagram. Both companies cited President Trump’s posts praising those who violently stormed the U.S. Capitol on January 6, 2021 in support of their decisions.

Members of the Texas and Florida legislatures, together with their governors, were seemingly enraged that these sites would silence President Trump’s voice. In response, each state quickly passed a law aiming to limit the ability of social media sites to moderate content. Although they differ in their specifics, the Texas and Florida laws rest on the same theory: both seek to punish social media sites for removing conservative content that lawmakers argue the platforms disproportionately silence, regardless of whether the posted content violates the sites’ published standards.

Shortly after each law’s adoption, two tech advocacy groups, NetChoice and the Computer & Communications Industry Association, filed suits in federal district courts challenging the laws as violative of the First Amendment. Each case has so far moved through the federal courts on procedural grounds: the Eleventh Circuit upheld a lower court’s preliminary injunction prohibiting Florida from enforcing its statute until the case is decided on the merits, while the Fifth Circuit stayed a similar injunction against the Texas law. The trade groups then sought emergency relief from the Supreme Court of the United States, which voted 5-4 to reinstate the injunction. The Supreme Court’s order made clear that these cases are headed there on the merits.

Criminals Beware: The Internet Is Here!

Social media has become a mainstay in our culture. We use it to communicate and interact socially with our family and friends. Social media and the Internet let us share our whereabouts and latest experiences with just about everyone on the planet instantly, with the click of a button. Police departments understand this shift in culture and are using social media and advances in technology to their benefit. “Police are recognizing that a lot of present-day crimes are attached to social media. Even if the minuscule possibility existed that none of the persons involved were on social media, the crime would likely be discussed on social media by people who have become aware of it or the media organizations reporting it”.

Why Is Social Media the New Police Investigative Tool?

Why are police so successful at fighting crime with social media? Because many of us are addicted to social media; it is our new form of communication. That addiction has made it easier for police to catch criminals. Criminals tend to tell on themselves these days by simply not being able to stay off social media. We share confidential information with friends through social media under the false assumption that what we say can’t be traced back to us. We even think that because we set our pages to private, our information can’t be retrieved. That is far from the truth, as Bronx criminal Melvin Colon found out the hard way. Police suspected Colon of crimes but lacked probable cause for a search. “Their solution: finding an informant who was friends with him on Facebook. There they gathered the bulk of the evidence needed for an indictment. Colon’s lawyers sought to use the Fourth Amendment to suppress that evidence, but the judge ruled that Colon’s legitimate expectation of privacy ended when he disseminated posts to his friends. The court explained that Colon’s ‘friends’ were free to use the information however they wanted—including sharing it with the Government.” This illustrates that even information we think is private can still be accessed by police.

How Do Police Use Social Media as an Investigative Tool?

“Most commonly, an officer views publicly available posts by searching for an individual, group, hashtag, or another search vector. Depending on the platform and the search, it may yield all the content responsive to the query or only a portion. When seeking access to more than is publicly available, police may use an informant (such as a friend of the target) or create an undercover account by posing as a fellow activist or alluring stranger”. This allows officers to communicate directly with the target and see content posted by both the target and their contacts that might otherwise be inaccessible to the public. Police also use social media to catch criminals through sting operations. “A sting operation is designed to catch a person in the act of committing a crime. Stings usually include a law enforcement officer playing the part as accessory to a crime, perhaps as a drug dealer or a potential customer of prostitution. After the crime is committed, the suspect is quickly arrested”. Another way social media serves as an investigative tool is location tracking. “Location tracking links text, pictures and video to an exact geographical location and is a great tool for law enforcement to find suspects”. Thanks to location tagging, police can map hot spots of crime and even obtain instant photographic evidence of a crime. Finally, social media doubles as a public outreach tool: it helps police connect with the community, broadcast important announcements, and solicit tips on criminal investigations.
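Where that location data comes from is no mystery: many photos carry GPS coordinates in their EXIF metadata, which anyone with the file can read. As a rough illustration, here is a short Python sketch using the Pillow library; the file name is a hypothetical stand-in, and real investigative tools are of course far more elaborate.

```python
# Sketch: reading a photo's embedded GPS coordinates from EXIF metadata.
# Requires Pillow (pip install Pillow); the file name below is hypothetical.
from PIL import Image

GPS_IFD = 0x8825  # standard EXIF tag pointing at the GPS info block

def dms_to_degrees(dms, ref):
    """Convert EXIF (degrees, minutes, seconds) rationals to decimal degrees."""
    deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
    return -deg if ref in ("S", "W") else deg

def photo_coordinates(path):
    gps = Image.open(path).getexif().get_ifd(GPS_IFD)
    if not gps:
        return None  # photo carries no geotag
    # GPS tag ids: 1/2 = latitude ref/value, 3/4 = longitude ref/value
    lat = dms_to_degrees(gps[2], gps[1])
    lon = dms_to_degrees(gps[4], gps[3])
    return lat, lon

# Hypothetical usage: a geotagged photo resolves to decimal coordinates.
print(photo_coordinates("suspect_post.jpg"))  # e.g. (40.7128, -74.006)
```

Many platforms strip this metadata on upload, but original files obtained by warrant, subpoena, or informant often still carry it.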

What Does the Law Say About Police Using Social Media?

Few laws specifically constrain law enforcement’s ability to engage in social media monitoring. “In the absence of legislation, the strongest controls over this surveillance tactic are often police departments’ individual social media policies and platform restrictions, such as Facebook’s real name policy and Twitter’s prohibition against using its API for surveillance”. Many people try to invoke the Fourth Amendment as protection against police intrusion into their social media privacy. The Fourth Amendment guarantees the right of the people to be free from unreasonable searches and seizures. The inquiry is whether a person has a “reasonable expectation of privacy” and whether society recognizes that expectation as reasonable. Courts have held that individuals have no recognized expectation of privacy in data publicly shared online. Law enforcement can also seek account information directly from social media companies. Under the Stored Communications Act, law enforcement can serve a warrant or subpoena on a social media company to get access to information about a person’s social media profile. The Act also permits service providers to voluntarily share user data without any legal process if delays in providing the information may lead to death or serious injury. “Courts have upheld warrants looking for IP logs to establish a suspect’s location, for evidence of communications between suspects, and to establish a connection between co-conspirators”.

 

AI Avatars: Seeing is Believing

Have you ever heard of deepfakes? The term comes from “deep learning,” a family of machine-learning algorithms that learn patterns from data on their own. By applying deep learning, deepfake technology replaces faces in original images or videos with another person’s likeness.

What does deep learning have to do with switching faces?

Basically, deepfake models learn automatically from the data they collect, which means the more people experiment with deepfakes, the faster the models learn, and the more realistic the content becomes.

Deepfake enables anyone to create “fake” media.

How does Deepfake work?

First, an AI algorithm called an encoder collects endless face shots of two people. The encoder learns the similarities between the two faces and compresses each image down to a shared set of features. A second AI algorithm called a decoder then learns to reconstruct each face from those compressed features. The swap happens when one person’s compressed features are fed into the other person’s decoder.
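To make the encoder/decoder idea concrete, here is a minimal sketch in Python (PyTorch). Every detail here, from the 64x64 image size to the layer shapes and names, is an illustrative assumption rather than the code of any actual deepfake tool, and the long training loop that would make the swap convincing is omitted.

```python
# A minimal sketch of the shared-encoder / two-decoder idea described above.
# All shapes and names are illustrative assumptions, not a production tool.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face into a small feature vector."""
    def __init__(self, latent=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the feature vector; one decoder per person."""
    def __init__(self, latent=256):
        super().__init__()
        self.fc = nn.Linear(latent, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()                          # one shared encoder...
decoder_a, decoder_b = Decoder(), Decoder()  # ...but a separate decoder per face

# Training (not shown) teaches decoder_a to rebuild person A and decoder_b to
# rebuild person B from the shared features. The swap: encode a frame of A,
# then decode it with B's decoder.
frame_of_a = torch.rand(1, 3, 64, 64)     # stand-in for a real video frame
swapped = decoder_b(encoder(frame_of_a))  # B's face with A's pose/expression
print(swapped.shape)                      # torch.Size([1, 3, 64, 64])
```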

Another way deepfakes swap faces is with a GAN, or generative adversarial network. A GAN pits two AI algorithms against each other, unlike the first method, where the encoder and decoder work hand in hand.
The first algorithm, the generator, is given random noise and converts it into an image. This synthetic image is then mixed into a stream of real photos, such as celebrity portraits, and the combined stream is fed to the second algorithm, the discriminator, which must decide which images are real and which are fake. After repeating this process countless times, the generator and discriminator both improve. As a result, the generator ends up producing completely lifelike faces.
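Here is the same adversarial game in miniature, again in PyTorch and again purely illustrative: this toy generates 2-D points instead of faces, but the generator-versus-discriminator loop has the same shape as the face-synthesis GANs described above.

```python
# A toy GAN loop illustrating the generator-vs-discriminator game.
# Sizes, data, and training details are simplified assumptions.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))  # noise -> fake sample
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # sample -> real/fake score
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, 2) * 0.5 + 2.0  # stand-in for "real photos"
    noise = torch.randn(32, 16)
    fake = G(noise)

    # Discriminator turn: learn to score real samples high and fakes low.
    opt_d.zero_grad()
    d_loss = loss(D(real), torch.ones(32, 1)) + loss(D(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()

    # Generator turn: learn to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss(D(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()
# After many rounds both networks improve; this arms race is the core
# dynamic behind GAN-generated faces.
```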

For instance, artist Bill Posters used deepfake technology to create a fake video of Mark Zuckerberg in which he appears to say that Facebook’s mission is to manipulate its users.

Real enough?

How about this: imagine Paris Hilton’s famous quote, “If you don’t even know what to say, just be like, ‘That’s hot,’” coming out of the mouth of Vladimir Putin, President of Russia. Those who know neither figure might well believe that Putin moonlights as a Playboy editor-in-chief.

Yes, we can all laugh at these fake jokes. But when something becomes this popular, it comes with a price.

Originally, deepfake technology was popularized by an online user of the same name, for what that user described as entertainment.

Yes, that “entertainment” meant pornography.

The biggest problem with deepfakes is that it is hard to spot the difference and figure out which version is the original. The technology has become more than just superimposing one face onto another.

Researchers found that more than 95% of deepfake videos were pornographic, and 99% of those videos swapped in the faces of female celebrities. Experts explain that these fake videos amount to a weaponization of artificial intelligence against women, perpetuating a cycle of humiliation, harassment, and abuse.

How do you spot the difference?

As mentioned earlier, the algorithms are fast learners, so with every breath we take, deepfake media becomes more realistic. Luckily, research showed that deepfake faces do not blink normally, or do not blink at all. That sounds like one easy method to remember. Well, let’s not get ahead of ourselves just yet. When it comes to machine learning, nearly every problem gets corrected as soon as it is revealed; that is how the algorithms learn. So, unfortunately, the famous blink issue has already been solved.
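For the curious, the blink cue can be reduced to a simple measurement: the “eye aspect ratio” (EAR), computed from facial landmarks, drops sharply when an eye closes, so a long clip whose EAR never dips is suspicious. The sketch below assumes landmark coordinates are already available (in practice they come from a detector such as dlib or MediaPipe), and the sample trace is invented.

```python
# Sketch of the blink cue: the eye aspect ratio (EAR) drops when an eye
# closes, so a video with no EAR dips suggests a face that never blinks.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: six (x, y) landmarks around one eye, in the standard 68-point order."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_thresh=0.2):
    """Count dips below the threshold; a real face blinks roughly 15-20 times a minute."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < closed_thresh and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= closed_thresh:
            closed = False
    return blinks

# Invented trace: an open eye (~0.3) with two brief closures.
trace = [0.31, 0.30, 0.12, 0.29, 0.30, 0.10, 0.08, 0.28]
print(count_blinks(trace))  # 2 -- zero blinks over a long clip would be suspicious
```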

But not so fast. We humans may not learn as quickly as machines, but we can be attentive and creative, qualities that the tin cans cannot possess, at least for now.
It only takes some extra attention to detect a deepfake. Ask these questions to see through the magic:

Does the skin look airbrushed?
Does the voice synchronize with the mouth movements?
Is the lighting natural, or does it make sense to have that lighting on that person’s face?

For example, the background may be dark, but the person may be wearing a pair of shiny glasses reflecting the sun’s rays.

Oftentimes, deepfake content is labeled as deepfake because creators want to present themselves as artists and show off their work.
In 2018, a company named Deeptrace built software to detect deepfake content. A Deeptrace Labs report found that deepfake videos are proliferating online and that their rapid growth is “supported by the growing commodification of tools and services that lower the barrier for non-experts—from well-maintained open source libraries to cheap deepfakes-as-a-service websites.”

The pros and cons of deepfake

It may be self-explanatory to name the cons, but here are some of the risks deepfakes pose:

  • Destabilization: the misuse of deepfakes can destabilize politics and international relations by falsely implicating political figures in scandals.
  • Cybersecurity: the technology can also undermine cybersecurity, for example through fake political figures inciting aggression.
  • Fraud: audio deepfakes can clone voices to convince people that they are talking to someone they trust and induce them to give away private information.

Well then, are there any pros to deepfake technology other than its entertainment value? Surprisingly, a few:

  • Accessibility: deepfakes can create various vocal personas that turn text into speech, which can help people with speech impediments.
  • Education: deepfakes can deliver innovative lessons that are more engaging and interactive than traditional ones. For example, a deepfake can bring a famous historical figure back to life to explain what happened during their time. Used responsibly, deepfake technology can serve as a powerful learning tool.
  • Creativity: instead of hiring a professional narrator, creators can use audio deepfakes to tell a captivating story at a fraction of the cost.

If people use deepfake technology with high ethical and moral standards, it can create opportunities for everyone.

Case

In a recent custody dispute in the UK, the mother presented an audio file to prove that the father had no right to take their child. In the audio, the father was heard making a series of violent threats towards his wife.

The audio file was compelling evidence, and just when it seemed the mother would walk out with a smile on her face, the father’s attorney sensed something was not right. The attorney challenged the evidence, and forensic analysis revealed that the audio had been doctored using deepfake technology.

The lawsuit is still pending, but do you see the wider problem? We are living in an era where evidence tampering is available to anyone with an Internet connection. It will take far more scrutiny to figure out whether evidence has been altered.

Current legislation on deepfakes

The National Defense Authorization Act for Fiscal Year 2021 (“NDAA”), which became law when Congress voted to override former President Trump’s veto, requires the Department of Homeland Security (“DHS”) to issue an annual report for the next five years on manipulated media and deepfakes.

So far, only three states have taken action against deepfake technology.
On September 1, 2019, Texas became the first state to prohibit the creation and distribution of deepfake content intended to harm candidates for public office or influence elections.
Similarly, California bans the creation of “videos, images, or audio of politicians doctored to resemble real footage within 60 days of an election.”
Also in 2019, Virginia banned deepfake pornography.

What else does the law say?

Deepfakes are not illegal per se. But depending on the content, a deepfake can breach data protection law, infringe copyright, or constitute defamation. Additionally, sharing non-consensual intimate content, a revenge porn crime, is punishable under many states’ laws. For example, in New York City, the penalties for committing a revenge porn crime are up to one year in jail and a fine of up to $1,000 in criminal court.

Henry Ajder, head of threat intelligence at Deeptrace, raised another issue: “plausible deniability,” where deepfake can wrongfully provide an opportunity for anyone to dismiss actual events as fake or cover them up with fake events.

What about First Amendment rights?

The First Amendment of the U.S. Constitution states:
“Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.”

Injunctions against deepfakes are certain to face First Amendment challenges; the First Amendment will be the biggest hurdle to overcome. Even if a lawsuit survives, the lack of jurisdiction over extraterritorial publishers would inhibit an injunction’s effectiveness, and injunctions are granted only in particular circumstances, such as obscenity and copyright infringement.

How does defamation law apply to deepfake?

Will defamation law offer deepfake victims a remedy?

Defamation is a statement that injures a third party’s reputation. To prove defamation, a plaintiff must show all four elements:

1) a false statement purporting to be fact;

2) publication or communication of that statement to a third person;

3) fault amounting to at least negligence; and

4) damages, or some harm caused to the person or entity who is the subject of the statement.

As you can see, deepfake claims are not likely to succeed under defamation law because it is difficult to prove that the content was intended as a statement of fact. All a defendant may need to fend off a defamation claim is the word “fake” somewhere in the content; to make it less of a drag, they can simply say that they used deep“fake” to publish it.

Pursuing a defamation claim against nonconsensual deepfake pornography also poses a problem. The central harm is the lack of consent, and current defamation law does not turn on whether the victim consented to the publication.

To reflect the transformative impact of artificial intelligence, I would suggest new legislation to regulate AI-backed technology like deepfakes. Perhaps that could lower the hurdles plaintiffs currently face.
What are your suggestions regarding deepfakes? Share your thoughts!

Social Media: A Pedophile’s Digital Playground

A few years ago, as I got lost in the wormhole that is YouTube, I stumbled upon a family channel, “The ACE Family.” They had posted a video in which the mom pranks her boyfriend by dressing their infant daughter in a tiny crochet bikini. I didn’t think much of it at the time, as it seemed innocent and cute, but then I pondered: I stumbled on this video without any ill intent, but how easy would it be for someone to seek out content like this with a far more disgusting intent?

When you Google “social media child pornography,” you get many articles from 2019. That year, a YouTuber using the name “MattsWhatItIs” posted a video titled “YouTube is Facilitating the Sexual Exploitation of Children, and it’s Being Monetized (2019)”; the video has 4,305,097 views to date and has not been removed from the platform. Its author discusses a potential child pornography ring on YouTube that was being facilitated by a glitch in the recommendation algorithm. He demonstrates how, with a brand-new account on a VPN, it takes only two clicks to end up in the ring. The search starts with “bikini haul.” After two clicks in the recommended videos section, he lands on an innocent-looking homemade video. The video itself looks harmless, but the comments expose the dark side: multiple random accounts post timestamps linking to moments in the video where the children are in compromising, sexually suggestive positions. The most disturbing part is that once you enter the wormhole, the algorithm keeps you stuck on these “child pornography” videos. Following the vast attention this video received, YouTube built an algorithm that is supposed to catch this predatory behavior; at the time the video was posted, it didn’t seem to be doing much.

YouTube has since implemented a “Child Safety Policy,” which details the categories of content the platform aims to restrict and recommends steps for parents or agents posting content in which children are the focus. “To protect minors on YouTube, content that doesn’t violate our policies but features children may have some features disabled at both the channel and video level. These features may include:

  • Comments
  • Live chat
  • Live streaming
  • Video recommendations (how and when your video is recommended)
  • Community posts”

Today, when you look up news on this topic, you don’t find much. There are forums exposing the many methods these predators use to get around the algorithms platforms set up to detect their activity. Many predators leave links to child pornography in the comments sections of specific videos. Others use generic terms with the initials “C.P.,” a common abbreviation for “child pornography,” and codes like “caldo de pollo,” which means “chicken soup” in Spanish. Many dedicated and concerned parents have formed online communities that scour the Internet for this disgusting content and report it to the platforms. But why haven’t the platforms themselves created departments for this issue? Most technology companies use automated tools to detect images and videos that law enforcement has already categorized as child sexual exploitation material, yet they struggle to identify new, previously unknown material and rely heavily on user reports.

The Child Rescue Coalition has created the “Child Protection System” software. The tool has a growing database of more than a million hashed images and videos, which it uses to find computers that have downloaded them. The software can track I.P. addresses, which may be shared by people connected to the same Wi-Fi network, as well as individual devices. According to the Child Rescue Coalition, the system can follow devices even if the owners move or use virtual private networks (VPNs) to mask their I.P. addresses. Last year the organization expressed interest in partnering with social media platforms to combine resources to crack down on child pornography. Unfortunately, some are against this, as it would give social media companies access to an unregulated database of suspicious I.P. addresses. Thankfully, many law enforcement departments have partnered up and used the software, and as the president of the Child Rescue Coalition said: “Our system is not open-and-shut evidence of a case. It’s for probable cause.”
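The hash-matching idea at the core of such tools is simple enough to sketch. The toy below hashes files with SHA-256 and checks them against a database of known hashes; the database entry and directory path are hypothetical, and real systems also use perceptual hashing (e.g., PhotoDNA) to catch re-encoded copies, which plain cryptographic hashes miss.

```python
# A minimal sketch of hash matching: known illegal files are stored only as
# hashes, and candidate files are hashed and checked for membership.
# The database contents and paths below are hypothetical.
import hashlib
from pathlib import Path

# Hypothetical database of hex SHA-256 digests of known material.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def sha256_of_file(path: Path) -> str:
    """Stream the file in 1 MB chunks so large videos don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def scan_directory(root: Path) -> list[Path]:
    """Return files whose hashes match the known database."""
    return [p for p in root.rglob("*")
            if p.is_file() and sha256_of_file(p) in KNOWN_HASHES]

# Hypothetical usage: matches = scan_directory(Path("/evidence/seized_drive"))
# A hit is grounds for further investigation, not proof by itself, echoing
# the Child Rescue Coalition's point that "It's for probable cause."
```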

The United States Department of Justice publishes a “Citizen’s Guide to U.S. Federal Law on Child Pornography.” The first line on that page reads, “Images of child pornography are not protected under First Amendment rights, and are illegal contraband under federal law.” Commonly, federal jurisdiction applies if the child pornography offense occurred in interstate or foreign commerce. In today’s digital era, federal law almost always applies when the Internet is used to commit child pornography offenses. The United States has implemented multiple laws that define child pornography and what constitutes a crime related to it.

Whose job is it to protect children from these predators? Should social media have to regulate this? Should parents be held responsible for contributing to the distribution of these media?

 

“Unfortunately, we’ve also seen a historic rise in the distribution of child pornography, in the number of images being shared online, and in the level of violence associated with child exploitation and sexual abuse crimes. Tragically, the only place we’ve seen a decrease is in the age of victims.

 This is – quite simply – unacceptable.”

-Attorney General Eric Holder Jr., speaking at the National Strategy Conference on Combating Child Exploitation in San Jose, California, May 19, 2011.

Can Social Media Be Regulated?

In 1996, Congress passed what is known as Section 230 of the Communications Decency Act (CDA), which provides immunity to website publishers for third-party content posted on their websites. The CDA holds that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The Act was created in a different era, one that could hardly envision how fast the internet would grow in the coming years. In 1996, social media consisted of a little-known website called Bolt, and the idea of a global World Wide Web was still very much in its infancy. The internet was still largely based on dial-up technology, and the government was looking to expand its reach. This Act laid the foundation for the explosion of social media, e-commerce, and a society that has grown tethered to the internet.

The advent of smartphones in the late 2000s, coupled with the CDA, set the stage for a society that is constantly tethered to the internet and has allowed companies like Facebook, Twitter, YouTube, and Amazon to carve out niches within our now globally integrated society. Facebook alone averaged over 1.9 billion daily users in the second quarter of 2021.

Recent studies conducted by the Pew Research Center show that “[m]ore than eight in ten Americans get news from digital services”.

(Pew Research Center chart: “Large majority of Americans get news on digital devices”)

While older members of society still rely on traditional news outlets online, the younger generation, namely those 18-29 years of age, receives its news via social media.

(Pew Research Center chart: “Online, most turn to news websites except for the youngest, who are more likely to use social media”)

The role social media plays in the lives of the younger generation needs to be recognized. Social media has grown at a far greater rate than anyone could have imagined. Currently, social media operates under its own modus operandi, largely free of government interference, owing to its classification as a private entity and its protection under Section 230.

Throughout the 20th century, when television news media dominated the scene, laws were put into effect to ensure that television and radio broadcasters would be monitored by both the courts and government regulatory commissions. For example, “[t]o maintain a license, stations are required to meet a number of criteria. The equal-time rule, for instance, states that registered candidates running for office must be given equal opportunities for airtime and advertisements at non-cable television and radio stations beginning forty-five days before a primary election and sixty days before a general election.”

These laws and regulations were put in place to ensure that the public interest in broadcasting was protected. To give substance to the public interest standard, Congress has from time to time enacted requirements for what constitutes the public interest in broadcasting. But Congress also gave the FCC broad discretion to formulate and revise the meaning of broadcasters’ public interest obligations as circumstances changed.

The Federal Communications Commission’s (FCC) authority is constrained by the First Amendment; it acts as an intermediary that can intervene to correct perceived inadequacies in overall industry performance, but it cannot trample on the broad editorial discretion of licensees. The Supreme Court has continuously upheld the public trustee model of broadcast regulation as constitutional. Criticisms of regulating social media center on the notion that platforms are purely private entities that do not fall under the purview of the government, and yet these same issues presented themselves in the precedent-setting case of Red Lion Broadcasting Co. v. Federal Communications Commission (1969). In that case, the Court held that the “rights of the listeners to information should prevail over those of the broadcasters.” The Court’s holding centered on the public’s right to information over the right of a broadcast company to choose what it will share. This is exactly what is at issue today when companies such as Facebook, Twitter, and Snapchat censor political figures who post views that the platforms feel may incite anger or violence.

In essence, what these organizations are doing is keeping information and views from the attention of the present-day viewer. The vessel for the information has changed; it is no longer television or radio but primarily social media. Currently, television and broadcast media are restricted by Section 315(a) of the Communications Act and Section 73.1941 of the Commission’s rules, which “require that if a station allows a legally qualified candidate for any public office to use its facilities (i.e., make a positive identifiable appearance on the air for at least four seconds), it must give equal opportunities to all other candidates for that office to also use the station.” No such restriction exists for social media organizations.

This is not meant to argue for one side or the other but merely to point out that political discourse is being stifled by these social media entities, which have shrouded themselves in the veil of a private entity. What these companies fail to mention is just how political they truly are. For instance, Facebook proclaims itself an unbiased platform for all parties, yet it employs one of the largest lobbying operations in Washington, D.C. Four Facebook lobbyists have worked directly in the office of House Speaker Pelosi, and Pelosi herself has a very direct connection to Facebook: she and her husband own between $550,000 and over $1,000,000 in Facebook stock. None of this is illegal; however, it raises the question of just how unbiased Facebook is.

If the largest source of news for the coming generation is not television, radio, or news publications, but rather social media platforms such as Facebook, then how much power should they be allowed to wield without some form of regulation? The question being presented here is not a new one, but rather the same question asked in 1969, simply phrased differently: how much information is a citizen entitled to, and at what point does access to that information outweigh the right of an organization to exercise its editorial discretion? I believe the answer is the same now as it was in 1969, and that the government ought to take steps similar to those taken with radio and television. That means ensuring that, through social media, the public has access to a significant amount of information on public issues so that its members can make rational political decisions. At the end of the day, that is what is at stake: the public’s ability to make rational political decisions.

Large social media conglomerates such as Facebook and Twitter have long outgrown their place as private entities; they have become a public medium tethered to the realities of billions of people. Certain aspects of their operations, mainly those that interfere with the public interest, need to be regulated, and there are ways to do so without infringing the First Amendment right of free speech for all Americans. Where social media blends being a private forum for all people to express their ideas under firmly stated “terms and conditions” with being an entity that strays into the political field, whether by censoring heads of state or by hiring over $50,000,000 worth of lobbyists in Washington, D.C., regulations need to draw a line that ensures the public retains the ability to make rational political decisions, decisions that are not unduly influenced by any one organization. The time to address this issue is now, while there is still a middle ground in how people receive their news and formulate opinions.

What Evidence is Real in a World of Digitally Altered Material?

Imagine you are prosecuting a child pornography case and have incriminating chats made through Facebook showing the Defendant coercing and soliciting sexually explicit material from minors.  Knowing that you will submit these chats as evidence in trial, you acquire a certificate from Facebook’s records custodian authenticating the documents.  The custodian provides information that confirms the times, accounts and users.  That should be enough, right?

Wrong. Your strategy relies on the legal theory that chats made through a third-party provider fall into a hearsay exception known as the “business records exception.” Under Federal Rule of Evidence 902(11), “self-authenticating” business records, that is, “records of a regularly conducted activity” that fall into the hearsay exception under Rule 803(6) (more commonly known as the “business records exception”), may be authenticated by way of a certificate from the records custodian. (Fed. R. Evid. 902(11)); (United States v. Browne, 834 F.3d 403 (3d Cir. 2016)).

Why does this certification fail to actually show authenticity? The Third Circuit answers: there must be additional, outside (extrinsic) evidence establishing the relevance of the evidence. (United States v. Browne, 834 F.3d 403 (3d Cir. 2016)).

Relevance is another legal concept where “its existence simply has some ‘tendency to make the existence of any fact that is of consequence to the determination of the action more probable or less probable than it would be without the evidence.’”  (United States v. Jones, 566 F.3d 353, 364 (3d Cir. 2009) (quoting Fed. R. Evid. 401)).  Put simply, the existence of this evidence has a material effect on the evaluation of an action.

In Browne, the Third Circuit says the “business records exception” is not enough because Facebook chats are fundamentally different from business records. Business records are “supplied by systematic checking, by regularity and continuity which produce habits of precision, by actual experience of business in relying upon them, or by a duty to make an accurate record as part of a continuing job or occupation,” which results in records that can be relied upon as legitimate.

The issue here deals with authenticating the entirety of the chat – not just the timestamps or cached information.  The court delineates this distinction, saying “If the Government here had sought to authenticate only the timestamps on the Facebook chats, the fact that the chats took place between particular Facebook accounts, and similarly technical information verified by Facebook ‘in the course of a regularly conducted activity,’ the records might be more readily analogized to bank records or phone records conventionally authenticated and admitted under Rules 902(11) and 803(6).”

In contrast, Facebook chats are not authenticated based on confirmation of their substance, but instead on the user linked to that account.  Moreover, in this case, the Facebook records certification showed “alleged” activity between user accounts but not the actual identification of the person communicating, which the court found is not conclusive in determining authorship.

The policy concern is that this information is easily falsified: accounts may be created with a fake name and email address, or a person’s account may be hacked and operated by another. As a result of the ruling in Browne, submitting chat logs made through a third party such as Facebook into evidence requires more than verification of technical data. The Browne court describes the second step for evidence to be successfully admitted: there must be extrinsic (additional outside) evidence presented to show that the chat logs really occurred between certain people and that the content is consistent with the allegations. (United States v. Browne, 834 F.3d 403 (3d Cir. 2016))

When there is enough extrinsic evidence, the “authentication challenge collapses under the veritable mountain of evidence linking [Defendant] and the incriminating chats.”  In the Browne case, there was enough of this outside evidence that the court found there was “abundant evidence linking [Defendant] and the testifying victims to the chats conducted… [and the] Facebook records were thus duly authenticated” under Federal Rule of Evidence 901(b)(1) in a traditional analysis.

The idea that extrinsic evidence must support authentication of evidence collected from third-party platforms is echoed in the Seventh Circuit decision United States v. Barber, 937 F.3d 965 (7th Cir. 2019).  Here, “this court has relied on evidence such as the presence of a nickname, date of birth, address, email address, and photos on someone’s Facebook page as circumstantial evidence that a page might belong to that person.”

The requirement for extrinsic evidence represents a shift in thinking from the original requirement that the government carry the burden only of “‘produc[ing] evidence sufficient to support a finding’ that the account belonged to [Defendant] and the linked messages were actually sent and received by him.” United States v. Barber, 937 F.3d 965 (7th Cir. 2019), citing Fed. R. Evid. 901(a); United States v. Lewisbey, 843 F.3d 653, 658 (7th Cir. 2016). Now, “Facebook records must be authenticated through the ‘traditional standard’ of Rule 901.” United States v. Frazier, 443 F. Supp. 3d 885 (M.D. Tenn. 2020).

The bottom line is that Facebook cannot attest to the accuracy of the content of its chats and can only provide specific technical data.  This difference is further supported by a District Court ruling mandating traditional analysis under Rule 901 and not allowing a business hearsay exception, saying “Rule 803(6) is designed to capture records that are likely accurate and reliable in content, as demonstrated by the trustworthiness of the underlying sources of information and the process by which and purposes for which that information is recorded… This is no more sufficient to confirm the accuracy or reliability of the contents of the Facebook chats than a postal receipt would be to attest to the accuracy or reliability of the contents of the enclosed mailed letter.”  (United States v. Browne, 834 F.3d 403, 410 (3rd Cir. 2016), United States v. Frazier, 443 F. Supp. 3d 885 (M.D. Tenn. 2020)).

Evidence from social media is allowed under the business records exception in a select few circumstances. For example, United States v. El Gammal, 831 F. App’x 539 (2d Cir. 2020) presents a case that does find authentication of Facebook’s message logs based on testimony from a records custodian. However, there is an important distinction here: the logs admitted were drawn directly from a “deleted” output, where Facebook itself created the record, rather than a person. Similarly, the Tenth Circuit agreed that “spreadsheets fell under the business records exception and, alternatively, appeared to be machine-generated non-hearsay.” United States v. Channon, 881 F.3d 806 (10th Cir. 2018).

What about photographs – are pictures taken from social media dealt with in the same way as chats when it comes to authentication?  Reviewing a lower court decision, the Sixth Circuit in United States v. Farrad, 895 F.3d 859 (6th Cir. 2018) found that “it was an error for the district court to deem the photographs self-authenticating business records.”  Here, there is a bar on using the business exception that is similar to that found in the authentication of chats, where photographs must also be supported by extrinsic evidence.

While not using the business exception to do so, the court in Farrad nevertheless found that social media photographs were admissible because it would be logically inconsistent to allow “physical photos that police stumble across lying on a sidewalk” while barring “electronic photos that police stumble across on Facebook.”  It is notable that the court does not address the ease with which photographs may be altered digitally, given that was a major concern voiced by the Browne court regarding alteration of digital text.

United States v. Vazquez-Soto, 939 F.3d 365 (1st Cir. 2019) further supports the idea that photographs found on social media must be authenticated traditionally. Here, the court explains the authentication process: “The standard [the court] must apply in evaluating a[n] [item]’s authenticity is whether there is enough support in the record to warrant a reasonable person in determining that the evidence is what it purports to be.” United States v. Vazquez-Soto, 939 F.3d 365 (1st Cir. 2019), quoting United States v. Blanchard, 867 F.3d 1, 6 (1st Cir. 2017) (internal quotation marks omitted); Fed. R. Evid. 901(a). In other words, based on the totality of the evidence, including extrinsic evidence, do you believe the photograph is real? Here, “what is at issue is only the authenticity of the photographs, not the Facebook page”; it does not necessarily matter who posted the photo, only what is depicted.

Against the backdrop of an alterable digital world, courts seek to emplace guards against falsified information.  The cases here represent the beginning of a foray into what measures can be realistically taken to protect ourselves from digital fabrications.

 

https://www.rulesofevidence.org/article-ix/rule-902/

https://www.rulesofevidence.org/article-viii/rule-803/

https://casetext.com/case/united-states-v-browne-12

https://www.courtlistener.com/opinion/1469601/united-states-v-jones/?order_by=dateFiled+desc&page=4

https://www.rulesofevidence.org/article-iv/rule-401/

https://www.rulesofevidence.org/article-ix/rule-901/

https://casetext.com/case/united-states-v-barber-103

https://casetext.com/case/united-states-v-lewisbey-4

https://casetext.com/case/united-states-v-frazier-175

https://casetext.com/case/united-states-v-el-gammal

https://casetext.com/case/united-states-v-channon-8

https://casetext.com/case/united-states-v-farrad

https://casetext.com/case/united-states-v-vazquez-soto-1?q=United%20States%20v.%20Vazquez-Soto,%20939%20F.3d%20365%20(1st%20Cir.%202019)&PHONE_NUMBER_GROUP=P&sort=relevance&p=1&type=case&tab=keyword&jxs=

Say Bye to Health Misinformation on Social Media?

A study from the Center for Countering Digital Hate found that social media platforms failed to act on 95% of coronavirus-related disinformation reported to them.

Over the past few weeks, social media companies have been in the hot seat regarding their lack of action against fake news and misinformation on their platforms, especially information regarding COVID-19 and the vaccine. Even President Biden weighed in, stating that Facebook and other companies were “killing people” by serving as platforms for misinformation about the COVID-19 vaccine. Biden later clarified that he wasn’t accusing Facebook of killing people, but rather that he wanted the companies to do something about the misinformation, the outrageous information about the vaccine.

A few weeks later, Senator Amy Klobuchar introduced the Health Misinformation Act, which would create an exemption to Section 230 of the Communications Decency Act. Section 230 has always shielded social media companies from liability for almost any content posted on their platforms. Under the Health Misinformation Act, however, social media companies would be liable for the spread of health-related misinformation. The bill would apply only to platforms that use algorithms to promote health misinformation (which most platforms do), and only to misinformation spread during a national public health crisis, such as COVID-19; the exemption would not apply during “normal” times, when there is no public health crisis. If the bill were to pass, the Department of Health and Human Services would be authorized to define “health misinformation.”

Senator Klobuchar and some of her peers believe the time has come to create an exemption to Section 230 because “for far too long, online platforms have not done enough to protect the health of Americans.” Klobuchar argues that the misinformation spread about COVID-19 and the vaccine shows that social media companies have no desire to address the problem: the misinformation drives activity on their platforms, and Section 230 means they cannot be held liable for it.
Instead, these companies use misinformation to their advantage, building features that incentivize users to share posts and chase likes, comments, and other engagement, a design that rewards engagement rather than accuracy. A study conducted by MIT found that false news stories are 70% more likely to be retweeted than true stories. Social media platforms therefore have little reason to limit this information, especially when the misinformation benefits them.
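To see the “engagement over accuracy” critique in miniature, consider this toy ranking sketch; the posts, scores, and weights are invented for illustration. A feed sorted purely by predicted engagement surfaces the false-but-viral post first, while blending in an accuracy signal flips the ordering.

```python
# Toy illustration: ranking a feed by engagement alone vs. blending in accuracy.
# All posts, scores, and weights are invented for the example.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # likes/shares a model expects (0 to 1)
    accuracy_score: float        # 0 = known false, 1 = verified accurate

posts = [
    Post("Vaccine contains microchips!", predicted_engagement=0.9, accuracy_score=0.0),
    Post("CDC posts updated vaccine guidance", predicted_engagement=0.4, accuracy_score=1.0),
]

# Engagement-only ranking: the false-but-viral post wins.
engagement_feed = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

# Blending accuracy into the score (one possible remedy) changes the ordering.
blended_feed = sorted(
    posts,
    key=lambda p: 0.5 * p.predicted_engagement + 0.5 * p.accuracy_score,
    reverse=True,
)

print(engagement_feed[0].text)  # "Vaccine contains microchips!"
print(blended_feed[0].text)     # "CDC posts updated vaccine guidance"
```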

What are the concerns with the Health Misinformation Act?

How will the Department of Health and Human Services define “health misinformation”? It seems very difficult to craft a definition that a majority will agree upon. I also expect heavy criticism from the social media companies. For instance, I can imagine them asking how they are supposed to implement the definition of “health misinformation” in their algorithms. What if the official guidance on a health crisis changes? Will a company have to constantly retune its algorithms as the health information evolves? At the beginning of the pandemic, for example, the guidance on masks changed from masks not being necessary to masks being crucial to ensure the health and safety of yourself and others.

Will the Bill Pass?

With that being said, I do like the concept of the Health Misinformation Act, because it seeks to hold social media companies accountable for their inaction while protecting the public’s access to accurate health-related information. However, I do not believe this bill will pass, for a few reasons. First, it may violate people’s First Amendment right to freedom of speech: while it isn’t right, it is not illegal for individuals to post their opinions, or even misinformation, on social media. Second, as noted earlier, how would social media companies implement these new policies and track a changing definition of “health misinformation,” and how would federal agencies regulate the companies?

What should be done?

“These are some of the biggest, richest companies in the world and they must do more to prevent the spread of deadly vaccine misinformation.”

I believe we need to create more regulations and more exemptions to Section 230, especially because Section 230 was written in 1996 and our world looks and operates very differently today. Social media is now an essential part of our business and cultural worlds.
Overall, I believe more regulations need to be put in place to oversee social media companies. We need transparency from these companies so the world can understand what is going on behind their closed doors. Transparency would allow agencies to fully understand the algorithms and craft proper regulations.

To conclude, social media companies function like a monopoly: even though there are many of them, only a handful hold most of the popularity and power. Other major businesses and monopolies must follow strict government regulations, yet social media companies seem exempt from regulation of that kind.

While there has been a push over the past few years to repeal or make changes to Section 230, do you think this bill can pass? If not, what can be done to create more regulations?

Getting Away with Murder

It’s probably not best to “joke” around with someone seeking legal advice about how to get away with murder. Even less so on social media, where tone, infamously, is not always easily construed. Alas, that is what happened in January 2021 in the case In re Sitton out of Tennessee.

Let’s lay out the facts of the case first. Mr. Sitton is an attorney who has been practicing for almost 25 years. He has a Facebook page on which he identifies himself as an attorney. A Facebook “friend” of his named Lauren Houston had posted a publicly viewable question asking about the legality of carrying a gun in her car in the state of Tennessee. The reason for the inquiry was that she had been involved in a toxic relationship with her ex-boyfriend, the father of her child. Aware of her allegations of abuse, harassment, violations of the child custody arrangement, and requests for orders of protection against the ex, Mr. Sitton decided to comment on the post and offer Ms. Houston some advice. The following was his response to her question:

“I have a carry permit Lauren. The problem is that if you pull your gun, you must use it. I am afraid that, with your volatile relationship with your baby’s daddy, you will kill your ex, your son’s father. Better to get a taser or a canister of tear gas. Effective but not deadly. If you get a shot gun, fill the first couple rounds with rock salt, the second couple with bird shot, then load for bear.

If you want to kill him, then lure him into your house and claim he broke in with intent to do you bodily harm and that you feared for your life. Even with the new stand your ground law, the castle doctrine is a far safer basis for use of deadly force.”

 

Ms. Houston then replied to Mr. Sitton, “I wish he would try.” Mr. Sitton replied again, “As a lawyer, I advise you to keep mum about this if you are remotely serious. Delete this thread and keep quiet. Your defense is that you are afraid for your life; revenge or premeditation of any sort will be used against you at trial.” Ms. Houston subsequently deleted the post, following Mr. Sitton’s advice.

Ms. Houston’s ex-boyfriend eventually found out about the post, including Mr. Sitton’s comments, and passed screenshots to the Attorney General of Shelby County, who sent them to Tennessee’s Board of Professional Responsibility (“Board”). In August 2018, the Board filed a petition for discipline against Mr. Sitton. The petition alleged he violated the Rules of Professional Conduct by “counseling Ms. Houston about how to engage in criminal conduct in a manner that would minimize the likelihood of arrest or conviction.”

Mr. Sitton admitted most of the basic facts but claimed his comments were taken out of context. Among the things he admitted during the Board’s hearing were that he identified himself as a lawyer in his Facebook posts and that he intended to give Ms. Houston legal advice and information. He noted that Ms. Houston engaged with him on Facebook about his legal advice, and he felt she “appreciated that he was helping her understand the laws of the State of Tennessee.” Mr. Sitton went on to claim that his only intent in posting the comments was to convince Ms. Houston not to carry a gun in her car. He maintained that his posts about using the protection of the “castle doctrine” to lure Mr. Henderson, the ex-boyfriend, into Ms. Houston’s home to kill him were “sarcasm” or “dark humor.”

The hearing panel found Mr. Sitton’s claim that his “castle doctrine” comments were “sarcasm” or “dark humor” unpersuasive, noting that this depiction was contradicted by his own testimony and by Ms. Houston’s posts. The panel instead determined that Mr. Sitton intended to give Ms. Houston legal advice about a legally “safer basis for use of deadly force.” Pointing out that the comments were made in a “publicly posted conversation,” the panel found that “a reasonable person reading these comments certainly would not and could not perceive them to be ‘sarcasm’ or ‘dark humor.’” The panel also noted that Mr. Sitton lacked any remorse for his actions: it acknowledged that he conceded his Facebook posts were “intemperate” and “foolish,” but pointed out that he maintained, “I don’t think what I told her was wrong.”

The Board decided to suspend Mr. Sitton for only 60 days. However, the Supreme Court of Tennessee reviews every proposed order of discipline the Board submits against an attorney to ensure the punishment is fair and uniform with similar cases throughout the state. The court found the 60-day suspension insufficient and increased Mr. Sitton’s punishment to a one-year active suspension followed by three years on probation.

Really? While I’m certainly glad the Tennessee Supreme Court increased his suspension, I still think one year is dramatically too short. How do you allow an attorney who has been practicing for nearly 25 years to serve only a one-year suspension for instructing someone on how to get away with murder? Especially when both the court and the hearing panel found no mitigating factors, and found that a reasonable person would not interpret his comments as dark humor but as real legal advice? What’s even more mind-boggling is that the court found Mr. Sitton violated ABA Standards 5.1 (Failure to Maintain Personal Integrity) and 6.1 (False Statements, Fraud, and Misrepresentation), but then essentially said there was no category within those two standards into which his actions neatly fall, and used that as the reason for imposing only a one-year suspension. The thing is, that reading is simply inaccurate: under the sanction guidelines for violations of 5.1 and 6.1 (which the court included in its opinion), it is abundantly obvious that Mr. Sitton’s actions do fall within them, so it is a mystery how the court found otherwise.

 

If you were the judge ruling on this disciplinary case, what sanction would you have handed down?

If I were to sue “Gossip Girl.”

If you grew up in New York and were a teenager in the early 2000s, you probably know the top-rated show “Gossip Girl.” “Gossip Girl” is the alias of an anonymous blogger who creates chaos by making public the very intimate and personal lives of upper-class high school students. The show is scandalous because of the nature of these teenagers’ activities, but what stands out is the influence Gossip Girl had on these young teenagers. And it makes one think: what could I do if Gossip Girl came after me?

 

Anonymity

When bringing a claim for internet defamation against an anonymous blogger, the trickiest part is getting past the anonymity. In Cohen v. Google, Inc., 887 N.Y.S.2d 424 (N.Y. Sup. Ct. 2009), a New York state trial court granted plaintiff, model Liskula Cohen, pre-suit discovery from Google to reveal the identity of the anonymous publisher of the “Skanks in NYC” blog. Cohen alleged that the blog’s author defamed her by calling her a “skank” and a “ho” and posting photographs of her in provocative positions with sexually suggestive captions, all creating the false impression that she is sexually promiscuous. The court analyzed the discovery request under New York CPLR § 3102(c), which allows for discovery “to aid in bringing an action.” The court ruled that, under CPLR § 3102(c), a party seeking pre-action discovery must make a prima facie showing of a meritorious cause of action before obtaining the identity of an anonymous defendant. The court acknowledged the First Amendment issues at stake and, citing Dendrite, opined that New York law’s requirement of a prima facie showing appears to address the constitutional concerns raised in the context of this case. The court held that Cohen adequately made this prima facie showing of defamation, finding that the “skank” and “ho” statements, along with the sexually suggestive photographs and captions, conveyed a factual assertion that Cohen was sexually promiscuous, rather than an expression of protected opinion.

In Cohen, the court decided that Liskula Cohen was entitled to pre-suit discovery under CPLR § 3102(c). To legally obtain “Gossip Girl’s” true identity under this statute, we would have to show that the statement posted on her blog against us is on its face defamatory and not simply an expression of protected opinion.

 

Defamation

Now that we may have unmasked our anonymous blogger, “Gossip Girl,” aka Dan Humphrey, we can dive into the defamation issue. There are two types of defamation: 1) libel, the written form, and 2) slander, the oral form. Because Gossip Girl’s medium of choice is a written blog, our case would sound in libel. But does our claim meet the legal elements of defamation?

In New York, there are four elements that the alleged defamation must meet:

  1. A false statement;
  2. Published to a third party without privilege or authorization;
  3. With fault amounting to at least negligence;
  4. That caused special harm or ‘defamation per se.’

Dillon v. City of New York, 261 A.D.2d 34, 38, 704 N.Y.S.2d 1 (1st Dept. 1999)

Furthermore, our defamation claim for the plaintiff must “set forth the particular words allegedly constituting defamation and it must also allege the time when, place where, and manner in which the false statement was made, and specify to whom it was made.” Epifani v. Johnson, 65 A.D.3d 224, 233, 882 N.Y.S.2d 234 (2d Dept. 2009). In plain terms, the court means we must provide details such as: What specific words were used? Was the plaintiff labeled a “ho” or a “skank,” as in Cohen, or simply called “ugly”? When were the words spoken, written, or published? Where, that is, on what platform, were they made? How were they made? And lastly, to whom was the statement made?

The plaintiff’s status determines the burden of proof in New York defamation lawsuits: is the plaintiff considered a “public” figure or a “private” citizen? To determine this status, New York courts use the “vortex notion.” This term simply means that a person who would generally qualify as a “private” citizen is treated as a “public” figure if they draw public attention to themselves, as if jumping right into a tornado’s vortex. A “public” figure bears a higher burden of proof: the plaintiff must prove that the defendant acted with actual malice, that is, with knowledge of the statement’s falsity or reckless disregard for its truth or falsity. For defamation of a “private” citizen, New York courts apply a negligence standard of fault unless the statements relate to a matter of legitimate public concern.

When the plaintiff is a private figure and the allegedly defamatory statements relate to a matter of legitimate public concern, the plaintiff must prove that the defendant acted “in a grossly irresponsible manner without due consideration for the standards of information gathering and dissemination ordinarily followed by responsible parties.” Chapadeau v. Utica Observer-Dispatch, 38 N.Y.2d 196, 199 (1975). This standard focuses on an objective evaluation of the defendant’s actions rather than on the defendant’s state of mind at the time of publication.

If the defamatory nature of the statements Gossip Girl published is apparent on their face, we may explore defamation per se. In New York, four categories of statements constitute defamation per se:

  1. Statements charging the plaintiff with a serious crime;
  2. Statements that tend to injure another in his or her trade, business, or profession;
  3. Statements imputing a loathsome disease to the plaintiff; and
  4. Statements imputing unchastity to a woman.

Liberman v. Gelstein, 80 N.Y.2d 429, 435, 605 N.E.2d 344, 590 N.Y.S.2d 857 (1992). If the statements fall into one of these categories, the court may find them so inherently injurious that damage to the plaintiff is presumed. Another option to consider is defamation per quod, which requires the plaintiff to provide extrinsic, supporting evidence to prove the defamatory nature of statements whose injurious character is not apparent on their face.

 

Privileges and Defenses

After concluding that Gossip Girl defamed the plaintiff, we must ensure that the defamatory statement is not protected by any privilege. New York courts recognize several privileges and defenses in the context of defamation actions, including the fair report privilege (a defamation lawsuit cannot be sustained against any person making a “fair and true report of any judicial proceeding, legislative proceeding or other official proceeding,” N.Y. Civ. Rights Law § 74), the opinion and fair comment privileges, substantial truth (the maker cannot be held liable for saying things that are actually true), and the wire service defense. There is also Section 230 of the Communications Decency Act, which may protect media platforms or publishers when a third party, not acting under their direction, posts something defamatory on their blog or website. If a statement is privileged or a defense applies, the maker of that statement may be immune from any lawsuit arising from it.

 

Statute of Limitations

A New York plaintiff must commence an action within one (1) year of the date the defamatory material was published or communicated to a third party. CPLR § 215(3). New York has also adopted a rule directed explicitly at internet posts. Under the “single publication” rule, a party that causes the mass publication of defamatory content may be sued only once for its initial publication of that content. For example, if a blog publishes a defamatory article that circulates to thousands of readers, the blog may be sued only once, and the statute of limitations begins to run at the time of first publication. “Republication” of the allegedly defamatory content, however, restarts the statute of limitations. A republication occurs upon “a separate aggregate publication from the original, on a different occasion, which is not merely a ‘delayed circulation of the original edition.’” Firth v. State, 775 N.E.2d 463, 466 (N.Y. 2002). Courts examine whether the republication was intended to reach, and actually reached, a new audience. Altering the allegedly defamatory content or moving web content to a different web address may trigger republication.

 

Damages

Damages for defamation claims are proportionate to the harm the plaintiff suffered. If a plaintiff is awarded damages, they may take the form of compensatory, nominal, or punitive damages. There are two types of compensatory damages: 1) special damages and 2) general damages. Special damages compensate economic harm and must be pleaded in a specific amount. General damages are more challenging to assess; the jury has the discretion to determine the award after weighing all the facts. Nominal damages are small monetary sums awarded to vindicate the plaintiff’s name. Punitive damages are intended to punish the defendant and to deter the defendant from repeating the defamatory conduct.

 

When “Gossip Girl” first aired, the idea of a blog holding cultural relevance was not yet mainstream. Gossip Girl’s unchecked power kept many characters from living their lives freely and without scrutiny. After the show aired, an anonymous blog, “Socialite Rank,” emerged and damaged the reputation of its targeted victim, Olivia Palermo, who eventually dropped the suit she had started against the blog. The blog “Skanks in NYC” painted a false image of who Liskula Cohen was and cost her potential jobs. In the series finale, after Gossip Girl’s identity is revealed, the characters laugh. Still, one of them exclaims, “Why do you all think that this is funny? Gossip Girl ruined our lives!” Defamation can ruin lives, and as technology advances, the law should advance as well. New York has adapted its defamation laws to ensure that a person cannot hide behind anonymity to ruin another person’s life.

 

Do you feel protected against online defamation?

XOXO
