How Defamation and Minor Protection Laws Ultimately Shaped the Internet

Kyiv, Ukraine – September 5, 2019: A collection of paper cubes printed with the logos of world-famous social networks and online messengers, such as Facebook, Instagram, YouTube, and Telegram.

The Communications Decency Act (CDA) was originally enacted to shield minors from indecent and obscene online material. Despite those origins, Section 230 of the CDA is now commonly used as a broad legal safeguard that shields social media platforms from liability for content posted on their sites by third parties. Interestingly, the reasoning behind this safeguard arises both from defamation common law and from constitutional free speech principles. As the internet has grown, however, the safeguard has drawn increasing criticism. But is this legislation actually undesirable? Many would say no, as Section 230 contains “the 26 words that created the internet.”

 

Origin of the Communications Decency Act

The CDA was introduced and enacted as an attempt to shield minors from obscene or indecent content online. Although parts of the Act were later struck down as First Amendment free speech violations, the Court left Section 230 intact. The creation of Section 230 was influenced by two landmark defamation decisions involving online services.

The first case, decided in 1991 (Cubby, Inc. v. CompuServe Inc.), involved an online service that hosted around 150 forums. A claim was brought against the internet provider when a columnist in one of the forums posted a defamatory comment about his competitor. The competitor sued the online distributor for the published defamation. The court categorized the internet service provider as a distributor because it did not review the content of the forums before it was posted to the site. As a distributor with no knowledge of the statements, the provider faced no liability, and the case was dismissed.

 

Distributor Liability

Distributor liability refers to the limited legal exposure a distributor faces for defamation. Common examples of distributors are bookstores and libraries. The theory behind distributor liability is that it would be impossible for distributors to moderate and censor every piece of content they disperse, given the sheer volume and the impossibility of knowing whether any given statement is false.

The second case that influenced the creation of Section 230 was Stratton Oakmont, Inc. v. Prodigy Servs. Co., in which the court used publisher liability theory to find the internet provider liable for third-party defamatory postings published on its site. The court deemed the website a publisher because it moderated and deleted certain posts, even though there were far too many postings each day to review every one.

 

Publisher Liability

Under common law principles, a person who publishes a third party’s defamatory statement bears the same legal responsibility as the creator of that statement. This liability is often referred to as “publisher liability,” and it is based on the theory that a publisher has the knowledge, opportunity, and ability to exercise control over the publication. For example, a newspaper publisher can face legal consequences for the content it prints. The Stratton Oakmont decision was significant because it meant that if a website attempted to moderate some posts, it could be held liable for all of them.

 

Section 230’s Creation

In response to the Stratton Oakmont case and the inconsistent court decisions regarding internet service providers’ liability, members of Congress introduced an amendment to the CDA that later became Section 230. The amendment was specifically introduced and passed with the goal of encouraging the development of unregulated free speech online by relieving internet providers of liability for content posted by third parties.

 

Text of the Act, Subsection (c)(1)

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

 Section 230 further provides that…

“No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.”

The language above removes legal consequences for providers arising from content that third parties post on their forums. Courts have interpreted this subsection as providing broad immunity to online platforms from suits over third-party content. Because of this, Section 230 has become the principal legal safeguard against lawsuits over the content on a site.

 

The Good

  • Section 230 can be viewed as one of the most important pieces of legislation protecting free speech online. One of its unique aspects is that it essentially extends free speech protection to private, non-governmental companies.
  • Without CDA 230, the internet would be a very different place. This section shaped some of the internet’s most distinctive characteristics: it promotes free speech and enables worldwide connectivity.
  • CDA 230 does not fully eliminate liability or court remedies for victims of online defamation. Rather, it makes only the creator of the content liable for that speech, instead of both the speaker and the publisher.

 

 

The Bad

  • Because of the legal protections Section 230 provides, social media networks have less incentive to regulate false or deceptive posts. Deceptive online posts can have an enormous impact on society. False posts can alter election results or fuel dangerous misinformation campaigns, like the QAnon conspiracy theory and the anti-vaccination movement.
  • Section 230 is twenty-five years old and has not been updated to match the internet’s extensive growth.
  • Big Tech companies have been left largely unregulated regarding their online marketplaces.

 

 The Future of 230

While Section 230 is still successfully used by social media platforms, concerns over the archaic legislation have mounted. Just recently, Justice Thomas, who is known for being a quiet Justice, wrote a concurring opinion articulating his view that the government should regulate content providers as common carriers, like utility companies. What implications could that have for the internet? With the growing criticism surrounding Section 230, will Congress finally attempt to fix this legislation? If not, will the Supreme Court be left to tackle the problem itself?

Advertising in the Cloud

Thanks to social media, advertising to a broad range of people across physical and man-made borders has never been easier. Social media has transformed how people and businesses interact throughout the world. In just a few moments, a marketer can create a post advertising their product halfway across the world and almost everywhere in between. Not only that, but Susan, a charming cat lady in west London, can send her friend Linda, who is visiting her son in Costa Rica, an advertisement she saw for sunglasses she thinks Linda might like. The data collected by social media sites allows marketers to target specific groups of people with their advertisements. For example, if Susan were part of a few Facebook cat groups, she would undoubtedly receive more cat tower and cat toy advertisements than the average person.

 

Advertising on social media also allows local stores and venues to reach their communities by targeting groups of people in the local area. New jobs are being created in this space: young entrepreneurs are selling their social media skills to help small business owners build an online presence. Social media has also transformed the way stores advertise; no longer must a store rely solely on a posterboard or a scripted advertisement. Individuals with a large enough social media following are sought out by companies to “review” or test their products for free.

Social media has transformed and expanded the marketplace exponentially. Who we can reach in the world, who we can market to and sell to has expanded beyond physical barriers. With these changes, and newfound capabilities through technology, comes a new legal frontier.

Today, most major brands and companies have their own social media accounts. Building a store’s “online presence” and promoting brand awareness has become a priority for many marketing departments. According to the Internet Advertising Revenue Report: Full Year 2019 Results & Q1 2020 Revenues, “The Interactive Advertising Bureau, an industry trade association, and the research firm eMarketer estimate that U.S. social media advertising revenue was roughly $36 billion in 2019, making up approximately 30% of all digital advertising revenue,” and they expect it to increase to $43 billion in 2020.

The Pew Research Center estimated, “that in 2019, 72% of U.S. adults, or about 184 million U.S. adults, used at least one social media site, based on the results of a series of surveys.”

Companies and individuals are increasingly utilizing these tools, so what are the legal implications?

This area of law is quickly growing. Advertisers can now reach their consumers directly and in an instant, marketing their products at comparable prices. The Federal Trade Commission (FTC) and other federal agencies have expanded their enforcement and guidance in this area. Some examples:

  • The Securities and Exchange Commission’s Regulation Fair Disclosure addresses “the selective disclosure of information by publicly traded companies and other issuers, and the SEC has clarified that disseminating information through social media outlets like Facebook and Twitter is allowed so long as investors have been alerted about which social media will be used to disseminate such information.”
  • The National Labor Relations Act: “While crafting an effective social media policy regarding who can post for a company or what is acceptable content to post relating to the company is important, companies need to ensure that the policy is not overly broad or can be interpreted as limiting employees’ rights related to protected concerted activity.”
  • The Food and Drug Administration (FDA): “Even on social media platforms, businesses running promotions or advertising online have to be careful not to run afoul of FDA disclosure requirements.”

According to the ABA, there are two basic principles in advertising law that apply to any medium:

  1. Advertisers must have a reasonable basis to substantiate claims made; and
  2.  If disclosure is required to prevent an ad from being misleading, such disclosure must appear in a clear and conspicuous manner.

Advertisements directed at children may be subject to more specific regulation under the Children’s Online Privacy Protection Act (COPPA). The Act gives parents control over what information is collected from their children online and sets out approved methods for obtaining verifiable parental consent.

The Future Legality of Our Data

Data brokers are companies that collect information about you and sell that data to other companies or individuals. This information can include everything from family birthdays, addresses, contacts, jobs, education, hobbies, interests, and life events to health conditions. Currently, data brokers are legal in most states, though California and Vermont have enacted laws requiring data brokers to register their operations in the state. Who owns your data? Should you? Should the sites on which you create the data? Should companies be free to sell it? Will states take this issue in different directions, and if so, what would that mean for the companies and sites that have to keep up?

Facebook’s market capitalization stands at $450 billion.

While there is uncertainty in this area of law, it is certain that the field is new, expanding, and bound to require much debate.

According to Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media, “Collecting user data allows operators to offer different advertisements based on its potential relevance to different users.” The data collected by social media companies enables them to build complex strategies and sell advertising “space” targeting specific user groups to companies, organizations, and political campaigns (How Does Facebook Make Money). The capabilities here seem endless: “Social media operators place ad spaces in a marketplace that runs an instantaneous auction with advertisers that can place automated bids.” With the ever-expanding possibilities of social media comes a growing legal frontier.
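To make the quoted description concrete, here is a minimal sketch of how an “instantaneous auction” over a single ad space might work. Everything in it is illustrative: the advertiser names, bid amounts, and interest groups are invented, and the second-price rule is a common auction design used here for clarity, not a description of any particular platform’s actual system.

```python
from dataclasses import dataclass
from typing import List, Optional, Set, Tuple


@dataclass
class Bid:
    advertiser: str      # hypothetical advertiser name
    amount: float        # automated bid for this ad space, in dollars
    interests: Set[str]  # user interest groups the advertiser wants to reach


def run_ad_auction(user_interests: Set[str], bids: List[Bid]) -> Optional[Tuple[str, float]]:
    """Run one simplified auction for a single ad space shown to one user.

    Only bids targeting at least one of the user's interest groups compete.
    The highest bidder wins but pays the runner-up's price (a second-price
    rule, chosen purely for illustration).
    """
    eligible = [b for b in bids if b.interests & user_interests]
    if not eligible:
        return None  # no advertiser targets this user; show nothing (or a default ad)
    ranked = sorted(eligible, key=lambda b: b.amount, reverse=True)
    winner = ranked[0]
    price = ranked[1].amount if len(ranked) > 1 else winner.amount
    return winner.advertiser, price


# A user who, like Susan above, belongs to several cat-related groups
# ends up seeing the cat-focused ad rather than the sunglasses ad.
bids = [
    Bid("CatTowerCo", 2.50, {"cats", "pets"}),
    Bid("SunglassShop", 1.75, {"travel", "fashion"}),
    Bid("PetFoodPlus", 2.10, {"cats"}),
]
print(run_ad_auction({"cats", "gardening"}, bids))  # ('CatTowerCo', 2.1)
```

The legal significance of the quoted language lies in that automation: targeting and pricing decisions like these happen in milliseconds, every time a page loads, far faster than any human review could keep up with.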

Removing Content 

Section 230, a provision of the 1996 Communications Decency Act, states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230). This provision shields social media companies from liability for content posted by their users and allows them to remove lawful but objectionable posts.

One legal issue that has arisen here is the removal of advertisements by content-monitoring algorithms. According to a Congressional Research Service report, during the COVID-19 pandemic social media companies relied more heavily on automated systems to monitor content. These systems can review large volumes of content at a time; however, they mistakenly removed some content. “Facebook’s automated systems have reportedly removed ads from small businesses, mistakenly identifying them as content that violates its policies and causing the business to lose money during the appeals process” (Facebook’s AI Mistakenly Bans Ads for Struggling Businesses). According to Facebook’s Community Standards transparency enforcement report, this has affected a wide range of small businesses. The same report states, “In 2019, Facebook restored 23% of the 76 million appeals it received, and restored an additional 284 million pieces of content without an appeal—about 2% of the content that it took action on for violating its policies.”
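Taken at face value, the percentages in that quote imply some striking absolute numbers. The short calculation below is nothing more than back-of-the-envelope arithmetic on the figures as quoted; the derived totals are implications of those ratios, not numbers reported directly by Facebook.

```python
# Rough arithmetic on the quoted 2019 figures (illustrative, not official totals).
appeals_received = 76_000_000          # appeals Facebook reportedly received in 2019
restored_on_appeal = 0.23 * appeals_received
print(f"Restored after an appeal: ~{restored_on_appeal / 1e6:.1f} million")              # ~17.5 million

restored_without_appeal = 284_000_000  # restored proactively, said to be ~2% of actioned content
implied_total_actioned = restored_without_appeal / 0.02
print(f"Implied content actioned in 2019: ~{implied_total_actioned / 1e9:.1f} billion")  # ~14.2 billion
```

Even as rough orders of magnitude, those numbers suggest why mistaken removals become a question of scale rather than a handful of isolated disputes.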

 

Is Cyberbullying the Newest Form of Police Brutality?

Police departments across the country are calling keyboard warriors into action to help them solve crimes…but at what cost?

In a survey of 539 police departments in the U.S., 76% of departments said that they used their social media accounts to solicit tips on crimes. Departments post “arrested” photos to celebrate arrests, surveillance footage for suspect identification, and some even post themed wanted posters, like the Harford County Sheriff’s Office.

The process for using social media as an investigative tool is dangerously simple and the consequences can be brutal. A detective thinks posting on social media might help an investigation, so the department posts a video or picture asking for information. The community, armed with full names, addresses, and other personal information, responds with some tips and a lot of judgmental, threatening, and bigoted comments. Most police departments have no policy for removing posts after information has been gathered or cases are closed, even if the highlighted person is found to be innocent. A majority of people who are arrested are not even convicted of a crime.

Law enforcement’s use of social media in this way threatens the presumption of innocence, creates a culture of public humiliation, and often results in a comment section of bigoted and threatening comments.

On February 26, 2020, the Manhattan Beach Police Department posted a mugshot of Matthew Jacques on their Facebook and Instagram pages for their “Wanted Wednesday” social media series. The pages have 4,500 and 13,600 followers, respectively, most of them local. The post equated Matthew to a fugitive, and commenters responded publicly with information about where he worked. Matthew tried to call off work out of fear of a citizen’s arrest. The fear turned out to be warranted when two strangers came to find him at his workplace. Matthew eventually lost his job because he was too afraid to return to work.

You may be thinking this is not a big deal. This guy was probably wanted for something really bad and the police needed help. After all, the post said the police had a warrant. Think again.

There was no active warrant for Matthew at the time; his only (already resolved) warrant came from taking too long to schedule remedial classes for a 2017 DUI. Matthew was publicly humiliated by the local police department, which even refused to remove the social media posts after being notified of the truth. The result?

Matthew filed a complaint against the department for defamation (as well as libel per se and false light invasion of privacy). Typically, defamation requires the plaintiff to show:

1) a false statement purporting to be fact; 2) publication or communication of that statement to a third person; 3) fault amounting to at least negligence; and 4) damages, or some harm caused to the person or entity who is the subject of the statement.

Here, the department made a false statement – that there was a warrant. They published it on their social media, satisfying the second element. They did not check readily available public records that showed Matthew did not have a warrant. Finally, Matthew lived in fear and lost his job. Clearly, he was harmed.

The police department claimed their postings were protected by the California Constitution, governmental immunity, and the First Amendment. Fortunately, the court denied the department’s anti-SLAPP motion. Over a year after the postings went up, the department took them down and settled the lawsuit with Matthew.

Some may think that Matthew’s case is an anomaly and that, usually, the negative attention is warranted and perhaps even socially beneficial because it further disincentivizes criminal activity via humiliation and social stigma. However, most arrests don’t result in convictions, so many of the police’s cyberbullying victims are likely innocent. Even for those who are guilty, leaving these posts up can raise the barrier to societal re-entry, which can increase recidivism rates. A negative digital record can make finding jobs and housing more difficult. Many commenters assume the highlighted individual’s guilt and take to their keyboards to shame them.

Here’s one example of a post and comment section from the Toledo Police Department Facebook page:

Unless departments change their social media use policies, they will continue to face defamation lawsuits and continue to further the degradation of the presumption of innocence.

Police departments should discontinue the use of social media in the humiliating ways described above. At the very least, they should consider using this tactic only for violent, felonious crimes. Some departments have already changed their policies.

The San Francisco Police Department has stopped posting mugshots for criminal suspects on social media. According to Criminal Defense Attorney Mark Reichel, “The decision was made in consultation with the San Francisco Public Defender’s Office who argued that the practice of posting mugshots online had the potential to taint criminal trials and follow accused individuals long after any debt to society is paid.” For a discussion of some of the issues social media presents to maintaining a fair trial, see Social Media, Venue and the Right to a Fair Trial.

Do you think police departments should reconsider their social media policies?

Why it Matters: Lawyers, the Spread of Misinformation and Social Media

It is important to remember the role lawyers play in shaping how the public views public figures, attorneys, and the judicial system. This is especially true when posts are made on social media platforms or when statements are otherwise made available to the public. Many recent occurrences bring this to light, most notably Rudy Giuliani’s unproven campaign regarding the “Big Lie,” a/k/a the stolen election. Attorneys and important public figures may need to be held to a higher standard of care and accountability because of the public’s heavy reliance on the truth of their statements. Because of this reliance, social media companies and the courts are forced into action to curb the spread of false information.

Facts on the spread of information on the internet. Many people now rely on social media as a means of communication and as a news source, and for some it is their only source. Information online can now spread faster than through any other news medium in history. The science behind the spread of information online is quite astounding (and there is actual science behind it!).

A Massachusetts Institute of Technology (MIT) study found that “It took the truth about six times as long as falsehood to reach 1500 people and 20 times as long as falsehood to reach a cascade depth of 10. As the truth never diffused beyond a depth of 10, we saw that falsehood reached a depth of 19 nearly 10 times faster than the truth reached a depth of 10.” These numbers show that false information spreads faster, farther, and deeper than the truth. All users of social media, including attorneys, are exposed and susceptible to false information, and our ability to discern true from false has become distorted, leaving many users vulnerable.

 

 

What causes the spread of misinformation, and who is susceptible? The American Psychological Association has published information on the causes of the spread of misinformation and on who is most susceptible. Researchers looked at individual differences and identified that “[b]roadly, political conservativism and lower levels of educational attainment are correlated with an increase in susceptibility to fake news.” Further, “[s]ix ‘degrees of manipulation’—impersonation, conspiracy, emotion, polarization, discrediting, and trolling—are used to spread misinformation and disinformation.” A false news story may quote a fake expert, use emotional language, or propose a conspiracy theory in order to manipulate readers.

People use the following five criteria to decide whether information is true: 1) compatibility with other known information, 2) credibility of the source, 3) whether others believe it, 4) whether the information is internally consistent, and 5) whether there is supporting evidence. The study also shows that people are more likely to accept misinformation as fact if it’s easy to hear or read. “We want people to understand that disinformation is fundamentally exploitative—that it tries to use our religion, our patriotism, and our desire for justice to outrage us and to dupe us into faulty reasoning,” says Peter Adams, the News Literacy Project’s senior vice president of education. “Much of that is a psychological phenomenon.” This information may be helpful in understanding how a once highly respected lawyer and politician is now the focus of disciplinary committee attention.

Rudy Giuliani. Social media is important to the legal profession because court systems and attorneys use it to reach the public and potential clients. Consequently, it is of the utmost importance to respect social media and to understand how it functions so it can serve its intended purpose. Rudy Giuliani, attorney, former Mayor of New York City, and personal counsel to President Trump, is the most prominent current example of an attorney who used social media to spread misinformation. Giuliani is currently involved in numerous lawsuits for spreading a theory of election fraud that was ultimately disproved. Intriguingly, even though the claims lacked supporting evidence and were ultimately dispelled by the judicial system, many members of society accepted these claims as truth, and a large number of people still believe them.

Giuliani made these claims on mainstream media, on his YouTube channel, and to seemingly anyone who would listen, including Fox News. An anonymous source at Fox News stated, “We turned so far right we went crazy.” Giuliani reportedly earned money plugging products for sale during interviews and on his YouTube channel while making the statements at issue. Smartmatic filed suit against Rudy Giuliani and Fox News, among others, in a case separate from the Dominion suit filed against Giuliani. The two suits encompass the same general claims: that Giuliani made false statements that the 2020 U.S. presidential election was stolen, resulting in irreparable harm to the companies.

Both the NYC Bar Association and the New York State Bar Association filed complaints against Mr. Giuliani requesting an investigation into his conduct.

The Appellate Division, First Judicial Department, of the New York Supreme Court suspended Giuliani’s law license on an interim basis in a June 24, 2021 decision, concluding that his conduct threatened the public interest. Not only did his behavior threaten the public interest, it also tarnished the reputation of lawyers and the judicial system as a whole. The opinion further states, “When false statements are made by an attorney, it also erodes public confidence in the legal profession and its role as a crucial source of reliable information.”

Other examples of attorney epic-fails. An Illinois attorney referred to one judge in her blog as “a total asshole” and, in another entry, called a judge “Judge Clueless.” The attorney also wrote about client-specific cases and identified her clients by jail number or first name. That attorney received a 60-day suspension and was terminated from her employment as an Assistant Public Defender. Here, the attorney’s opinion, while hers to hold, could influence other court system employees, attorneys, judges, or laypeople entering the judicial system, leaving them with a preconceived notion of the judge and the judge’s ability to render decisions in a case.

A Tennessee lawyer was suspended for 60 days for giving advice on Facebook about how to kill an ex-boyfriend and make it look like self-defense, while providing information on the new stand-your-ground law and the castle doctrine. And because a Florida lawyer made disparaging statements and accusations of judicial witchcraft, that attorney was disbarred and arrested!

Lawyers are held to a higher standard. Period. While Giuliani’s attorneys argue that his statements are protected under his First Amendment right to free speech, “lawyers, as professionals, are subjected to speech restrictions that would not ordinarily apply to lay persons.” Especially when it comes to judicial review committees.

The legal profession is primarily self-governing because of the professional standards inherent in the job. Attorneys swear an oath to support the Constitution of the United States before admission to practice. Attorneys are expected to uphold certain legal standards, to hold other attorneys to those standards, and, if necessary, to report another attorney’s actions. A grievance committee is used to deter and investigate unethical conduct, which can result in sanctions or the commencement of a formal disciplinary proceeding at the Appellate Court level, as in the case of Mr. Giuliani’s interim suspension.

Rules to keep in mind as a practicing attorney. These rules come from the New York Rules of Professional Conduct:

  • Rule 4.1 governs Truthfulness in Statements to Others and reads, in part, “In the course of representing a client, a lawyer shall not knowingly make a false statement of fact or law to a third person.”
  • Rule 8.3 governs Reporting Professional Misconduct and reads in part, “(a) A lawyer who knows that another lawyer has committed a violation of the Rules of Professional Conduct that raises a substantial question as to that lawyer’s honesty, trustworthiness or fitness as a lawyer shall report such knowledge to a tribunal or other authority empowered to investigate or act upon such violation.”
  • Rule 8.4 governs Misconduct and reads, in part, “A lawyer or law firm shall not: … (c) engage in conduct involving dishonesty, fraud, deceit or misrepresentation” and “(h) engage in any other conduct that adversely reflects on the lawyer’s fitness as a lawyer.”

What can be done to curb the spread of misinformation going forward? It seems inevitable that something has to give when it comes to social media and its downward spiral; whether it hits rock bottom, only time will tell. Social media plays an important role in how our society communicates, shares ideas, and inspires others. But is self-regulation enough? Should there be heightened standards for persons of influence? Should social media be regulated, or are the companies sufficiently regulating themselves? Can the government work together with social media platforms to achieve a higher standard? Is judicial witchcraft even a thing? Regardless, your license to practice law is what it’s all about, so choose your words wisely.


A Slap in the Face(book)?

Social media law has become somewhat of a contentious issue in recent years. While most people nowadays could not imagine life without it, many realize, too, that its influence on our daily lives may not be a great thing. As the technology has advanced to unimaginable levels and the platforms have boomed in popularity, it seems as though our smartphones and Big Tech know our every move. The leading social media platform, Facebook, has around 1.82 billion active users a day, with people volunteering all sorts of personal information to be stored in the internet database. Individual profiles hold pictures of our children, our friends, our family, meals we eat, locations we visit. “What’s on your mind?” is the opening invite to any Facebook page, and one can only hazard a guess as to how many people actually answer that question on a daily basis. Social media sites know our likes, our dislikes, our preferences, our moods, the shoes we want to buy for that dress we are thinking of wearing to the party we are looking forward to in three weeks!

With all that knowledge comes enormous power, and through algorithmic design, social media can manipulate our thoughts and beliefs by controlling what we see and don’t see. With all that power, therefore, should come responsibility, but Section 230 of the Communications Decency Act (CDA) has created a stark disconnect between the two. What started out as a worthy protection for internet service providers against liability for content posted by others has more recently drawn criticism for the lack of accountability of social media oligarchs such as Jack Dorsey (Twitter) and Mark Zuckerberg (Facebook).

However, that could all be about to change.

On May 28, 2017, three friends lost their lives in a deadly car accident in which the 17-year-old driver, Jason Davis, crashed into a tree at an estimated speed of 113 mph. Landen Brown, 20, and Hunter Morby, 17, were passengers. Tragic accident? Or wrongful death?

Parents of the deceased lay blame on the Snapchat App, which offered a ‘Speed Filter’ that would clock how fast you were moving, and allowed users to snap and share videos of their movements in progress.

You see where this is going.

Following what had quickly become a trend, the three youths used the app to see how fast they could record their car going. Just moments before their deaths, Davis had posted a ‘snap’ clocking the car’s speed at 123 mph. In Lemmon v. Snap, the parents of two of the boys brought suit against the social media provider, Snap, Inc., claiming that the app feature encouraged reckless driving and ultimately served to “entice” the young users to their deaths.

Until now, social media platforms and other internet service providers have enjoyed near-absolute immunity from liability. Written in 1996, Section 230 was designed to protect tech companies from liability, in suits such as defamation actions, for third-party posts. In the early days, it was small tech companies, or online businesses with a ‘comments’ feature, that generally saw the benefits of the Code. Twenty-five years later, many people are questioning the role of Section 230 in the vastly developing era of social media and the powerful pass it grants Big Tech for many of its societal shortcomings.

Regarded more as an open forum than the publisher or speaker, social media platforms such as Facebook, Twitter, TikTok, Instagram and Snapchat, have been shielded by Section 230 from any legal claims of harm caused by the content posted on their sites.

Applied broadly, Section 230 arguably prevents Snap, Inc. from being held legally responsible for the deaths of the three boys in this case, which is the defense the tech company relied upon. The district court dismissed the case on those grounds, holding that the captured speeds fall into the category of content published by a third party, for which the service provider cannot be held liable. The Ninth Circuit, however, disagreed. The court’s interesting swerve around such immunity is that the speed filter contributed to the deaths of the boys regardless of whether their captured speeds were posted. In other words, it did not matter whether the vehicle’s speed was shared with others in the app; the fact that the app promotes, and rewards, high speed (although the reward system within the app is not entirely clear) is enough.

The implications of this could be tremendous. At a time when debate over 230 reevaluations is already heavy, this precedential interpretation of Section 230 could lead to some cleverly formulated legal arguments for holding internet service providers accountable for some of the highly damaging effects of internet, social media and smart phone usage.

For all the many benefits the internet has to offer, it can no longer be denied that there is another, very ugly side to internet usage, particularly with social media.

It is somewhat of an open secret that social media platforms such as Facebook and Instagram purposely design their apps to be addictive to their users. It is also no secret that there is a growing association between social media usage and suicide, depression, and other mental health issues. Cyberbullying has long been a very real problem. In addition, studies have shown that smart-device screen time in very young children has shockingly detrimental impacts on a child’s social and emotional development, not to mention the now commonly known damage it can have on a person’s eyesight.

An increased rate of divorce has been linked to smartphones, and distracted driving, whether it be texting or keeping tabs on your Twitter retweets or Facebook ‘likes’, is on the rise. Even an increase in accidents while walking has been linked to distractions caused by these addictive smart devices.

With the idea of accountability being the underlying issue, it can of course be stated that almost all of these problems should be a matter of personal responsibility. Growing apart from your spouse? Ditch your cell phone and reinvent date night. Feeling depressed about your life as you ‘heart’ a picture of your colleague’s wine glass in front of a perfect sunset beach backdrop? Close your laptop and stop comparing yourself to everyone else’s highlights. Step in front of a cyclist while LOL’ing in a group text? Seriously….put your Apple Watch hand in your pocket and look where you are going! The list of personal-blame is endless. But then we hear about three young friends, two still in their teens, who lose their lives engaged with social media, and suddenly it’s not so easy to blame them for their own devastating misfortune.

While social media sites cannot be held responsible for the content posted by others, no matter how hurtful it might be to some, or no matter what actions it leads others to take, should they be held responsible for negligently making their sites so addictive, so emotionally manipulative and so targeted towards individual users, that such extensive and compulsive use leads to dire consequences? According to the Ninth Circuit, negligent app design can in fact be a cause of action for wrongful death.

With a potential crack in the 230-armor, the questions many lawyers will be scrambling to ask are:

      • What duties do the smart device producers and/or internet service providers owe to their users?
      • Are these duties breached by continuing to design, produce, and provide products that are now known to create such disturbing problems?
      • What injuries have occurred, and were those injuries foreseeably caused by any such breaches of duty?

For the time being, it is unlikely that any substantial milestone will be reached with regard to Big Tech accountability, but the Ninth Circuit decision in this case has certainly delivered a powerful blow to Big Tech’s apparent untouchability in the courtroom.

As awareness of all these social media related issues grows, could this court decision open the door to further suits over defective or negligent product design resulting in death or injury? Time will tell… stay tuned.

Facebook Posts Can Land You In Jail!

Did you know that a single Facebook post can land you in jail? It’s true: an acting judge in Westchester, NY recently ruled that a ‘tag’ notification on Facebook violated a protective order. The result of the violation: second-degree contempt, which can lead to punishment of up to a year in jail. In January, a judge issued a restraining order against Maria Gonzalez, prohibiting her from communicating with her former sister-in-law, Maribel Calderon. Restraining orders are issued to prevent a person from making contact with protected individuals. Traditionally, courts interpreted contact to mean direct communications in person or by mail, email, phone, voicemail, or even text. Facebook tags, however, present a slightly different form of contact.

Unlike Facebook messages, tagging someone identifies the tagged person on the poster’s Facebook page. The tag, however, has the concurrent effect of linking to the identified person’s profile, thereby notifying them of the post. Ms. Gonzalez tagged Calderon in a post on her (Gonzalez’s) timeline calling Calderon stupid and writing “you have a sad family.” Gonzalez argued the post did not violate the protective order since there was no contact aimed directly at Calderon. Acting Westchester (NY) County Supreme Court Justice Susan Capeci felt otherwise, writing that a restraining order includes “contacting the protected party by electronic or other means.” Other means, it seems, includes personal posts put out on social media.

And social media posts aren’t just evidence of violations of orders of protection; they are also grounds for supporting the issuance of restraining orders. In 2013, a court granted an order of protection for actress Ashley Tisdale against an alleged stalker. Tisdale’s lawyers presented evidence of over 19,000 tweets that the alleged stalker posted about the actress (an average of 100 tweets per day).

The bottom line: naming another person in a social media post, even one directed to the twittersphere or Facebook community rather than toward a particular individual, is sufficient contact for purposes of supporting a restraining order or a violation thereof. We should all keep our posts positive, even more so if we have been told to stay away!!!

Should Courts allow Facebook Posts as Evidence of Lack of Remorse?

Last month, Orange County prosecutors charged Angelika Graswald with the murder of her fiancé, Vincent Viafore. Ms. Graswald allegedly tampered with Mr. Viafore’s kayak while the two were boating in the icy (yes, again icy – see post below) water of the Hudson River. As a result, prosecutors argue, Mr. Viafore drowned.

Although Mr. Viafore’s body has yet to be found, prosecutors believed that Ms. Graswald’s inconsistent stories and the pictures she posted on Facebook after the accident were sufficient to indict her for her fiancé’s death. They cite as evidence a picture of Ms. Graswald in a yoga pose against a bucolic setting and a video of her doing a cartwheel.

Facebook posts that demonstrate a lack of remorse have been figuring into criminal prosecutions for a while. In 2011, Casey Anthony was indicted in the media for posts she shared of a “Bella Vida” tattoo emblazoned on her back shoulder and pictures showing Ms. Anthony partying while her daughter was still missing. A California judge sentenced a woman to two years in jail for her first DUI offense (typically, first-time offenders are given probation). The judge cited a post-arrest picture the woman posted to MySpace while holding a drink.

But are Facebook posts, with all of their innuendo, a fair measure of guilt? The Casey Anthony jury probably didn’t think so, although all we know for sure is that the posts, considered as part of the prosecution’s entire case, were not sufficient to lead to a guilty verdict. And arguably, the posts, without a body, will not supply the proof beyond a reasonable doubt needed to convict Ms. Graswald.

But should these pictures hold the weight that members of the criminal justice system increasingly ascribe to them? A problem seems to be context. While the pictures seem damning when posted during or soon after an investigation, the evidence is circumstantial at best. Absent testimony by the defendant corroborating his or her intent at the time of the post (an event unlikely to happen), jurors can never be certain whether the pictures demonstrate an expression of relief or a lack of remorse.

The issue of post-indictment remorse transcends social media. Prosecutors recently introduced into evidence a picture of Dzhokhar Tsarnaev (the Boston Bomber) flashing his middle finger at a camera from a jail holding cell. But Tsarnaev’s attorney, like Ms. Graswald’s, spun the picture in a way that suggests it has nothing to do with a lack of remorse.

And therein lies the problem: skilled attorneys on either side can explain pictures, and the intent behind posting them, from several different angles. The issue becomes whether their evidentiary value is sufficient to support an indictment for a crime? A conviction? A sentence?

Thoughts?

From Twitter to Terrorism

A teen was arrested for tweeting an airline terrorist threat. A 14-year-old Dutch girl named Sarah, with the Twitter handle @QueenDemetriax, tweeted the following to American Airlines: “@AmericanAir hello my name’s lbrahim and I’m from Afghanistan. I’m part of Al Qaida and on June 1st I’m gonna do something really big bye.”

In response, American Airlines wrote to Sarah from their official Twitter account, saying “we take these threats very seriously. Your IP address and details will be forwarded to security and the FBI.” Moments after their response, Sarah replied saying “I’m just a girl” and that her initial tweet was simply a joke that her friend wrote! She also posted a tweet apologizing to American Airlines and stating that she was scared.

Sarah turned herself in at a Dutch police station, where the police department stated that it was taking her tweet seriously as an alarming threat. The girl was charged with “posting a false or alarming announcement” under Dutch law. It was unconfirmed whether the FBI was involved, but she gained thousands of followers on Twitter as a result of the incident. Could this become a new trend as a way to gain popularity or recognition? Should Sarah be punished, and if so, how?

Update:

Others are now tweeting similar threats at @AmericanAir and other airlines. Kale tweeted @SouthwestAir, “I bake really good pies and my friends call me ‘the bomb’ am I still allowed to fly?” Donnie Cyrus tweeted @SouthwestAir, “@WesleyWalrus is gonna bomb your next few flights.” ArmyJacket tweeted @AmericanAir, “I have a bomb under the next plane to take off.” There are many other tweets with similar language, all aimed at airlines.

There are no reports yet of any of these follow-up Twitter threats being referred to the appropriate authorities. Are these tweeters going too far? Could these tweets be construed as legitimate threats, or do they remain within the realm of protected free speech?

Social Rift

Another day, another questionable Facebook acquisition, and as engadget.com put it, another instance of the “Facebook” effect. This particular acquisition is the $2 billion purchase of “Oculus VR,” the manufacturer of the Oculus Rift virtual reality headset. Oculus is a particularly notable purchase for Facebook because of its crowdfunding roots. Oculus got its start through the crowdfunding website “Kickstarter,” which allows individuals to contribute money to startups and projects, often essentially pre-purchasing the product they are supporting. Oculus was successfully funded and shipped its VR headsets to qualifying supporters. The Oculus Rift was seen as a device that would change the gaming industry, and supporters, many of them developers, wanted to get in on the ground floor. Since its funding, the Oculus Rift has improved and has been used for numerous projects, demos, and games by developers, artists, and gamers alike.

The future of the Oculus Rift, however, will now be determined by its new owner, Facebook, to the dismay of many of Oculus’ former supporters. This poses an interesting legal question that Kickstarter and startups like Oculus have to consider. What happens when your hundreds of backers on a crowdfunding site like Kickstarter think they are funding a unique grassroots revolution in gaming, and the company turns out to be bought by a social media juggernaut that may intend to take it in a completely different direction? Kickstarter has maintained that supporters on its website are not entitled to shares of the company they are supporting, viewing supporters as donors more than investors. Many of the 9,522 initial Kickstarter backers of Oculus are now demanding their money back and expressing their displeasure through social media, such as on Twitter and on Oculus’ Facebook page (irony noted). Oculus’ Kickstarter page is riddled with comments condemning the acquisition and expressing feelings of betrayal, with backers believing Oculus received a windfall on the shoulders of the supporters who made it what it is today.

Facebook may now be able to provide Oculus with funding far greater than it has ever seen before, but its future in gaming is at risk from a number of factors. The “Facebook effect,” for instance, caused by many people’s distrust of the social media giant, is already having an adverse effect not just on its Kickstarter supporters but also on huge players in the gaming industry the platform needs to rely on. The creator of “Minecraft,” an immensely popular game on a large number of platforms including game consoles, mobile phones, and PCs, tweeted, “We were in talks about maybe bringing a version of Minecraft to Oculus. I just cancelled that deal. Facebook creeps me out.” Oculus also will soon no longer be the only game in town as far as virtual reality is concerned, with Sony recently announcing its own headset, Project Morpheus, for its PlayStation 4 game console. Kotaku.com offered a quote from Sam Biddle of the blog Valleywag that sums up the concerns of many in the crowdfunding community: “For me, it’s now simple: post-Oculus, if you back a large Kickstarter project, you’re a sucker.”

Read more at: Engadget & Kotaku
