Free speech, should it be so free?

In the United States, everybody is entitled to free speech; however, we must not forget that the First Amendment of the Constitution only protects individuals from federal and state action. Free speech is not protected from censorship by private entities, like social media platforms. In addition, Section 230 of the Communications Decency Act (CDA) provides technology companies like Twitter, YouTube, Facebook, Snapchat, and Instagram, as well as other social media giants, immunity from liability arising from the content posted on their websites. The question becomes whether it is fair for individuals who wish to express themselves freely to be banned from certain social media websites for doing so. What is the public policy behind this? What standards do these social media companies employ when determining who should or should not be banned? On the other hand, are social media platforms being used as tools or weapons when it comes to politics? Do they play a role in how the public votes? Are users truly seeing what they think they have chosen to see, or is the displayed content targeted to them in ways that may ultimately create biases?

As we saw earlier this year, former President Trump was banned from several social media platforms as a result of the January 6, 2021 assault on the U.S. Capitol by Trump supporters. It is no secret that our former president is not shy about his comments on a variety of topics. Some audiences view him as outspoken, direct, or perhaps provocative. When Twitter announced its permanent suspension of former President Trump’s account, its stated rationale was to prevent further incitement of violence. After he falsely claimed that the 2020 election had been stolen from him, thousands of Trump supporters gathered in Washington, D.C. on January 5 and January 6, which ultimately led to violence and chaos. As a public figure and a politician, our former president should have known that his actions and viewpoints on social media were likely to have a significant impact on the public. Public figures and politicians should be held to a higher standard, as they represent the citizens who voted for them; as such, they are influential. Technology companies like Twitter saw the former president’s tweets as potential threats to the public as well as a violation of their company policies; hence, banning his account was justified. The ban was an instance of private action as opposed to government action. In other words, former President Trump’s First Amendment rights were not violated.


First, let us discuss the fairness aspect of censorship. Yes, individuals possess the right to free speech; however, if the public’s safety is at stake, action is required to avoid chaos. For example, you cannot falsely scream “fire” in a crowded movie theater, as it would cause panic and unnecessary disorder. There are rules you must comply with in order to use the facility, and those rules are in place to protect the general welfare. As a user, if you don’t like the rules set forth by that facility, you can simply avoid using it. It does not necessarily mean that your idea or speech is strictly prohibited, just not in that particular facility. Similarly, on social media platforms, if users fail to follow company policies, the companies reserve the right to ban them. Public policy arguably outweighs individual freedom here. As for the standards employed by these technology companies, there is no bright line. As I previously mentioned, Section 230 grants them immunity from liability. That being said, the content is largely unregulated, and these social media giants are therefore free to implement and enforce policies as they see fit.


In terms of politics, I believe social media platforms do play a role in shaping their users’ perspectives. This is because the content being displayed is targeted, if not tailored, as these companies collect data based on each user’s preferences and past habits. The activities each user engages in are monitored, measured, and analyzed. In a sense, these platforms are being used as a weapon, as they may manipulate users without the users even knowing. Often we are not even aware that the videos or pictures we see online are being presented to us because of content we had previously seen or selected. In other words, these social media companies may be censoring what they don’t want you to see, or what they think you don’t want to see. For example, some technology companies are pro-vaccination. They are more likely to publish facts about COVID-19 vaccines or posts that encourage their users to get vaccinated. We think we have control over what we see or watch, but do we really?


There are advantages and disadvantages to censorship. Censorship can reduce the negative impact of hate speech, especially on the internet. By limiting certain speech, we create more opportunities for equality. In addition, censorship can curb the spread of racism; for example, posts and videos containing racist comments can be blocked by social media companies if deemed necessary. Censorship can also protect minors from seeing harmful content; because children are easily manipulated, it helps promote safety. Moreover, censorship can be a vehicle to stop false information. During unprecedented times like this pandemic, misinformation can be fatal. On the other hand, censorship may not be good for the public, as it creates a specific narrative in society and can potentially cause biases. For example, many blamed Facebook for the outcome of an election, arguing that such influence is detrimental to our democracy.

Overall, I believe that some form of social media censorship is necessary. The cyber-world is intertwined with the real world. We can’t let people do or say whatever they want, as it may have dramatically detrimental effects. The issue is how to keep the best of both worlds.

 

How Defamation and Minor Protection Laws Ultimately Shaped the Internet


The Communications Decency Act (CDA) was originally enacted with the intention of shielding minors from indecent and obscene online material. Despite those origins, Section 230 of the CDA is now commonly used as a broad legal safeguard allowing social media platforms to shield themselves from liability for content posted on their sites by third parties. Interestingly, the reasoning behind this safeguard arises both from defamation common law and from constitutional free speech law. As the internet has grown, this legal safeguard has drawn increasing criticism. But is the legislation actually undesirable? Many would say no, as Section 230 contains “the 26 words that created the internet.”

 

Origin of the Communications Decency Act

The CDA was introduced and enacted as an attempt to shield minors from obscene or indecent content online. Although parts of the Act were later struck down for First Amendment free speech violations, the Court left Section 230 intact. The creation of Section 230 was influenced by two landmark defamation decisions.

The first case, Cubby, Inc. v. CompuServe Inc., was decided in 1991 and involved an internet service that hosted around 150 online forums. A claim was brought against the internet provider after a columnist on one of the forums posted a defamatory comment about his competitor. The competitor sued the online distributor for the published defamation. The court categorized the internet service provider as a distributor because it did not review any of the forums’ content before that content was posted to the site. As a distributor, the provider faced no legal liability, and the case was dismissed.

 

Distributor Liability

Distributor liability refers to the limited legal consequences that a distributor is exposed to for defamation. Common examples of distributors are bookstores and libraries. The theory behind distributor liability is that it would be impossible for distributors to moderate and censor every piece of content they disperse, both because of the sheer volume and because of the impossibility of knowing whether something is false.

The second case that influenced the creation of Section 230 was Stratton Oakmont, Inc. v. Prodigy Servs. Co., in which the court used publisher liability theory to find the internet provider liable for third-party defamatory postings published on its site. The court deemed the website a publisher because it moderated and deleted certain posts, regardless of the fact that there were far too many postings a day to review each one.

 

Publisher Liability

Under common law principles, a person who publishes a third party’s defamatory statement bears the same legal responsibility as the creator of that statement. This liability is often referred to as “publisher liability,” and it is based on the theory that a publisher has the knowledge, opportunity, and ability to exercise control over the publication. For example, a newspaper publisher can face legal consequences for the content printed within it. The Stratton Oakmont decision was significant because it meant that if a website attempted to moderate certain posts, it would be held liable for all posts.

 

Section 230’s Creation

In response to the Stratton Oakmont case and the ambiguous court decisions regarding internet service providers’ liability, members of Congress introduced an amendment to the CDA that later became Section 230. The amendment was specifically introduced and passed with the goal of encouraging the development of unregulated, free speech online by relieving internet providers of liability for the content on their sites.

 

Text of the Act: Subsection (c)(1)

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

 Section 230 further provides that…

“No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.”

The language above removes legal consequences arising from content that third parties post on a provider’s forum. Courts have interpreted this subsection as providing broad immunity to online platforms from suits over third-party content. Because of this, Section 230 has become the principal legal safeguard against lawsuits over a site’s content.

 

The Good

  • Section 230 can be viewed as one of the most important pieces of legislation protecting free speech online. One of its unique aspects is that it essentially extends free speech protection to private, non-governmental companies.
  • Without CDA 230, the internet would be a very different place. This section influenced some of the internet’s most distinctive characteristics: it promotes free speech and offers the ability for worldwide connectivity.
  • CDA 230 does not fully eliminate liability or court remedies for victims of online defamation. Rather, it makes only the creator liable for their speech, instead of both the creator and the publisher.

 

 

The Bad

  • Because of the legal protections Section 230 provides, social media networks have less incentive to regulate false or deceptive posts. Deceptive online posts can have an enormous impact on society: false posts have the ability to alter election results or fuel dangerous misinformation campaigns, like the QAnon conspiracy theory and the anti-vaccination movement.
  • Section 230 is twenty-five years old and has not been updated to match the internet’s extensive growth.
  • Big Tech companies have been left largely unregulated with regard to their online marketplaces.

 

 The Future of 230

While Section 230 is still successfully invoked by social media platforms, concerns over the archaic legislation have mounted. Just recently, Justice Thomas, who is known for being a quiet Justice, wrote a concurring opinion articulating his view that the government should regulate content providers as common carriers, like utility companies. What implications could that have for the internet? With the growing level of criticism surrounding Section 230, will Congress finally attempt to fix this legislation? If not, will the Supreme Court be left to tackle the problem itself?

A Slap in the Face(book)?

Social media law has become somewhat of a contentious issue in recent years. While most people nowadays could not imagine life without social media, many realize, too, that its influence on our daily lives may not be a great thing. As the technology has advanced to unimaginable levels and the platforms have boomed in popularity, it seems as though our smartphones and Big Tech know our every move. The leading social media platform, Facebook, has around 1.82 billion daily active users, with people volunteering all sorts of personal information to be stored in its databases. Individual profiles hold pictures of our children, our friends, our family, the meals we eat, the locations we visit. “What’s on your mind?” is the opening invite on any Facebook page, and one can only hazard a guess as to how many people actually answer that question on a daily basis. Social media sites know our likes, our dislikes, our preferences, our moods, the shoes we want to buy for that dress we are thinking of wearing to the party we are looking forward to in three weeks!

With all that knowledge comes enormous power, and through algorithmic design, social media can manipulate our thoughts and beliefs by controlling what we see and don’t see. With all that power, therefore, should come responsibility, but Section 230 of the Communications Decency Act (CDA) has created a stark disconnect between the two. What started out as a worthy protection for internet service providers against liability for content posted by others has more recently drawn criticism for the lack of accountability it affords social media oligarchs such as Jack Dorsey (Twitter) and Mark Zuckerberg (Facebook).

However, that could all be about to change.

On May 28, 2017, three friends lost their lives in a deadly car accident in which the 17-year-old driver, Jason Davis, crashed into a tree at an estimated speed of 113 mph. Landen Brown, 20, and Hunter Morby, 17, were passengers. Tragic accident? Or wrongful death?

Parents of the deceased lay blame on the Snapchat app, which offered a ‘Speed Filter’ that clocked how fast you were moving and allowed users to snap and share videos of their movement in progress.

You see where this is going.

As quickly became the trend, the three youths used the app to see how fast they could record the speed of their car. Just moments before their deaths, Davis had posted a ‘snap’ clocking the car’s speed at 123 mph. In Lemmon v. Snap, the parents of two of the boys brought suit against the social media provider, Snap, Inc., claiming that the app feature encouraged reckless driving and ultimately served to “entice” the young users to their deaths.

Until now, social media platforms and other internet service providers have enjoyed the protection of near-absolute immunity from liability. Written in 1996, Section 230 was designed to protect tech companies from liability, in suits such as those for defamation, over third-party posts. In the early days, it was small tech companies, or online businesses with a ‘comments’ feature, that generally saw the benefits of the Code. Twenty-five years later, many people are questioning the role of Section 230 in the vastly developing era of social media and the powerful pass it grants Big Tech for many of its societal shortcomings.

Regarded more as open forums than as publishers or speakers, social media platforms such as Facebook, Twitter, TikTok, Instagram, and Snapchat have been shielded by Section 230 from legal claims of harm caused by the content posted on their sites.

Applied broadly, Section 230 would prevent Snap, Inc. from being held legally responsible for the deaths of the three boys, which is the defense the tech company relied upon. The district court dismissed the case on those grounds, holding that the captured speeds fell into the category of content published by a third party, for which the service provider cannot be held liable. The Ninth Circuit, however, disagreed. The court’s interesting swerve around such immunity is that the speed filter contributed to the deaths of the boys regardless of whether their captured speeds were posted. In other words, it did not matter whether the vehicle’s speed was shared with others in the app; the fact that the app promotes, and rewards, high speed (although the reward system within the app is not entirely clear) is enough.

The implications of this could be tremendous. At a time when debate over 230 reevaluations is already heavy, this precedential interpretation of Section 230 could lead to some cleverly formulated legal arguments for holding internet service providers accountable for some of the highly damaging effects of internet, social media and smart phone usage.

For the many benefits the internet has to offer, it can no longer be denied that there is another, very ugly side to internet usage, in particular with social media.

It is somewhat of an open secret that social media platforms such as Facebook and Instagram purposely design their apps to be addictive to their users. It is also no secret that there is a growing association between social media usage and suicide, depression, and other mental health issues. Cyberbullying has long been a very real problem. In addition, studies have shown that smart-device screen time has shockingly detrimental impacts on very young children’s social and emotional development, not to mention the now commonly known damage it can do to a person’s eyesight.

An increased rate of divorce has been linked to smartphones, and distracted driving, whether it be texting or keeping tabs on your Twitter retweets or Facebook ‘likes,’ is on the rise. Even an increase in accidents while walking has been linked to distractions caused by these addictive smart devices.

With the idea of accountability being the underlying issue, it can of course be argued that almost all of these problems should be a matter of personal responsibility. Growing apart from your spouse? Ditch your cell phone and reinvent date night. Feeling depressed about your life as you ‘heart’ a picture of your colleague’s wine glass in front of a perfect sunset beach backdrop? Close your laptop and stop comparing yourself to everyone else’s highlights. Step in front of a cyclist while LOL’ing in a group text? Seriously… put your Apple Watch hand in your pocket and look where you are going! The list of personal blame is endless. But then we hear about three young friends, two still in their teens, who lose their lives engaged with social media, and suddenly it’s not so easy to blame them for their own devastating misfortune.

While social media sites cannot be held responsible for the content posted by others, no matter how hurtful it might be to some, or no matter what actions it leads others to take, should they be held responsible for negligently making their sites so addictive, so emotionally manipulative and so targeted towards individual users, that such extensive and compulsive use leads to dire consequences? According to the Ninth Circuit, negligent app design can in fact be a cause of action for wrongful death.

With a potential crack in the 230-armor, the questions many lawyers will be scrambling to ask are:

      • What duties do the smart device producers and/or internet service providers owe to their users?
      • Are these duties breached by continuing to design, produce, and provide products that are now known to create such disturbing problems?
      • What injuries have occurred, and were those injuries foreseeably caused by any such breaches of duty?

For the time being, it is unlikely that any substantial milestone will be reached with regard to Big Tech accountability, but the Ninth Circuit’s decision in this case has certainly delivered a powerful blow to Big Tech’s apparent untouchability in the courtroom.

As awareness of all these social media related issues grows, could this court decision open the door to further suits for defective or negligent product design resulting in death or injury? Time will tell… stay tuned.

Facebook Posts Can Land You In Jail!

Did you know that a single Facebook post can land you in jail? It’s true: an acting judge in Westchester, NY recently ruled that a ‘tag’ notification on Facebook violated a protective order. The result of the violation: second-degree contempt, which can carry punishment of up to a year in jail. In January, a judge issued a restraining order against Maria Gonzalez, prohibiting her from communicating with her former sister-in-law, Maribel Calderon. Restraining orders are issued to prevent a person from making contact with protected individuals. Traditionally, courts interpreted contact to mean direct communications in person or by mail, email, phone, voicemail, or even text. Facebook tags, however, present a slightly different form of contact.

Unlike Facebook messages, tagging someone identifies the tagged person on the poster’s Facebook page. The tag, however, has the concurrent effect of linking to the identified person’s profile, thereby notifying them of the post. Ms. Gonzalez tagged Calderon in a post on her (Gonzalez’s) timeline calling Calderon stupid and writing “you have a sad family.” Gonzalez argued the post did not violate the protective order since there was no contact aimed directly at Calderon. Acting Westchester (NY) County Supreme Court Justice Susan Capeci felt otherwise, writing that a restraining order includes “contacting the protected party by electronic or other means.” Other means, it seems, includes personal posts put out on social media.

And social media posts aren’t just evidence of order-of-protection violations; they are also grounds for supporting the issuance of restraining orders. In 2013, a court granted an order of protection for actress Ashley Tisdale against an alleged stalker. Tisdale’s lawyers presented evidence of over 19,000 tweets that the alleged stalker had posted about the actress (an average of 100 tweets per day).

The bottom line: naming another person in a social media post, even one directed at the twittersphere or Facebook community rather than at a particular individual, is sufficient contact for purposes of supporting a restraining order or a violation thereof. We should all keep our posts positive, even more so if we have been told to stay away!

Should Courts allow Facebook Posts as Evidence of Lack of Remorse?

Last month, Orange County prosecutors charged Angelika Graswald with the murder of her fiancé, Vincent Viafore. Ms. Graswald allegedly tampered with Mr. Viafore’s kayak while the two were boating in the icy (yes, again icy; see the post below) waters of the Hudson River. As a result, prosecutors argue, Mr. Viafore drowned.

Although Mr. Viafore’s body has yet to be found, prosecutors believed that Ms. Graswald’s inconsistent stories, along with pictures she posted on Facebook after the accident, were sufficient to indict her for her fiancé’s death. They cite as evidence a picture of Ms. Graswald in a yoga pose against a bucolic setting and a video of her doing a cartwheel.

Facebook posts that demonstrate a lack of remorse have been figuring into criminal prosecutions for a while. In 2011, Casey Anthony was indicted in the media for posts she shared of a “Bella Vida” tattoo emblazoned on her back shoulder and for pictures showing her partying while her daughter was still missing. A California judge sentenced a woman to two years in jail for her first DUI offense (typical first-time offenders are given probation); the judge cited a post-arrest picture the woman posted to MySpace while holding a drink.

But are Facebook posts, with all of their innuendo, a fair measure of guilt? The Casey Anthony jury probably didn’t think so, although all we know for sure is that the posts, considered as part of the prosecution’s entire case, were not sufficient to lead to a guilty verdict. And arguably, posts without a body will not overcome the reasonable doubt standing in the way of convicting Ms. Graswald.

But should these pictures hold the weight that members of the criminal justice system increasingly ascribe to them?  A problem seems to be context.  While the pictures seem damning when posted during or soon after an investigation, the evidence is circumstantial at best.  Absent testimony by the defendant corroborating his or her intent at the time of the post, (an event unlikely to happen) jurors can never be certain that the pictures demonstrate an expression of relief or a lack of remorse.

The issue of post-indictment remorse transcends social media. Prosecutors recently introduced into evidence a picture of Dzhokhar Tsarnaev (the Boston Bomber) flashing his middle finger at a camera from a jail holding cell. But Tsarnaev’s attorney, like Ms. Graswald’s, spun the picture in a way that suggests it has nothing to do with a lack of remorse.

And therein lies the problem: skilled attorneys on either side can explain pictures, and the intent behind posting them, from several different angles. The issue becomes whether their value is sufficient to justify supporting an indictment, a conviction, or a sentence.

Thoughts?

From Twitter to Terrorism

A teen was arrested for tweeting an airline terrorist threat. A 14-year-old Dutch girl named Sarah, with the Twitter handle @QueenDemetriax, tweeted the following to American Airlines: “@AmericanAir hello my name’s lbrahim and I’m from Afghanistan. I’m part of Al Qaida and on June 1st I’m gonna do something really big bye.”

In response, American Airlines wrote to Sarah from its official Twitter account: “we take these threats very seriously. Your IP address and details will be forwarded to security and the FBI.” Moments after the response, Sarah replied that she was “just a girl” and that her initial tweet was simply a joke her friend wrote. She also posted a tweet apologizing to American Airlines and stating that she was now scared.

Sarah turned herself in to Dutch police, who stated that they were taking her tweet seriously since it was an alarming threat. The girl was charged with “posting a false or alarming announcement” under Dutch law. It is unconfirmed whether the FBI was involved, but she gained thousands of followers on Twitter as a result of the incident. Could this be a new trend for gaining popularity or recognition? Should Sarah be punished, and if so, how?

Update:

Others are now tweeting similar messages at @AmericanAir and other airlines. Kale tweeted @SouthwestAir “I bake really good pies and my friends call me ‘the bomb’ am I still allowed to fly?” Donnie Cyrus tweeted @SouthwestAir “@WesleyWalrus is gonna bomb your next few flights.” ArmyJacket tweeted @AmericanAir “I have a bomb under the next plane to take off.” There are many other tweets with similar language, all aimed at airlines.

There are no reports yet of any of these follow-up Twitter threats being referred to the appropriate authorities. Are these tweeters going too far? Could these tweets be construed as legitimate threats, or do they still fall within the realm of freedom of speech?

$70,000 Settlement for a Facebook comment

The Minnewaska School District has agreed to pay Riley Stratton $70,000 to settle the 2012 case involving the former Minnewaska Area Middle School sixth-grader. Stratton is now 15 years old. According to the lawsuit, Stratton was given detention after she posted comments about a teacher’s aide on her Facebook page. The ACLU claimed that the school’s original reason for viewing her page was an allegation that she had used school computers to talk to a boy about sex. However, Stratton used her own personal computer at home to make the post, not a school computer.
The comments about the teacher’s aide that led to the detention were reportedly disparaging. A disputed fact in the case was whether the school had permission to go through her cellphone and request the passwords for her Facebook account. According to Minnewaska Superintendent Greg Schmidt, “It was believed the parent had given permission to look at her cellphone,” but there was no signed waiver from the parent, and there was no policy requiring one.
The fact that the posting was made from her home was a deciding factor in settling the case, according to Schmidt. The lawsuit was brought because Stratton became too distraught and embarrassed to attend class or go to school. Since the settlement, the school has changed its policy: it now requires parents to submit a signed permission waiver before it will look through a student’s cellphone. This case may be an example of schools overreaching their authority in punishing kids for activities outside of school, especially for things that happen on social media.

#Famous on C-Block?… or a Jailhouse-Crock?

In 2008, Jodi Ann Arias put together an elaborate plan to corner her victim, Travis Alexander, and brutally stab him to death. After 29 stab wounds, a slit to his neck that nearly decapitated him, and a gunshot wound to the head, she watched him suffer and take his last breath. She left him in the shower to rot, until he was ultimately found five days later in his Mesa, Arizona home. Due to the heinous nature of the crime, and the fact that she was an “attractive” female, the case garnered enormous media attention. After a lengthy trial, she was found guilty of first-degree murder. Currently, Jodi awaits her fate in the penalty phase, as prosecutor Juan Martinez seeks the death penalty.

As a convicted murderer, Jodi Arias has developed a large body of loyal followers via her Twitter page, which is run by a “friend” and previous fellow inmate. She currently sells artwork on her website, using Twitter to advertise. She also uses Twitter as a platform to promote sales of her wristbands, taunt the victim’s family, solicit donations, poke fun at prosecutor Juan Martinez, belittle her own attorney Kirk Nurmi, and flaunt her media coverage. Should any of this be allowed to happen?

The Son of Sam law, applicable in Arizona, prevents criminals from profiting from their crimes. Although her artwork is not directly related to her crime, her Twitter account brings her enough fame to sustain a healthy volume and a continuous flow of business. Should her horrendous murder be an outlet for her fame? Is fame a legitimate form of profit? Would any of us ever have heard of Jodi Arias if not for the gruesome death of Travis Alexander? Should Jodi Arias have a voice to the outside world after she so horribly extinguished Travis’s forever? Her latest tweet says she is going “Radio Silent.” Considering that jury selection begins soon, her sudden choice to “sign off” seems obvious. Should such use of social media by a convicted murderer ever be allowed?

May it Please The Court, I’d Like to Tweet Now

Last week, the Iowa Supreme Court submitted a proposal to revise its current rules for expanded media coverage of courtroom proceedings, specifically addressing the use of smartphones, tablets, and the like to live-blog and tweet. With most of my courtroom experience to date taking place in NY and PA courts, I found this quite interesting. Although some judges in NY and PA allow certain uses of mobile devices, most courts I have been in had a pretty strict no-cell-phone-use policy. I have, on more than one occasion, witnessed judges stop everything in order to reprimand an attorney or even a gallery member for not having their phone on silent. There are currently 36 states (see the survey link below) with a policy addressing the use of Twitter in the courtroom, but only a handful of those policies actually allow members of the media to use social media to report live from court.

One can immediately see at least some of the upside of allowing live tweets from court, as nationwide dissemination of a tweet grants the general public instantaneous access to and knowledge of everything happening in the proceeding. However, one should just as easily recognize some shortfalls of allowing the use of social media from live court. For instance, what if an empanelled juror came across certain blogs or tweets that affected their impartiality? Can justice truly be served, or will the use of social media during a live trial put certain litigants at a disadvantage? With the exponential growth of social media and more and more people getting their news from social media platforms each year, it seems inevitable that these are questions courts across the country will be facing in the near future. However, according to the most recent survey conducted by the CCPIO, an organization that partners with the National Center for State Courts, we are further away than one might think from all courts hopping on the social media train.
