The Alarming Side of YouTube

Social media has become an integral part of an individual’s life. From Facebook to Twitter, Instagram, and Snapchat to the latest addition, TikTok, social media has made its way into a person’s life and now occupies the same place as eating, sleeping, and exercising. There is no denying the dopamine hit you get from posting on Instagram or scrolling endlessly, liking, sharing, commenting, and re-sharing. From checking your notifications and convincing yourself, “Right, just five minutes, I am going to check my notifications,” to spending hours on social media, it is a mixed bag. While I find that being on social media is, to an extent, a way to relax and alleviate stress, I also believe that social media and its influence on people’s lives should not cross a certain threshold.

We all like a good laugh, whether it comes from people doing funny things on purpose or from people pranking others. Most individuals nowadays use some sort of social media platform to watch or make content. YouTube is one such platform. After Google, YouTube is the most visited website on the internet. Every day, about a billion hours of video are watched by people all over the world. I myself contribute to those billion hours.

Now imagine you are on YouTube. You start watching a famous YouTuber’s video, then realize the video is not only disturbing but also deeply offensive. You stop watching. That’s it. You think it is a horrible video and think no more of it. Yet some videos on YouTube have caused mass controversy across the internet since the platform’s birth in 2005. Let us now explore the dark side of YouTube.

There is an industry centered on pranks played on members of the public that is less about humor and more about shock value. There is nothing wrong with a harmless prank, but a prankster must consider how their actions will be perceived by others; one wrong move and you could end up facing charges or a conviction.

Many creators across social media make such prank videos, and not all of them have been well received by the public or by the creators’ fans. In one such incident, YouTube content creators Alan and Alex Stokes, who are known for their gag videos, pleaded guilty to charges stemming from fake bank robberies they staged.

The twins wore black clothes and ski masks and carried cash-filled duffel bags for a video in which they pretended to have robbed a bank. They then ordered an Uber whose driver, unaware of the prank, refused to drive them. An onlooker, believing the twins had robbed a bank and were attempting to carjack the vehicle, called the police. Officers arrived at the scene and held the driver at gunpoint until it was determined that the robbery was a prank. The brothers were not charged and were let off with a warning. However, they pulled the same stunt at a university some four hours later and were arrested.

They were charged with one felony count of false imprisonment by violence, menace, fraud, or deceit and one misdemeanor count of falsely reporting an emergency. The charges carry a maximum penalty of five years in prison. “These were not pranks. These are crimes that could have resulted in someone getting seriously injured or even killed,” said Todd Spitzer, Orange County district attorney.

The brothers accepted a bargain from the judge: in return for a guilty plea, the felony count would be reduced to a misdemeanor, resulting in one year of probation, 160 hours of community service, and compensation. The plea was entered despite the prosecution’s insistence that tougher charges were warranted. The judge also warned the brothers, who have over 5 million YouTube subscribers, not to make such videos again.

Analyzing the scenario above, I agree with the district attorney. Making prank videos and racking up views should not come at the cost of inciting fear and panic in the community. The situation with the police could have escalated severely, which might have led to a far more gruesome outcome. The twins were very lucky; the man filming a prank video in Tennessee, in the next incident, was not.

While filming a YouTube prank video, 20-year-old Timothy Wilks was shot dead in the parking lot of an Urban Air indoor trampoline park. David Starnes Jr. admitted to shooting Wilks when Wilks and an unnamed individual, wielding butcher knives, approached him and a group of people and lunged at them. David told the police that he shot in defense of himself and others.

Wilks’s friend said they were filming a robbery prank for their YouTube channel. The video was supposed to capture the terrified reactions of their prank victims. David, unaware of the prank, pulled out his gun to protect himself and others. No one has yet been charged in connection with the incident.

The above incident is an example of how pranks can go horribly wrong and cause irreparable damage. It poses the question: who do you blame, the 20-year-old man staging a very dangerous prank video, or the 23-year-old who fired his gun in response?

Monalisa Perez, a YouTuber from Minnesota, fatally shot her boyfriend, Pedro Ruiz, while attempting to film a stunt in which she fired a gun from 30 cm away while he held only a 1.5-inch-thick book to protect him. Perez pleaded guilty to second-degree manslaughter and was sentenced to six months’ imprisonment.

Perez and Ruiz documented their everyday lives in Minnesota by posting prank videos on YouTube to gain views. Before the fatal stunt, Perez tweeted, “Me and Pedro are probably going to shoot one of the most dangerous videos ever. His idea, not mine.”

Perez had experimented beforehand and thought that the hardback encyclopedia would be enough to stop the bullet. She fired a .50-caliber Desert Eagle, an extremely powerful handgun; the bullet pierced the encyclopedia and fatally wounded Ruiz.

Perez will serve a 180-day jail term and 10 years of supervised probation, is banned for life from owning firearms, and may make no financial gain from the case. The sentence is below the minimum guidelines, but it was allowed on the grounds that the stunt was mostly Ruiz’s idea.

Dangerous pranks such as this one have left a man dead and a mother of two grieving for fatally shooting her partner.

In response to growing concern over the filming of such trends and videos, YouTube has updated its policies on “harmful and dangerous” content and explicitly banned pranks and challenges that may cause immediate or lasting physical or emotional harm. The policies page lists three types of videos that are now prohibited: 1) challenges that encourage acts with an inherent risk of severe harm; 2) pranks that make victims believe they are in physical danger; and 3) pranks that cause emotional distress to children.

Prank videos may depict the dark side of how content creation can go wrong, but they are not the only ones. In 2017, YouTuber Logan Paul became the source of controversy after posting a video of himself in Aokigahara, a Japanese forest near the base of Mount Fuji. Aokigahara is a dense forest with lush trees and greenery. It is, however, infamously known as the “suicide forest”: it is a frequent site of suicides and is also considered haunted.

Upon entering the forest, the YouTuber came across a dead body hanging from a tree. Logan Paul’s actions around the body, and his depiction of it, are what caused controversy and outrage. The video has since been taken down from YouTube. Logan Paul posted an apology video defending his actions, which did nothing to quell the anger on the internet. He then released a second video in which he could be seen tearing up on camera. In addressing the incident, YouTube expressed condolences and stated that it prohibits content that is shocking or disrespectful. Paul lost the ability to make money on his videos through advertisements, a penalty known as demonetization. He was also removed from the Google Preferred program, through which brands sell advertising to content creators on YouTube.

The consequences of Logan Paul’s actions did not end there. A production company, Planeless Pictures, is suing the YouTuber, claiming that the Aokigahara video caused the company to lose a multimillion-dollar licensing agreement with Google: the video led Google to end its relationship with Planeless Pictures and not pay the $3.5 million. Planeless Pictures is now demanding that Paul pay that amount as well as additional damages and legal fees.

That is not all. YouTube has been filled with controversies, some of which have resulted in lawsuits.

A YouTuber by the name of Kanghua Ren was fined $22,300 and sentenced to 15 months’ imprisonment for filming himself giving a homeless man Oreos filled with toothpaste. He gave the man 20 euros and Oreo cookies whose cream filling had been replaced with toothpaste. The video depicts the homeless man vomiting after eating a cookie. In the video, Ren stated that although he had gone a bit far, the act would help clean the homeless man’s teeth. The court did not take this lightly and sentenced him; the judge stated that this was not an isolated act and that Ren had shown cruel behavior towards vulnerable victims.

These are some of the pranks and videos that have gained online notoriety. Many other videos have portrayed child abuse, followed the trend of eating Tide Pods, or made and shared anti-Semitic content and racist remarks. The most disturbing thing about these videos is that they are viewed not only by adults but also by children. In my opinion, these videos could be construed as having some influence on young individuals.

YouTube is a diverse platform, home to millions of content creators. Since its inception, it has served as a mode of entertainment and a means of income for many individuals. From cat videos to intricate, detailed, and well-directed short films, YouTube has revolutionized the video and content creation spectrum. As an avid viewer of many channels on YouTube, I find that incidents like these give YouTube a bad name. Proper policies and guidelines should be enacted and enforced, and, if necessary, government supervision may also be exercised.

Don’t Throw Out the Digital Baby with the Cyber Bathwater: The Rest of the Story

This article is in response to “Is Cyberbullying the Newest Form of Police Brutality?” which discussed law enforcement’s use of social media to apprehend people. The article provided a provocative topic, as seen by the number of comments.

I believe that discussion is healthy for society; people are entitled to their feelings and to express their beliefs. Each person has their own unique life experiences that provide a basis for their beliefs and perspectives on issues. I enjoy discussing a topic with someone because I learn about their experiences and new facts that broaden my knowledge. Developing new relationships and connections is so important. Relationships and new knowledge may change perspectives or at least add to understanding each other better. So, I ask readers to join the discussion.

My perspectives were shaped in many ways. I grew up hearing Paul Harvey’s radio broadcast “The Rest of the Story.” His radio segment provided more information on a topic than the brief news headline may have provided. He did not imply that the original story was inaccurate, just that other aspects were not covered. In his memory, I will attempt to do the same by providing you with more information on law enforcement’s use of social media. 

“Is Cyberbullying the Newest Form of Police Brutality?”

The article title served its purpose by grabbing our attention. Neither cyberbullying nor police brutality is acceptable. Cyberbullying is typically envisioned as teenage bullying taking place over the internet. The U.S. Department of Health and Human Services states that “Cyberbullying includes sending, posting, or sharing negative, harmful, false, or mean content about someone else. It can include sharing personal or private information about someone else causing embarrassment or humiliation.” Similarly, police brutality occurs when law enforcement (“LE”) officers use illegal and excessive force in a situation where it is unreasonable, potentially resulting in a civil rights violation or a criminal prosecution.

While the article is accurate that 76% of the surveyed police departments use social media for crime-solving tips, the rest of the story is that even more departments use social media for other purposes: 91% use it to notify the public of safety concerns, 89% use it for community outreach and citizen engagement, and 86% use it for public relations and reputation management. Broad restrictions should not be implemented, as they would negate all the positive community interactions that increase transparency.

Transparency 

In an era where the public is demanding more transparency from LE agencies across the country, how is the disclosure of the public’s information held by the government considered “Cyberbullying” or “Police Brutality”? Local, state, and federal governments are subject to Freedom of Information Act laws requiring agencies to provide information to the public on their websites or release documents within days of requests or face civil liability.

New Jersey Open Public Records

While the New Jersey Supreme Court has not decided whether arrest photographs are public, the New Jersey Government Records Council (“GRC”) decided in Melton v. City of Camden, GRC 2011-233 (2013), that arrest photographs are not public records under the NJ Open Public Records Act (“OPRA”) because of Governor Whitman’s Executive Order 69, which exempts fingerprint cards, plates, photographs, and similar criminal investigation records from public disclosure. It should be noted that GRC decisions are not precedential and therefore not binding on any court.

However, under OPRA, specifically 47:1A-3 (Access to Records of Investigation in Progress), specific arrest information is public information and must be disclosed within 24 hours of a request, including the:

  • Date, time, location, type of crime, and type of weapon;
  • Defendant’s name, age, residence, occupation, marital status, and similar background information;
  • Identity of the complaining party;
  • Text of any charges or indictment, unless sealed;
  • Identity of the investigating and arresting officer and agency, and the length of the investigation;
  • Time, location, and circumstances of the arrest (resistance, pursuit, use of weapons); and
  • Bail information.

For years, even before Melton, I believed that an arrestee’s photograph should not be released to the public. As a police chief, I refused numerous media requests for arrestee photographs, protecting arrestees’ rights and believing in innocence until proven guilty. Even though they have been arrested, arrestees have not yet received due process in court.

New York’s Open Public Records

In New York, under the Freedom of Information Law (“FOIL”), Public Officers Law, Article 6, §89(2)(b)(viii) (General provisions relating to access to records; certain cases), the disclosure of LE arrest photographs would constitute an unwarranted invasion of an individual’s personal privacy unless the public release would serve a specific LE purpose and the disclosure is not prohibited by law.

California’s Open Public Records

Under the California Public Records Act (“CPRA”), a person has the statutory right to be provided with or to inspect public records, unless a record is exempt from disclosure. Arrest photographs are included in arrest records, along with other personal information: the suspect’s full name, date of birth, sex, physical characteristics, occupation, time of arrest, charges, bail information, any outstanding warrants, and parole or probation holds.

Therefore, under New York and California law, the blanket posting of arrest photographs is already prohibited.

Safety and Public Information

Recently, in Americans for Prosperity Foundation v. Bonta, the compelled-donor-disclosure case, while the Court invalidated the law on First Amendment grounds, Justice Alito’s concurring opinion briefly addressed the parties’ personal safety concerns that supporters were subjected to bomb threats, protests, stalking, and physical violence. He cited Doe v. Reed, which upheld disclosures containing home addresses under Washington’s Public Records Act despite the growing risks posed by anyone accessing the information with a computer.

Satisfied Warrant

I am not condoning the Manhattan Beach Police Department’s error of posting information about a satisfied warrant, along with a photograph, in its “Wanted Wednesday” feature in 2020. However, the disclosed information may have been public information under the CPRA then, and even now. On July 23, 2021, Governor Newsom signed a law amending Section 13665 of the CPRA, prohibiting LE agencies from posting photographs of an arrestee accused of a non-violent crime on social media unless:

  • The suspect is a fugitive or an imminent threat, and disseminating the arrestee’s image will assist in the apprehension.
  • There is an exigent circumstance and an urgent LE interest.
  • A judge orders the release or dissemination of the suspect’s image based on a finding that the release or dissemination is in furtherance of a legitimate LE interest.

The critical error was that the posting stated the warrant was active when it was not. A civil remedy exists and was used by the party to reach a settlement for damages. Additionally, it could be argued that the agency’s actions were not the proximate cause of the harm done by vigilantes.

Scope of Influence

LE’s reliance on the public’s help did not start with social media or internet websites. The article pointed out that “Wanted Wednesday” had a mostly local following of 13,600. This raises the question of whether there is much difference between the famous “Wanted” posters of the Wild West or the “Top 10 Most Wanted” posters the Federal Bureau of Investigation (“FBI”) used to distribute to post offices, police stations, and businesses to locate fugitives. It can be argued that this exposure was strictly localized. However, the weekly TV show America’s Most Wanted, made famous by John Walsh, aired from 1988 to 2013, highlighting fugitive cases nationally. The show claims it helped capture over 1,000 criminals through its tip line. That said, national media publicity can be counterproductive, generating so many false leads that credible ones are obscured.

The FBI website contains pages for Wanted People, Missing People, and Seeking Information on crimes. “CAPTURED” labels are added to photographs, showing the results of the agency’s efforts. Local LE agencies should follow FBI practices. I agree with the article that social media and websites should be kept up to date; however, I do not agree that the information must be removed, since it is available elsewhere on the internet anyway.

Time

Vernon Geberth, the leading police homicide investigation instructor, believes time is an investigator’s worst enemy. Eighty-five percent of abducted children are killed within the first five hours; almost all are killed within the first twenty-four. Time is also critical because, for each hour that passes, the distance a suspect’s vehicle can travel expands by seventy-five miles in either direction. In five hours, the search area can grow larger than 17,000 square miles. Like Amber Alerts, social media can be used to quickly transmit information to people across the country in time-sensitive cases.
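As a back-of-the-envelope check of those figures (my arithmetic, not the source’s), one can model the reachable area as a circle whose radius grows by 75 miles for each hour that passes:

```latex
% Radius after t hours at 75 miles per hour in any direction:
%   r(t) = 75t \text{ miles}
% Corresponding search area:
%   A(t) = \pi \, r(t)^2
A(1) = \pi (75)^2 \approx 17{,}671 \text{ square miles}
\qquad
A(5) = \pi (375)^2 \approx 441{,}786 \text{ square miles}
```

Under this simple model, the area already exceeds 17,000 square miles after the first hour, and by five hours it is roughly 25 times that, which only reinforces the point that every hour matters.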

Live-Streaming Drunk Driving Leads to an Arrest

Whitney Beall, a Florida woman, used a live-streaming app to show herself drinking at a bar and then getting into her vehicle. Viewers dialed 911, and a tech-savvy officer opened the app, determined her location, and pulled her over. She was arrested after failing a sobriety test. After pleading guilty to driving under the influence, she was sentenced to 10 days of weekend work release, 150 hours of community service, probation, and a license suspension. In 2019, 10,142 lives were lost in alcohol-impaired driving crashes.

Family Advocating

Social media is not limited to LE. It also provides a platform for victims’ families to keep attention on their cases. The father of a seventeen-year-old created a series of Facebook Live videos about a 2011 murder, resulting in the arrest of Charles Garron, who was sentenced to a fifty-year prison term.

Instagram Selfies with Drugs, Money and Stolen Guns 

Police in Palm Beach County charged a nineteen-year-old man with 142 felony counts, including possession of a weapon by a convicted felon, while investigating burglaries and jewel thefts in senior citizen communities. An officer found his Instagram account, which contained incriminating photographs. A search warrant was executed, and stolen firearms and $250,000 in stolen property from over forty burglaries were seized.

Bank Robbery Selfies


Police received a tip and located a 2015 social media post by John E. Mogan II showing himself with wads of cash. He was charged with robbing an Ashville, Ohio bank. He pleaded guilty and was sentenced to three years in prison. According to news reports, Mogan had previously served prison time for another bank robbery.

Food Post Becomes the Smoking Gun

LE used Instagram to identify an identity thief who posted photographs of his dinner with a confidential informant (“CI”) at a high-end steakhouse. The man claimed he had 700,000 stolen identities and provided the CI with a flash drive of them. Agents linked the flash drive to a “Troy Maye,” whom the CI identified from Maye’s profile photograph. Authorities executed a search warrant on his residence and located flash drives containing the personal identifying information of thousands of identity theft victims. Nathaniel Troy Maye, a 44-year-old New York resident, was sentenced to sixty-six months in federal prison after pleading guilty to aggravated identity theft.

 

Wanted Man Turns Himself in After Facebook Challenge With Donuts

A person began trolling the Redford Township Police during a Facebook Live community update. He turned out to be a 21-year-old wanted for a probation violation for leaving the scene of a DWI collision. When asked to turn himself in, he challenged the PD: if the post got 1,000 shares, he would bring in donuts. The PD took the challenge. The post went viral, reaching that mark within an hour and eventually acquiring over 4,000 shares. He kept his word and appeared with a dozen donuts. He faced 39 days in jail and had other outstanding warrants.

The examples in this article were readily available on the internet and on multiple news websites, along with photographs.

Under state Freedom of Information laws, the public has a statutory right to know what enforcement actions LE is taking. Likewise, the media exercise their First Amendment rights to information daily across the country when publishing news. Cyber journalists are entitled to the same information when publishing news on the internet and social media. Traditional news organizations have adapted to online news to keep their share of the news market. LE agencies now live-stream press conferences to communicate directly with the communities they serve.

Therefore, the positive use of social media by LE should not be thrown out like the bathwater; legal remedies exist when damages are caused.

“And now you know…the rest of the story.”

Free speech, should it be so free?

In the United States, everybody is entitled to free speech; however, we must not forget that the First Amendment of the Constitution only protects individuals from federal and state action. That being said, free speech is not protected from censorship by private entities, like social media platforms. In addition, Section 230 of the Communications Decency Act (CDA) provides technology companies like Twitter, YouTube, Facebook, Snapchat, and Instagram, as well as other social media giants, immunity from liability arising from the content posted on their websites. The question becomes whether it is fair for an individual who desires to express himself or herself freely to be banned from certain social media websites for doing so. What is the public policy behind this? What standards do these social media companies employ when determining who should or should not be banned? On the other hand, are social media platforms being used as tools or weapons when it comes to politics? Do they play a role in how the public votes? Are users truly seeing what they think they have chosen to see, or is the content they are shown targeted at them in ways that may ultimately create biases?

As we saw earlier this year, former President Trump was banned from several social media platforms as a result of the January 6, 2021 assault on the U.S. Capitol by his supporters. It is no secret that our former president is not shy about his comments on a variety of topics. Some audiences view him as outspoken, direct, or perhaps provocative. When Twitter announced its permanent suspension of former President Trump’s account, its rationale was to prevent further incitement of violence. After he falsely claimed that the 2020 election had been stolen from him, thousands of Trump supporters gathered in Washington, D.C. on January 5 and 6, which ultimately led to violence and chaos. As a public figure and a politician, our former president should have known that his actions and viewpoints on social media were likely to have a significant impact on the public. Public figures and politicians should be held to a higher standard, as they represent the citizens who voted for them; as such, they are influential. Technology companies like Twitter saw the former president’s tweets as potential threats to the public as well as a violation of their company policies; hence, banning his account was justified. The ban was an instance of private action as opposed to government action. In other words, former President Trump’s First Amendment rights were not violated.


First, let us discuss the fairness aspect of censorship. Yes, individuals possess the right to free speech; however, if the public’s safety is at stake, action is required to avoid chaos. For example, you cannot scream “fire” out of nowhere in a dark movie theater, as it would cause panic and unnecessary disorder. There are rules you must comply with in order to use the facility, and these rules are in place to protect the general welfare. If you don’t like the rules set forth by that facility, you can simply avoid using it. It does not necessarily mean that your idea or speech is strictly prohibited, just not in that particular facility. Similarly, if users of social media platforms fail to follow company policies, the companies reserve the right to ban them. Public policy arguably outweighs individual freedom. As for the standards employed by these technology companies, there is no bright line. As I previously mentioned, Section 230 grants them immunity from liability. That being said, the content is unregulated, and these social media giants are therefore free to implement and enforce policies as they see fit.


In terms of politics, I believe social media platforms do play a role in shaping their users’ perspectives in some way. This is because the content being displayed is targeted, if not tailored, based on data collected about each user’s preferences and past habits. The activities each user engages in are monitored, measured, and analyzed. In a sense, these platforms are being used as a weapon, as they may manipulate users without the users even knowing. Often we are not even aware that the videos or pictures we see online are being presented to us because of content we previously viewed or selected. In other words, these social media companies may be censoring what they don’t want you to see, or what they think you don’t want to see. For example, some technology companies are pro-vaccination: they are more likely to post facts about COVID-19 vaccines or publish posts that encourage their users to get vaccinated. We think we have control over what we see or watch, but do we really?


There are advantages and disadvantages to censorship. Censorship can reduce the negative impact of hate speech, especially on the internet. By limiting certain speech, we create more opportunities for equality. In addition, censorship impedes the spread of racism; for example, posts and videos containing racist comments could be blocked by social media companies if deemed necessary. Censorship can also protect minors from seeing harmful content. Because children are easily manipulated, it helps promote safety. Moreover, censorship can be a vehicle for stopping false information; during unprecedented times like this pandemic, misinformation can be fatal. On the other hand, censorship may not be good for the public, as it creates a specific narrative in society and can potentially produce biases. For example, many blamed Facebook for the outcome of an election, arguing that such influence is detrimental to our democracy.

Overall, I believe that some sort of social media censorship is necessary. The cyber-world is interrelated to the real world. We can’t let people do or say whatever they want as it may have dramatic detrimental effects. The issue is how do you keep the best of both worlds?

 

Private or not private, that is the question.

Section 230 of the Communications Decency Act (CDA) protects private online companies from liability for content posted by others. This immunity also grants internet service providers the freedom to regulate what is posted on their sites. What has faced much criticism of late, however, is social media’s immense power to silence any voices the platforms’ CEOs disagree with.

Section 230(c)(2), known as the Good Samaritan clause, states that no provider shall be held liable for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

When considered in the context of a 1996 understanding of internet influence (the year the CDA was enacted), this law might seem perfectly reasonable. Fast forward 25 years, though: with how massively influential social media has become over society and the spread of political information, there is now a strong demand for a repeal, or at the very least a review, of Section 230.

The Good Samaritan clause is what shields Big Tech from legal complaint. The law does not define obscene, lewd, lascivious, filthy, harassing, or excessively violent, and “otherwise objectionable” leaves the providers’ room for discretion all the more open-ended. The issue at the heart of many criticisms of Big Tech is that the censorship that companies such as Facebook, Twitter, and YouTube (owned by Google) impose on particular users is not fairly exercised; many conservatives feel they do not receive equal treatment under these policies.

Ultimately, there is little argument around the fact that social media platforms like Facebook and Twitter are private companies, therefore curbing any claims of First Amendment violations under the law. The First Amendment of the US Constitution only prevents the government from interfering with an individual’s right to free speech. There is no constitutional provision that dictates any private business owes the same.

Former President Trump’s recent class action lawsuits against Facebook, Twitter, Google, and each of their CEOs, however, challenge the characterization of these entities as private.

In response to the January 6th Capitol takeover by Trump supporters, Facebook and Twitter suspended the accounts of the then-sitting president of the United States, President Trump.

The justification was that President Trump violated their rules by inciting violence and encouraging an insurrection following the disputed election results of 2020. In the midst of the unrest, Twitter, Facebook, and Google also removed a video posted by Trump in which he called for peace and urged protestors to go home. The explanation given was that “on balance we believe it contributes to, rather than diminishes the risk of ongoing violence,” because the video also doubled down on the belief that the election was stolen.

Following long-standing contentions with Big Tech throughout his presidency, the main argument in the lawsuits is that the tech giants Facebook, Twitter, and Google should no longer be considered private companies, because their respective CEOs, Mark Zuckerberg, Jack Dorsey, and Sundar Pichai, actively coordinate with the government to censor politically oppositional posts.

Those who support Trump probably all wish to believe this case has legal standing.

Anyone else who shares concerns about the almost omnipotent power of Silicon Valley may admit that Trump makes a valid point. But legally, deep down, it might feel like a stretch. Could it be? Should it be? Maybe. But will Trump see the outcome he is looking for? The initial honest answer was “probably not.”

However, on July 15th 2021, White House press secretary, Jen Psaki, informed the public that the Biden administration is in regular contact with Facebook to flag “problematic posts” regarding the “disinformation” of Covid-19 vaccinations.

Wait… what? The White House is in communication with social media platforms to determine what the public is and isn’t allowed to hear regarding vaccine information? Or “disinformation,” as Psaki called it.

Conservative legal heads went into a spin. Is this allowed? Or does this strengthen Trump’s claim that social media platforms are working as third-party state actors?

If it is determined that social media is in fact acting as a strong-arm agent for the government, regarding what information the public is allowed to access, then they too should be subject to the First Amendment. And if social media is subject to the First Amendment, then all information, including information that questions, or even completely disagrees with the left-lean policies of the current White House administration, is protected by the US Constitution.

Referring back to the language of the law, Section 230(c)(2) requires actions to restrict access to information to be taken in good faith. Taking an objective look at some of the posts that are removed from Facebook, Twitter, and YouTube, along with many of the posts that are not, it raises the question of how much “good faith” is truly exercised. When a former president of the United States is still blocked from social media, but the Iranian leader Ali Khamenei is allowed to post what appears to be nothing short of a threat to that same president’s life, it can certainly make you wonder. Or when insistence on unquestioned mass emergency vaccinations, now with continued mask wearing, is rammed down our throats, but a video showing one of the creators of the mRNA vaccine expressing his doubts about the vaccine’s safety for the young is removed from YouTube, it ought to have everyone asking whose side Big Tech is really on. Are they really in the business of allowing populations to make informed decisions of their own, drawing information from a public forum of ideas? Or are they working on behalf of government actors to push an agenda?

One way or another, the courts will decide, but Trump’s class action lawsuit could be a pivotal moment in the future of Big Tech world power.

Is Judges’ Safety at Risk? The Increase in Personal Threats Prompts the Introduction of the Daniel Anderl Judicial Security and Privacy Act

When judges render a legal decision, they hardly anticipate that their commitment to serving the public could make them or their families a target for violence. Rather than undergo the appeals process when an unfavorable verdict is reached, disgruntled civilians are threatening and even attacking the presiding judges and their families, placing them in fear for their lives.

Earlier this month, the federal judiciary introduced legislation which aims to safeguard the personal information of judges and their immediate family members within federal databases and restrict data aggregators from reselling that information. The Administrative Office of the U.S. Courts announced their support for the Daniel Anderl Judicial Security and Privacy Act of 2021, named for the late son of Judge Esther Salas of the U.S. District Court for the District of New Jersey.

The bill comes in response to the tragedy that occurred on July 19, 2020, when an angered attorney disguised as a FedEx delivery driver showed up at the Salas home and opened fire. In attempting to assassinate Salas, the gunman shot and killed her 20-year-old son, Daniel, and wounded her husband, attorney Mark A. Anderl. A day after the racially motivated attack, the gunman, Roy Den Hollander, was found dead from a self-inflicted gunshot wound.

The Manhattan attorney and self-proclaimed “anti-feminist” had appeared in Salas’ courtroom months prior to the attack. According to the FBI, Hollander had detailed information on Salas and her family, in addition to several other targets on his radar. An autobiography published to Hollander’s personal website revealed anti-feminist ideology and his extreme displeasure with Salas, including the following passages:

  • “If she ruled draft registration unconstitutional, the Feminists who believed females deserved preferential treatment would criticize her. If she ruled that it did not violate the Constitution, then those Feminists who advocate for equal treatment would criticize her. Either way it was lose-lose for Salas unless someone took the risk of leading the way”
  • “Female judges didn’t bother me as long as they were middle age or older black ladies…Latinas, however, were usually a problem — driven by an inferiority complex.”
  • In another passage, he wrote that Salas was a “lazy and incompetent Latina judge appointed by Obama.”
  • He criticized Salas’ resume, writing that “affirmative action got her into and through college and law school,” and that her one accomplishment was “high school cheerleader.”

(https://www.goodmorningamerica.com/news/story/suspect-deadly-shooting-called-federal-judge-esther-salas-71901734)

In a news video two weeks after the incident, Salas shared that “unfortunately, for my family, the threat was real, and the free flow of information from the internet allowed this sick and depraved human being to find all our personal information and target us. In my case, the monster knew where I lived and what church we attended and had a complete dossier on me and my family.” Since her son’s killing, Judge Salas has been personally advocating for stronger protections to ensure that judges are able to render decisions without fear of reprisal or retribution – not only for safety purposes, but because our democracy depends on an independent judiciary.

***

Sadly, Judge Salas is not alone in the terrible misfortune that occurred last year. Judges are regularly threatened and harassed, particularly after high-profile legal battles with increased media attention; such threats have increased 400% over the past five years. Four federal judges have been murdered since 1979. District Judge John Wood was assassinated outside his home in 1979 by hitman Charles Harrelson. In 1988, U.S. District Judge Richard Daronco was shot and killed in the front yard of his Pelham, New York, home. In 1989, Circuit Judge Robert Vance was killed when he opened a mail bomb sent to his home. District Judge John Roll was shot in the back and killed in 2011 at an event for Congresswoman Gabrielle Giffords, who was also shot and injured. (https://www.abajournal.com/news/article/federal-judiciary-supports-legislation-to-prevent-access-to-judges-information)

Thankfully, not all threats result in successful or fatal attacks – but intimidation tactics and inappropriate communications directed at federal judges and other court personnel have quadrupled since 2015.

U.S. District Judge Julie Kocurek was shot in front of her family in 2015. She miraculously survived but sustained severe injuries and underwent dozens of surgeries. The attempted assassin was a plaintiff before her court and had been tracking the judge’s whereabouts. Former Texas Federal Judge Liz Lang Miers attributes such attacks to people misperceiving a ruling and acting irrationally, “as opposed to understanding the justice system.”

In 2017, Seattle federal Judge James Robart received more than 42,000 letters, emails and calls, including more than 100 death threats, after he temporarily blocked President Donald Trump’s travel ban that barred people from Iran, Iraq, Libya, Somalia, Sudan, Syria and Yemen from entering the U.S. for 90 days. (https://www.nbcnews.com/news/us-news/attack-judge-salas-family-highlights-concerns-over-judicial-safety-n1234476)

The Internet, notably social media, has amplified citizens’ criticisms of the judicial system. Rather than listening to and comprehending the entirety of a court ruling, an individual can fire off a tweet or post at the click of a button, spreading inaccurate information worldwide. Before long, hundreds of thousands of people have seen that communication and are quick to draw conclusions despite not understanding the merits of the legal opinion. Misinformation (misleading information or arguments, often aimed at influencing a subset of the public) spreads rapidly. Data indicate that articles containing misinformation were among the most viral content, with “falsehoods diffusing significantly farther, faster, deeper, and more broadly than the truth in all categories of information.” (https://voxeu.org/article/misinformation-social-media)

***

Since 1789, federal judges have been entitled to home and court security systems and protection by the U.S. Marshals Service; however, the threats and attacks continue.

Because judges are public servants, their information is made publicly available and is easily accessible through a simple Google search. The Daniel Anderl Judicial Security and Privacy Act would shield the personal information of federal judges and their families, including home addresses, Social Security numbers, contact information, tax records, marital and birth records, vehicle information, photos of their vehicles and homes, and the names of the schools and employers of immediate family members.

Many officials are on board with the proposed legislation. Senator Menendez, who recommended Judge Salas to President Barack Obama for appointment to the federal bench, reveals that “the threats against our federal judiciary are real and they are on the rise. We must give the U.S. Marshals and other agencies charged with guarding our courts the resources and tools they need to protect our judges and their families. I made a personal commitment to Judge Salas that I would put forth legislation to better protect the men and women who sit on our federal judiciary, to ensure their independence in the face of increased personal threats on judges and help prevent this unthinkable tragedy from ever happening again to anyone else.” Moreover, Rep. Fitzpatrick noted that, “in order to bolster our ability to protect our federal judges and their families, we need to safeguard the personally identifiable information of our judges and optimize our nation’s personal data sharing and privacy practices.”

Additionally, the bill is supported by the New Jersey State Bar Association, National Association of Attorneys General, Judicial Conference of the United States, Federal Magistrate Judges Association, American Bar Association (ABA), Dominican Bar Association, New York Intellectual Property Law Association, Federal Bar Council, Hispanic National Bar Association (HNBA), and Federal Judges Association.

***

In memory of Daniel Anderl, taken too soon at 20 years young. As the only child of U.S. District Court Judge Esther Salas and defense attorney Mark Anderl, Daniel gave his life to save his parents. He was a student at Catholic University in Washington, DC. There is a plaque honoring Daniel at the entrance of the Columbus School of Law at Catholic University, as he had planned to pursue a career in law. The plaque is also meant to serve as a reminder to young people.

The Dark Side of TikTok

In Bethany, Oklahoma, a 12-year-old boy was found dead with strangulation marks on his neck. According to police, this wasn’t murder or suicide, but rather a TikTok challenge that had gone horribly wrong. The challenge is known by a variety of names, including the Blackout Challenge, the Pass Out Challenge, Speed Dreaming, and The Fainting Game. It involves kids asphyxiating themselves, either by choking themselves out by hand or by using a rope or a belt, to experience a brief euphoria when they regain consciousness.

Even if the challenge does not result in death, medical professionals warn that it is extremely dangerous. Every moment the brain is deprived of oxygen or blood flow risks irreversible damage to a portion of it.

Unfortunately, the main goal on social media is to gain as many views as possible, regardless of the danger or expense.

Because of the pandemic, kids have been spending a lot of time alone and bored, which has led to preteens participating in social media challenges.

Some social media challenges are harmless, such as the 2014 Ice Bucket Challenge, which raised millions of dollars for ALS research.

However, there has also been the Benadryl challenge, which began in 2020 and urged people to overdose on the drug in an effort to hallucinate. People were also urged to lick surfaces in public as part of the coronavirus challenge.

One of the latest “challenges” on the social media app TikTok could have embarrassing consequences users never imagined possible. The idea of the Silhouette Challenge is to shoot a video of yourself dancing as a silhouette with a red filter covering up the details of your body. It started out as a way to empower people but has turned into a trend that could come back to haunt you. Participants generally start the video in front of the camera fully clothed. When the music changes, the user appears in less clothing, or nude, as a silhouette obscured by a red filter. But the challenge has been hijacked by people using software to remove that filter and reveal the original footage.

“If these filters are removed, that can certainly create an environment where kids’ faces are being put out in the public domain, and their bodies are being shown in ways they didn’t anticipate,” said Mekel Harris, a licensed pediatric and family psychologist. Young people who participate in these types of challenges aren’t thinking about the long-term consequences.

These challenges reveal a darker aspect to the app, which promotes itself as a teen-friendly destination for viral memes and dancing.

TikTok said it would remove such content from its platform. In an updated post to its newsroom, TikTok said:

“We do not allow content that encourages or replicates dangerous challenges that might lead to injury. In fact, it’s a violation of our community guidelines and we will continue to remove this type of content from our platform. Nobody wants their friends or family to get hurt filming a video or trying a stunt. It’s not funny – and since we remove that sort of content, it certainly won’t make you TikTok famous.”

TikTok urged users to report videos containing the challenge, and told BBC News that there is now text reminding users not to imitate or encourage public participation in dangerous stunts and risky behavior that could lead to serious injury or death.

While these challenges may seem funny or rack up views on social media platforms, they can have long-lasting health consequences.

Because the First Amendment gives strong protection to freedom of speech, only publishers and authors are liable for content shared online. Section 230(c)(1) of the Communications Decency Act of 1996 states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This provision gives social media companies immunity over content published by other authors on their platforms, as long as intellectual property rights are not infringed. Although the law does not require social media sites to regulate their content, they can still decide to remove content at their discretion. Guidelines on the laws regarding discretionary content censorship are sparse. Because the government is not regulating speech, this power has fallen into the hands of social media giants like TikTok. Inevitably, the personal agendas of these companies are shaping conversations, highlighting the necessity of debating the place of social media platforms in the national media landscape.

THE ROLE OF SOCIAL MEDIA:

Social media is unique in that it offers a huge public platform, instant access to peers, and measurable feedback in the form of likes, views, and comments. This creates strong incentives to get as much favorable peer evaluation and approval as possible. Social media challenges are particularly appealing to adolescents, who look to their peers for cues about what’s cool, crave positive reinforcement from their friends and social networks, and are more prone to risk-taking behaviors, particularly when they’re aware that those whose approval they covet are watching them.

Teens won’t necessarily stop to consider that laundry detergent is a poison that can burn their throats and damage their airways, or that misusing medications like diphenhydramine (Benadryl) can cause serious heart problems, seizures, and coma. What they will focus on is that a popular kid in class did this and got hundreds of likes and comments.

WHY ARE TEENS SUSCEPTIBLE:

Children are biologically built to become much more susceptible to peer influence during puberty, and social media has magnified those peer-influence processes, making them significantly more dangerous than ever before. Teens may find these activities entertaining and even thrilling, especially if no one is hurt, which increases their likelihood of participating. Teens are already less capable of evaluating danger than adults, so when friends reward them for taking risks – through likes and comments – it may act as a disinhibitor. These youngsters are being influenced on an unconscious level, and the internet trends that are prevalent nowadays are nearly impossible for them to avoid without parental engagement.

WHAT WE CAN DO TO CONTROL THE SITUATION:

Because they did not grow up with these platforms themselves, parents today struggle to address the risks of social media use with their children.

Even so, parents should address viral trends with their children. Parents should check their children’s social media history and communicate with them about their online activities, as well as block certain social media sites and educate themselves on what may be lurking behind their child’s screen.

In the case of viral trends, gauge your child’s familiarity with any trends you may have heard about before soliciting their opinion. You might ask why they think others follow a trend and what they believe are some of the risks of doing so. Use this opportunity to explain why you are concerned about a particular trend.

HOW TO COPE WITH SOCIAL MEDIA USAGE:

It’s important to keep in mind that taking a break is completely appropriate. You are not required to join every discussion, and disabling your notifications may provide some breathing space. You can also set regular reminders to keep track of how long you’ve been using a certain app.

If you’re seeing a lot of unpleasant content in your feed, consider muting or blocking particular accounts or reporting it to the social media company.

If anything you read online makes you feel anxious or frightened, communicate your feelings to someone you trust. Assistance may come from a friend, a family member, a teacher, a therapist, or a helpline. You are not alone, and seeking help is completely OK.

Social media is a natural part of life for young people, and although it may have a number of advantages, it is essential that platforms like TikTok take responsibility for harmful content on their sites.

I welcome the government’s plan to create a regulator to guarantee that social media companies handle cyberbullying and posts encouraging self-harm and suicide.

Additionally, we must ensure that schools teach children what to do if they come across upsetting content online, as well as how to use the internet in a way that benefits their mental health.

To reduce the likelihood of misuse, protections must be implemented.

MY QUESTION TO YOU ALL:

How can social media companies improve their moderation so that children are not left to fend for themselves online? What can they do to improve their in-app security?

How Defamation and Minor Protection Laws Ultimately Shaped the Internet


The Communications Decency Act (CDA) was originally enacted with the intention of shielding minors from indecent and obscene online material. Despite those origins, Section 230 of the CDA is now commonly used as a broad legal safeguard allowing social media platforms to shield themselves from liability for content posted on their sites by third parties. Interestingly, the reasoning behind this safeguard arises both from defamation common law and from constitutional free speech law. As the internet has grown, however, this legal safeguard has drawn increasing criticism. But is the legislation actually undesirable? Many would disagree, as Section 230 contains “the 26 words that created the internet.”

 

Origin of the Communications Decency Act

The CDA was introduced and enacted as an attempt to shield minors from obscene or indecent content online. Although parts of the Act were later struck down for First Amendment free speech violations, the Court left Section 230 intact. The creation of Section 230 was influenced by two landmark defamation decisions.

The first case, decided in 1991, involved an internet service that hosted around 150 online forums. A claim was brought against the internet provider when a columnist on one of the forums posted a defamatory comment about his competitor. The competitor sued the online distributor for the published defamation. The court categorized the internet service provider as a distributor because it did not review any of the forums’ content before it was posted to the site. As a distributor, there was no legal liability, and the case was dismissed.

 

Distributor Liability

Distributor liability refers to the limited legal consequences that a distributor is exposed to for defamation. Common examples of distributors are bookstores and libraries. The theory behind distributor liability is that it would be impossible for distributors to moderate and censor every piece of content they disperse, because of the sheer volume and the impossibility of knowing whether something is false.

The second case that influenced the creation of Section 230 was Stratton Oakmont, Inc. v. Prodigy Servs. Co., in which the court used publisher liability theory to find the internet provider liable for third-party defamatory postings published on its site. The court deemed the website a publisher because it moderated and deleted certain posts, regardless of the fact that there were far too many postings a day to review each one.

 

Publisher Liability

Under common law principles, a person who publishes a third party’s defamatory statement bears the same legal responsibility as the creator of that statement. This liability is often referred to as “publisher liability,” and is based on the theory that a publisher has the knowledge, opportunity, and ability to exercise control over the publication. For example, a newspaper publisher can face legal consequences for the content printed within its pages. The Stratton Oakmont decision was significant because it meant that if a website attempted to moderate certain posts, it would be held liable for all posts.

 

Section 230’s Creation

In response to the Stratton Oakmont case, and to the ambiguous court decisions regarding internet service providers’ liability, members of Congress introduced an amendment to the CDA that later became Section 230. The amendment was specifically introduced and passed with the goal of encouraging the development of unregulated free speech online by relieving internet providers of liability for content posted by others.

 

Text of the Act: Subsection (c)(1)

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

 Section 230 further provides that…

“No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.”

The language above shields providers from legal consequences arising from content that others post on their platforms. Courts have interpreted this subsection as providing broad immunity to online platforms from suits over third-party content. Because of this, Section 230 has become the principal legal safeguard against lawsuits over a site’s content.

 

The Good

  • Section 230 can be viewed as one of the most important pieces of legislation protecting free speech online. One of its unique aspects is that it essentially extends free speech protection to private, non-governmental companies.
  • Without CDA 230, the internet would be a very different place. This section influenced some of the internet’s most distinctive characteristics: the internet promotes free speech and offers the ability for worldwide connectivity.
  • CDA 230 does not fully eliminate liability or court remedies for victims of online defamation. Rather, it makes only the creator liable for their speech, instead of both the speaker and the publisher.

 

 

The Bad

  • Because of the legal protections Section 230 provides, social media networks have less incentive to regulate false or deceptive posts. Deceptive online posts can have an enormous impact on society: false posts can alter election results or fuel dangerous misinformation campaigns, like the QAnon conspiracy theory and the anti-vaccination movement.
  • Section 230 is twenty-five years old, and has not been updated to match the internet’s extensive growth.
  • Big Tech companies have been left largely unregulated regarding their online marketplaces.

 

 The Future of 230

While Section 230 is still successfully invoked by social media platforms, concerns over the archaic legislation have mounted. Just recently, Justice Thomas, who is known for being a quiet Justice, wrote a concurring opinion articulating his view that the government should regulate content providers as common carriers, like utility companies. What implications could that have for the internet? With the growing criticism surrounding Section 230, will Congress finally attempt to fix the legislation? If not, will the Supreme Court be left to tackle the problem itself?

Off Campus Does Still Exist: The Supreme Court Decision That Shaped Students’ Free Speech

We currently live in a world centered around social media. I grew up in a generation where social media apps like Facebook, Snapchat, and Instagram had just become popular. I remember a time when Facebook was limited to college students, and we did not communicate back and forth with pictures that simply disappear. Currently, many students across the country use social media sites as a way to express themselves, but when does that expression go too far? Is it legal to bash other students on social media? What about teachers, after receiving a bad test score? Does it matter who sees the post or where it was written? What if the post disappears after a few seconds? These are all questions that in the past we had no answer to. Thankfully, in the past few weeks the Supreme Court has guided us on how to answer them. In Mahanoy Area School District v. B.L., the Supreme Court decided how far a student’s right to free speech extends and how much control a school district has in restricting a student’s off-campus speech.

The question presented in Mahanoy Area School District v. B.L. was whether a public school has the authority to discipline a student over something they posted on social media while off campus. The student in this case was a girl named Levy. Levy was a sophomore in the Mahanoy Area School District who hoped to make the varsity cheerleading team that year, but unfortunately she did not. She was very upset when she found out a freshman got the position instead, and decided to express her anger about the decision on social media. While in town with a friend at a local convenience store, she sent “F- School, F- Softball, F- Cheerleading, F Everything” to her list of friends on Snapchat, in addition to posting it to her Snapchat story. One of those friends screenshotted the post and sent it to the cheerleading coach. The school district investigated the post, and Levy was suspended from cheerleading for one year. Levy and her parents were extremely upset with this decision, and it resulted in a lawsuit that would shape students’ right to free speech for a long time.

In the lawsuit, Levy and her parents claimed that Levy’s cheerleading suspension violated her First Amendment right to free speech. They sued Mahanoy Area School District under 42 U.S.C. § 1983, claiming that (1) her suspension from the team violated the First Amendment; (2) the school and team rules were overbroad and viewpoint discriminatory; and (3) those rules were unconstitutionally vague. The district court granted summary judgment in favor of Levy, stating that the school had violated her First Amendment rights. The U.S. Court of Appeals for the Third Circuit affirmed the district court’s decision. The Mahanoy School District petitioned for a writ of certiorari, and the case was finally heard by the Supreme Court.

Mahanoy School District argued that the earlier ruling in Tinker v. Des Moines Independent Community School District acknowledges that public schools do not possess absolute authority over students, and that students retain First Amendment speech protections at school so long as their expression does not become substantially disruptive to the proper functioning of the school. Mahanoy emphasized that the Court intended Tinker to extend beyond the schoolhouse gates and cover not just on-campus speech, but any speech likely to result in on-campus harm. Levy countered that the Tinker ruling applies only to speech on school grounds.

In an 8-1 decision, the Court ruled against Mahanoy. The Supreme Court held that Mahanoy School District violated Levy’s First Amendment rights by punishing her for posting a vulgar story on her Snapchat while off campus. The Court found that the speech did not amount to severe bullying, nor was it substantially disruptive to the school itself. The Court also noted that the post was visible only to her friends list on Snapchat and would disappear within 24 hours. It is not the school’s job to act as a parent, but it is its job to make sure actions off campus will not endanger the school. The Supreme Court also stated that although the student’s expression was unfavorable, failing to protect students’ opinions would limit their ability to think for themselves.

It is remarkably interesting to consider how the minor facts of this case determined the ruling. What if the story had been posted on Facebook? One factor that helped the Court reach its decision was that the story was visible only to about 200 of Levy’s friends on Snapchat and would disappear within a day. One can assume that if Levy had made this a Facebook status visible to all, with no posting time frame, the Court could have ruled very differently. Where the Snapchat post was uploaded was another major factor in this case: under the Tinker ruling, if Levy had posted it on school grounds, the Mahanoy School District could have had the authority to discipline her for her post.

Technology is advancing each day, and I am sure that as more social media platforms emerge, the courts will have to set new precedents. I believe the Supreme Court made the right decision in this case. I feel that speech that is detrimental to another individual should be monitored whether it is off-campus or on-campus speech, regardless of the platform on which it is posted. In Levy’s case, no names were listed; she was expressing frustration at not making a team. I do believe the speech was vulgar, but I do not believe that the school, or any other students, suffered severe detriment from this post.

If you were serving as a Justice on the Supreme Court, would you rule against the Mahanoy School District? Do you believe it matters which platform the speech is posted on? What about the location from which it was posted?

Advertising in the Cloud

Thanks to social media, advertising to a broad range of people across physical and man-made borders has never been easier. Social media has transformed how people and businesses interact throughout the world. In just a few moments, a marketer can create a post advertising their product halfway across the world and almost everywhere in between. Not only that, but Susan, a charming cat lady in west London, can send her friend Linda, who is visiting her son in Costa Rica, an advertisement she saw for sunglasses she thinks Linda might like. The data collected by social media sites allows marketers to target specific groups of people with their advertisements. For example, if Susan were part of a few Facebook cat groups, she would undoubtedly receive more cat-tower- and toy-related advertisements than the average person.

Advertising on social media also allows local stores and venues to advertise to their communities, targeting groups of people in the local area. New jobs are being created in this space: young entrepreneurs are selling their social media skills to help small business owners build an online presence. Social media has transformed the way stores advertise as well; stores no longer need to rely solely on a poster board or a scripted advertisement. Individuals with a large enough social media following are sought out by companies to “review” or test their products for free.

Social media has transformed and expanded the marketplace exponentially. Who we can reach in the world, and who we can market and sell to, has expanded beyond physical barriers. With these changes and newfound technological capabilities comes a new legal frontier.

Today, most major brands and companies have their own social media accounts. Building a store’s “online presence” and promoting brand awareness has become a priority for many marketing departments. According to the Internet Advertising Revenue Report: Full Year 2019 Results & Q1 2020 Revenues, “The Interactive Advertising Bureau, an industry trade association, and the research firm eMarketer estimate that U.S. social media advertising revenue was roughly $36 billion in 2019, making up approximately 30% of all digital advertising revenue,” and they expected it to increase to $43 billion in 2020.

The Pew Research Center estimated, “that in 2019, 72% of U.S. adults, or about 184 million U.S. adults, used at least one social media site, based on the results of a series of surveys.”

As companies and people increasingly utilize these tools, what are the legal implications?

This area of law is growing quickly. Advertisers can now reach their consumers directly and instantly, marketing their products at comparable prices. The Federal Trade Commission (FTC) has expanded its enforcement actions in this area, and other regulators have weighed in as well:

  • The Securities and Exchange Commission’s Regulation Fair Disclosure addresses “the selective disclosure of information by publicly traded companies and other issuers, and the SEC has clarified that disseminating information through social media outlets like Facebook and Twitter is allowed so long as investors have been alerted about which social media will be used to disseminate such information.”
  • The National Labor Relations Act: “While crafting an effective social media policy regarding who can post for a company or what is acceptable content to post relating to the company is important, companies need to ensure that the policy is not overly broad or can be interpreted as limiting employees’ rights related to protected concerted activity.”
  • The FDA: “Even on social media platforms, businesses running promotions or advertising online have to be careful not to run afoul of FDA disclosure requirements.”

According to the ABA, there are two basic principles of advertising law that apply to any media:

  1. Advertisers must have a reasonable basis to substantiate claims made; and
  2. If disclosure is required to prevent an ad from being misleading, such disclosure must appear in a clear and conspicuous manner.

Advertisements directed at children may be subject to more specific regulation under the Children’s Online Privacy Protection Act (COPPA), which gives parents control over the information collected from their children online and specifies approved methods for obtaining verifiable parental consent.

The Future Legality of Our Data

Data brokers are companies that collect information about you and sell that data to other companies or individuals. This information can include everything from family birthdays, addresses, contacts, jobs, education, hobbies, and interests to life events and health conditions. Currently, data brokers are legal in most states; California and Vermont have enacted laws that require data brokers to register their operations in the state. Who owns your data? Should you? Should the sites you create the data on? Should companies be free to sell it? Will states take this issue in different directions? If so, what would the implications be for companies and sites trying to keep up?

Facebook’s market capitalization stands at $450 billion.

While there is uncertainty regarding this area of law, it is certain that it is new, expanding and will require much debate. 

According to Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media, “Collecting user data allows operators to offer different advertisements based on its potential relevance to different users.” The data collected by social media companies enables them to build complex strategies and sell advertising “space” targeting specific user groups to companies, organizations, and political campaigns (How Does Facebook Make Money). The capabilities here seem endless: “Social media operators place ad spaces in a marketplace that runs an instantaneous auction with advertisers that can place automated bids.” With the ever-expanding possibilities of social media comes a growing legal frontier.
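To make the auction mechanism described above concrete, here is a minimal, hypothetical sketch of an instantaneous ad auction with automated bids. All names and numbers are illustrative assumptions, not how any real ad exchange works; actual exchanges use far more sophisticated bidding and pricing rules.

```python
# Hypothetical sketch of an instantaneous ad auction: each advertiser's
# automated bidder scales its base bid by how closely the viewing user's
# interests match its target audience, and the highest bid wins the slot.
# (Illustrative model only; real ad exchanges are far more complex.)

def automated_bid(base_bid, user_interests, target_interests):
    """Scale the base bid by the number of matching user interests."""
    overlap = len(set(user_interests) & set(target_interests))
    return base_bid * (1 + overlap)

def run_auction(user_interests, advertisers):
    """Pick the winning advertiser for a single ad impression."""
    bids = {
        name: automated_bid(base, user_interests, targets)
        for name, (base, targets) in advertisers.items()
    }
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

# Illustrative advertisers: (base bid in dollars, target interests).
advertisers = {
    "CatTowerCo": (0.10, ["cats", "pets"]),
    "SunglassHut": (0.15, ["travel", "fashion"]),
}

# A user in several cat groups is matched with the cat-tower ad.
winner, price = run_auction(["cats", "pets", "baking"], advertisers)
```

The sketch shows why data collection matters commercially: the more interest data the platform holds about a user, the more precisely bidders can price each impression.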

Removing Content 

Section 230, a provision of the 1996 Communications Decency Act, states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230). This act shields social media companies from liability for content posted by their users and allows them to remove lawful but objectionable posts.

One legal issue arising here is that advertisements are being taken down by content-monitoring algorithms. According to a Congressional Research Service report, social media companies relied more heavily on automated systems to monitor content during the COVID-19 pandemic. These systems could review large volumes of content at a time; however, they mistakenly removed some of it. “Facebook’s automated systems have reportedly removed ads from small businesses, mistakenly identifying them as content that violates its policies and causing the business to lose money during the appeals process” (Facebook’s AI Mistakenly Bans Ads for Struggling Businesses). According to Facebook’s community standards transparency enforcement report, this has affected a wide range of small businesses. The same report states, “In 2019, Facebook restored 23% of the 76 million appeals it received, and restored an additional 284 million pieces of content without an appeal—about 2% of the content that it took action on for violating its policies.”
