Can Social Media Be Regulated?

In 1996, Congress passed what is known as Section 230 of the Communications Decency Act (CDA), which provides immunity to website publishers for third-party content posted on their websites. The CDA holds that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The Act was created in a different era, one that could hardly envision how fast the internet would grow in the coming years. In 1996, social media consisted of a little-known website called Bolt, and the idea of a global World Wide Web was still very much in its infancy. The internet was still largely based on dial-up technology, and the government was looking to expand its reach. This Act laid the foundation for the explosion of social media, e-commerce, and a society that has grown tethered to the internet.

The advent of smartphones in the late 2000s, coupled with the CDA, set the stage for a society that is constantly tethered to the internet and has allowed companies like Facebook, Twitter, YouTube, and Amazon to carve out niches within our now globally integrated society. In the second quarter of 2021 alone, Facebook averaged over 1.9 billion daily users.

Recent studies conducted by the Pew Research Center show that “[m]ore than eight in ten Americans get news from digital services.”

[Pew Research Center chart: Large majority of Americans get news on digital devices]

While older members of society still turn to news websites online, the younger generation, namely those 18 to 29 years of age, receives its news via social media.

[Pew Research Center chart: Online, most turn to news websites except for the youngest, who are more likely to use social media]

The role social media plays in the lives of the younger generation needs to be recognized. Social media has grown at a far greater rate than anyone could have imagined. Currently, social media operates under its own modus operandi, completely free of government interference, due to its classification as a private entity and its protection under Section 230.

Throughout the 20th century, when television news media dominated the scene, laws were put into effect to ensure that television and radio broadcasters would be monitored by both the courts and government regulatory commissions. For example, “[t]o maintain a license, stations are required to meet a number of criteria. The equal-time rule, for instance, states that registered candidates running for office must be given equal opportunities for airtime and advertisements at non-cable television and radio stations beginning forty-five days before a primary election and sixty days before a general election.”

These laws and regulations were put in place to ensure that the public interest in broadcasting was protected. To give substance to the public interest standard, Congress has from time to time enacted requirements for what constitutes the public interest in broadcasting. But Congress also gave the FCC broad discretion to formulate and revise the meaning of broadcasters’ public interest obligations as circumstances changed.

The Federal Communications Commission’s (FCC) authority is constrained by the First Amendment, but the agency acts as an intermediary that can intervene to correct perceived inadequacies in overall industry performance, so long as it does not trample on the broad editorial discretion of licensees. The Supreme Court has continuously upheld the public trustee model of broadcast regulation as constitutional. Criticisms of regulating social media center on the notion that platforms are purely private entities that do not fall under the purview of the government, and yet these same issues presented themselves in the precedent-setting case of Red Lion Broadcasting Co. v. Federal Communications Commission (1969). In that case, the Court held that the “rights of the listeners to information should prevail over those of the broadcasters.” The Court’s holding centered on the public’s right to information over the rights of a broadcast company to choose what it will share. This is exactly what is at issue today when we look at companies such as Facebook, Twitter, and Snapchat censoring political figures who post views that the companies feel may incite anger or violence.

In essence, what these organizations are doing is keeping information and views from the attention of the present-day viewer. The vessel for the information has changed; it is no longer television or radio but primarily social media. Currently, television and broadcast media are restricted by Section 315(a) of the Communications Act and Section 73.1941 of the Commission’s rules, which “require that if a station allows a legally qualified candidate for any public office to use its facilities (i.e., make a positive identifiable appearance on the air for at least four seconds), it must give equal opportunities to all other candidates for that office to also use the station.” No such restriction exists for social media organizations.

This is not meant to argue for one side or the other, but merely to point out that political discourse is being stifled by these social media entities, which have shrouded themselves in the veil of a private entity. What these companies fail to mention, however, is just how political they truly are. For instance, Facebook proclaims itself to be an unbiased platform for all parties, yet it employs one of the largest lobbying operations in Washington, D.C. Four Facebook lobbyists have worked directly in the office of House Speaker Pelosi. Pelosi herself has a very direct connection to Facebook: she and her husband own between $550,000 and over $1,000,000 in Facebook stock. None of this is illegal; however, it raises the question of just how unbiased Facebook really is.

If the largest source of news for the coming generation is not television, radio, or news publications themselves, but rather social media platforms such as Facebook, then how much power should they be allowed to wield without some form of regulation? The question being presented here is not a new one, but rather the same question asked in 1969, simply phrased differently: how much information is a citizen entitled to, and at what point does access to that information outweigh the rights of the organization to exercise its editorial discretion? I believe the answer is the same now as it was in 1969, and that the government ought to take steps similar to those taken with radio and television. In practice, that means ensuring that, through social media, the public has access to a significant amount of information on public issues so that its members can make rational political decisions. At the end of the day, that is what is at stake: the public’s ability to make rational political decisions.

Large social media conglomerates such as Facebook and Twitter have long outgrown their place as private entities; they have grown into a public medium that has tethered itself to the realities of billions of people. Certain aspects of their operations need to be regulated, mainly those that interfere with the public interest, and there are ways to do so without infringing the First Amendment right of free speech for all Americans. Where social media blends being a private forum for people to express their ideas under firmly stated “terms and conditions” with being an entity that strays into the political field, whether by censoring heads of state or by hiring over $50,000,000 worth of lobbyists in Washington, D.C., regulations need to be put in place that draw a line ensuring the public still maintains the ability to make rational political decisions, decisions that are not influenced by any one organization. The time to address this issue is now, while there is still a middle ground in how people receive their news and formulate opinions.

Say Bye to Health Misinformation on Social Media?

A study from the Center for Countering Digital Hate found that social media platforms failed to act on 95% of coronavirus-related disinformation reported to them.

Over the past few weeks, social media companies have been in the hot seat over their lack of action against fake news and misinformation on their platforms, especially information regarding COVID-19 and the vaccine. Even President Biden weighed in, stating that Facebook and other companies were “killing people” by serving as platforms for misinformation about the COVID-19 vaccine. Biden later clarified that he wasn’t accusing Facebook of killing people, but that he wanted the companies to do something about the misinformation, “the outrageous information about the vaccine.”

A few weeks later, Senator Amy Klobuchar introduced the Health Misinformation Act, which would create an exemption to Section 230 of the Communications Decency Act. Section 230 has always shielded social media companies from liability for almost any content posted on their platforms. Under the Health Misinformation Act, however, social media companies would be liable for the spread of health-related misinformation. The bill would apply only to platforms that use algorithms that promote health misinformation (which most platforms do) and only to misinformation spread during a national public health crisis, such as COVID-19; the exemption would not apply during “normal” times. Additionally, if the bill were to pass, the Department of Health and Human Services would be authorized to define “health misinformation.”

Senator Amy Klobuchar and some of her peers believe the time has come to create an exemption to Section 230 because “for far too long, online platforms have not done enough to protect the health of Americans.” Klobuchar believes the misinformation spread about COVID-19 and the vaccine shows that social media companies have no desire to act, both because the misinformation drives activity on their platforms and because Section 230 shields them from liability for it.
Instead, these companies use misinformation to their advantage, building features that incentivize users to share it and to chase likes, comments, and other engagement, a system that “rewards engagement rather than accuracy.” A study conducted by MIT found that false news stories are 70% more likely to be retweeted than true stories. Social media platforms therefore have little reason to limit this information, especially when it benefits them.

What are the concerns with the Health Misinformation Act?

How will the Department of Health and Human Services define “health misinformation”? It seems very difficult to craft a definition of such a narrow topic that a majority will agree upon. I also expect heavy criticism of the act from the social media companies themselves. For instance, I can imagine them asking how they are supposed to implement the definition of “health misinformation” in their algorithms. What if the information on the health crisis changes? Will a company have to constantly change its algorithms as health guidance changes? At the beginning of the pandemic, for example, guidance on masks shifted from masks not being necessary to masking being crucial to ensure the health and safety of yourself and others.

Will the Bill Pass?

With that being said, I do like the concept of the Health Misinformation Act, because it seeks to hold social media companies accountable for their inaction while protecting the public’s access to accurate health-related information. However, I do not believe this bill will pass, for a few reasons. First, it may violate the First Amendment’s guarantee of freedom of speech: while it isn’t right, it is not illegal for individuals to post their opinions, or misinformation, on social media. Second, as stated earlier, how would social media companies implement these new policies and keep up with a changing definition of “health misinformation,” and how would federal agencies regulate the companies?

What should be done?

“These are some of the biggest, richest companies in the world and they must do more to prevent the spread of deadly vaccine misinformation.”

I believe we need to create more regulations and more exemptions to Section 230, especially because Section 230 was created in 1996, and our world looks and operates very differently than it did then. Social media is an essential part of our business and cultural world.
Overall, I believe more regulations need to be put into place to oversee social media companies. We need transparency from these companies, so the world can understand what is going on behind their closed doors. Transparency will allow agencies to fully understand the algorithms and craft proper regulations.

To conclude, social media companies function as a monopoly: even though there are many platforms, only a handful hold most of the popularity and power. All other major businesses and monopolies must follow strict government regulations, yet social media companies seem exempt from such scrutiny.

While there has been a push over the past few years to repeal or make changes to Section 230, do you think this bill can pass? If not, what can be done to create more regulations?

Free speech, should it be so free?

In the United States everybody is entitled to free speech; however, we must not forget that the First Amendment of the Constitution only protects individuals from federal and state actions. With that being said, free speech is not protected from censorship by private entities, like social media platforms. In addition, Section 230 of the Communications Decency Act (CDA) provides technology companies like Twitter, YouTube, Facebook, Snapchat, and Instagram, as well as other social media giants, immunity from liabilities arising from the content posted on their websites. The question becomes whether it is fair for an individual who desires to express himself or herself freely to be banned from certain social media websites for doing so. What is the public policy behind this? What standards do these social media companies employ when determining who should or should not be banned? On the other hand, are social media platforms being used as tools, or weapons, when it comes to politics? Do they play a role in how the public votes? Are users truly seeing what they think they have chosen to see, or is the content being displayed targeted to them in ways that may ultimately create biases?

As we have seen earlier this year, former President Trump was banned from several social media platforms as a result of the January 6, 2021 assault on the U.S. Capitol by Trump supporters. It is no secret that our former president is not shy about his comments on a variety of topics. Some audiences view him as outspoken, direct, or perhaps provocative. When Twitter announced its permanent suspension of former President Trump’s account, its rationale was to prevent further incitement of violence. After he falsely claimed that the 2020 election had been stolen from him, thousands of Trump supporters gathered in Washington, D.C. on January 5 and 6, which ultimately led to violence and chaos. As a public figure and a politician, our former president should have known that his actions or viewpoints on social media are likely to have a significant impact on the public. Public figures and politicians should be held to a higher standard as they represent the citizens who voted for them. As such, they are influential. Technology companies like Twitter saw the former president’s tweets as potential threats to the public as well as a violation of their company policies; hence, banning his account was justified. The ban was an instance of private action as opposed to government action. In other words, former President Trump’s First Amendment rights were not violated.


First, let us discuss the fairness aspect of censorship. Yes, individuals possess the right to free speech; however, if the public’s safety is at stake, action is required to avoid chaos. For example, you cannot falsely scream “fire” in a dark movie theater, as it would cause panic and unnecessary disorder. There are rules you must comply with in order to use the facility, and these rules are in place to protect the general welfare. As a user, if you don’t like the rules set forth by that facility, you can simply avoid using it. It does not necessarily mean that your idea or speech is strictly prohibited, just not in that particular facility. Similarly, if users of social media platforms fail to follow company policies, the companies reserve the right to ban them. Public policy arguably outweighs individual freedom here. As for the standards employed by these technology companies, there is no bright line. As I previously mentioned, Section 230 grants them immunity from liability. That being said, the content is unregulated, and these social media giants are therefore free to implement and execute policies as they see fit.


In terms of politics, I believe social media platforms do play a role in shaping their users’ perspectives in some way. This is because the content being displayed is targeted, if not tailored, as platforms collect data based on the user’s preferences and past habits. The activities each user engages in are being monitored, measured, and analyzed. In a sense, these platforms are being used as a weapon, as they may manipulate users without the users even knowing. Often we are not even aware that the videos or pictures we see online are being presented to us because of past content we had seen or selected. In other words, these social media companies may be censoring what they don’t want you to see, or what they think you don’t want to see. For example, some technology companies are pro-vaccination. They are more likely to post facts about COVID-19 vaccines or publish posts that encourage their users to get vaccinated. We think we have control over what we see or watch, but do we really?


There are advantages and disadvantages to censorship. Censorship can reduce the negative impact of hate speech, especially on the internet. By limiting certain speech, we create more opportunities for equality. In addition, censorship can prevent the spread of racism; for example, posts and videos containing racial comments can be blocked by social media companies if deemed necessary. Censorship can also protect minors from seeing harmful content, and because children can be manipulated easily, it helps promote safety. Moreover, censorship can be a vehicle to stop false information, and during unprecedented times like this pandemic, misinformation can be fatal. On the other hand, censorship may not be good for the public, as it creates a specific narrative in society and can potentially cause biases. For example, many blamed Facebook for the outcome of an election, arguing that such influence is detrimental to our democracy.

Overall, I believe that some sort of social media censorship is necessary. The cyber-world is interrelated to the real world. We can’t let people do or say whatever they want as it may have dramatic detrimental effects. The issue is how do you keep the best of both worlds?

 

Getting Away with Murder

It’s probably not best to “joke” with someone seeking legal advice about how to get away with murder, even less so on social media, where tone is infamously not always easy to construe. Alas, that is what happened in January 2021, in the case In re Sitton out of Tennessee.

Let’s lay out the facts of the case first. Mr. Sitton is an attorney who has been practicing for almost 25 years. He has a Facebook page on which he identifies himself as an attorney. A Facebook “friend” of his named Lauren Houston had posted a publicly viewable question asking about the legality of carrying a gun in her car in the state of Tennessee. The reason for the inquiry was that she had been involved in a toxic relationship with her ex-boyfriend, who was also the father of her child. Aware of her allegations of abuse, harassment, and violations of a child custody arrangement, and of her requests for orders of protection against the ex, Mr. Sitton decided to comment on the post and offer Ms. Houston some advice. The following was his response to her question:

“I have a carry permit Lauren. The problem is that if you pull your gun, you must use it. I am afraid that, with your volatile relationship with your baby’s daddy, you will kill your ex, your son’s father. Better to get a taser or a canister of tear gas. Effective but not deadly. If you get a shot gun, fill the first couple rounds with rock salt, the second couple with bird shot, then load for bear.

If you want to kill him, then lure him into your house and claim he broke in with intent to do you bodily harm and that you feared for your life. Even with the new stand your ground law, the castle doctrine is a far safer basis for use of deadly force.”

 

Ms. Houston then replied to Mr. Sitton, “I wish he would try.” Mr. Sitton then replied again, “As a lawyer, I advise you to keep mum about this if you are remotely serious. Delete this thread and keep quiet. Your defense is that you are afraid for your life; revenge or premeditation of any sort will be used against you at trial.” Ms. Houston then deleted the post, following Mr. Sitton’s advice.

Ms. Houston’s ex-boyfriend eventually found out about the post, including Mr. Sitton’s comments, and passed screenshots of it to the Attorney General of Shelby County, who then sent them to Tennessee’s Board of Professional Responsibility (the “Board”). In August 2018, the Board filed a petition for discipline against Mr. Sitton. The petition alleged that he violated the Rules of Professional Conduct by “counseling Ms. Houston about how to engage in criminal conduct in a manner that would minimize the likelihood of arrest or conviction.”

Mr. Sitton admitted most of the basic facts but claimed his comments were taken out of context. Among the things he admitted during the Board’s hearing was that he identified himself as a lawyer in his Facebook posts and intended to give Ms. Houston legal advice and information. He noted that Ms. Houston engaged with him on Facebook about his legal advice, and he felt she “appreciated that he was helping her understand the laws of the State of Tennessee.” Mr. Sitton went on to claim that his only intent in posting the Facebook comments was to convince Ms. Houston not to carry a gun in her car. He maintained that his posts about using the protection of the “castle doctrine” to lure Mr. Henderson, the ex-boyfriend, into Ms. Houston’s home to kill him were “sarcasm” or “dark humor.”

The hearing panel found Mr. Sitton’s claim that his “castle doctrine” comments were “sarcasm” or “dark humor” unpersuasive, noting that this depiction was contradicted by his own testimony and Ms. Houston’s posts. The panel instead determined that Mr. Sitton intended to give Ms. Houston legal advice about a legally “safer basis for use of deadly force.” Pointing out that the Facebook comments were made in a “publicly posted conversation,” the hearing panel found that “a reasonable person reading these comments certainly would not and could not perceive them to be ‘sarcasm’ or ‘dark humor.’” The panel also noted that Mr. Sitton lacked any remorse for his actions: it acknowledged that he conceded his Facebook posts were “intemperate” and “foolish,” but pointed out that he maintained, “I don’t think what I told her was wrong.”

The Board decided to suspend Mr. Sitton for only 60 days. However, the Supreme Court of Tennessee reviews all punishments once the Board submits a proposed order of enforcement against an attorney, to ensure the punishment is fair and uniform with similar cases throughout the state. The Supreme Court found the 60-day suspension insufficient and increased Mr. Sitton’s punishment to a one-year active suspension followed by three years on probation.

Really? While I’m certainly glad the Tennessee Supreme Court increased his suspension, I still think one year is dramatically too short. How do you allow an attorney who has been practicing for nearly 25 years to serve only a one-year suspension for instructing someone on how to get away with murder? Especially when both the court and the hearing panel found no mitigating factors and concluded that a reasonable person would not interpret his comments as dark humor but as real legal advice? What’s even more mind-boggling is that the court found Mr. Sitton violated ABA Standards 5.1 (Failure to Maintain Personal Integrity) and 6.1 (False Statements, Fraud, and Misrepresentation), but then essentially said there was no area within those two rules into which his actions neatly fall, and that this is why it imposed only a one-year suspension. That is simply inaccurate: under the sanction guidelines for violations of 5.1 and 6.1 (which the court included in its opinion), it is abundantly obvious that Mr. Sitton’s actions do fall within them, so it is a mystery how the court found otherwise.

 

If you were the judge ruling on this disciplinary case, what sentencing would you have handed down?

The Dark Side of TikTok

In Bethany, Oklahoma, a 12-year-old child died of strangulation. According to police, this was not murder or suicide, but a TikTok challenge gone horribly wrong. The challenge is known by a variety of names, including the Blackout Challenge, the Pass Out Challenge, Speed Dreaming, and the Fainting Game. It involves kids asphyxiating themselves, either by choking themselves out by hand or by using a rope or a belt, to obtain a euphoric feeling when they wake up.

Even when the challenge does not result in death, medical professionals warn that it is extremely dangerous: every moment without oxygen or blood flow risks irreversible damage to a portion of the brain.

Unfortunately, the main goal on social media is to gain as many views as possible, regardless of the danger or expense.

Because of the pandemic, kids have been spending a lot of time alone and bored, which has led preteens to participate in social media challenges.

Some social media challenges are harmless, such as the 2014 Ice Bucket Challenge, which earned millions of dollars for ALS research.

However, there has also been the Benadryl challenge, which began in 2020 and urged people to overdose on the drug in an effort to hallucinate, and the coronavirus challenge, which urged people to lick surfaces in public.

One of the latest “challenges” on the social media app TikTok could have embarrassing consequences users never imagined possible. The idea of the Silhouette Challenge is to shoot a video of yourself dancing as a silhouette with a red filter covering up the details of your body. It started out as a way to empower people but has turned into a trend that could come back to haunt you. Participants generally start the video in front of the camera fully clothed. When the music changes, the user appears in less clothing, or nude, as a silhouette obscured by a red filter. But the challenge has been hijacked by people using software to remove that filter and reveal the original footage.

“If these filters are removed, that can certainly create an environment where kids’ faces are being put out in the public domain, and their bodies are being shown in ways they didn’t anticipate,” said Mekel Harris, a licensed pediatric and family psychologist. Young people who participate in these types of challenges aren’t thinking about the long-term consequences.

These challenges reveal a darker aspect to the app, which promotes itself as a teen-friendly destination for viral memes and dancing.

TikTok said it would remove such content from its platform. In an updated post to its newsroom, TikTok said:

“We do not allow content that encourages or replicates dangerous challenges that might lead to injury. In fact, it’s a violation of our community guidelines and we will continue to remove this type of content from our platform. Nobody wants their friends or family to get hurt filming a video or trying a stunt. It’s not funny – and since we remove that sort of content, it certainly won’t make you TikTok famous.”

TikTok urged users to report videos containing the challenge. And it told BBC News there was now text reminding users to not imitate or encourage public participation in dangerous stunts and risky behavior that could lead to serious injury or death.

While these challenges may seem funny or earn views on social media platforms, they can have long-lasting health consequences.

Because the First Amendment gives strong protection to freedom of speech, only publishers and authors are liable for content shared online. Section 230(c)(1) of the Communications Decency Act of 1996 states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This act provides social media companies immunity over content published by other authors on their platforms, as long as intellectual property rights are not infringed. Although the law does not require social media sites to regulate their content, they can still decide to remove content at their discretion. Guidelines on discretionary content censorship are sparse. Because the government is not regulating speech, this power has fallen into the hands of social media giants like TikTok. Inevitably, the personal agendas of these companies are shaping conversations, highlighting the need to debate the place of social media platforms in the national media landscape.

THE ROLE OF SOCIAL MEDIA:

Social media is unique in that it offers a huge public platform, instant access to peers, and measurable feedback in the form of likes, views, and comments. This creates strong incentives to get as much favorable peer evaluation and approval as possible. Social media challenges are particularly appealing to adolescents, who look to their peers for cues about what’s cool, crave positive reinforcement from their friends and social networks, and are more prone to risk-taking behaviors, particularly when they’re aware that those whose approval they covet are watching them.

Teens won’t necessarily stop to consider that laundry detergent is a poison that can burn their throats and damage their airways, or that misusing medications like diphenhydramine (Benadryl) can cause serious heart problems, seizures, and coma. What they will focus on is that a popular kid in class did this and got hundreds of likes and comments.

WHY ARE TEENS SUSCEPTIBLE:

Children become biologically far more susceptible to peer influence during puberty, and social media has magnified those peer-influence processes, making them more dangerous than ever before. Teens may find these activities entertaining and even thrilling, especially if no one is hurt, which increases their likelihood of participating. Teens are already less capable than adults of evaluating danger, so when friends reward them for taking risks – through likes and comments – those rewards can act as a disinhibitor. These youngsters are being influenced on an unconscious level, and the internet is now so pervasive that they cannot avoid these pressures on their own. Countering them requires parental engagement.

WHAT WE CAN DO TO CONTROL THE SITUATION:

Due to their lack of exposure to these effects as children, parents today struggle to address the risks of social media use with their children.

Even so, parents should address viral trends with their children. Parents should check their children’s social media history and communicate with them about their online activities, as well as block certain social media sites and educate themselves on what may be lurking behind their child’s screen.

In the case of viral trends, gauge your child’s familiarity with any trend you may have heard about before soliciting their opinion. You might ask why they think others follow the trend and what they believe are some of the risks of doing so. Use this opportunity to explain why a certain trend concerns you.

HOW TO COPE WITH SOCIAL MEDIA USAGE:

It’s important to keep in mind that taking a break is completely appropriate. You are not required to join in every discussion, and disabling your notifications may provide some breathing space. You may set regular reminders to keep track of how long you’ve been using a certain app.

If you’re seeing a lot of unpleasant content in your feed, consider muting or blocking particular accounts or reporting it to the social media company.

If anything you read online makes you feel anxious or frightened, communicate your feelings to someone you trust. Assistance may come from a friend, a family member, a teacher, a therapist, or a helpline. You are not alone, and seeking help is completely OK.

Social media is a natural part of life for young people, and although it may have a number of advantages, it is essential that platforms like TikTok take responsibility for harmful content on their sites.

I welcome the government’s plan to create a regulator to guarantee that social media companies handle cyberbullying and posts encouraging self-harm and suicide.

Additionally, we must ensure that schools teach children what to do if they come across upsetting content online, as well as how to use the internet in a way that benefits their mental health.

To reduce the likelihood of misuse, protections must be implemented.

MY QUESTION TO YOU ALL:

How can social media companies improve their moderation so that children are not left to fend for themselves online? What can they do to improve their in-app security?

How Defamation and Minor Protection Laws Ultimately Shaped the Internet


The Communications Decency Act (CDA) was originally enacted to shield minors from indecent and obscene online material. Despite those origins, Section 230 of the CDA is now commonly used as a broad legal safeguard allowing social media platforms to shield themselves from liability for content posted on their sites by third parties. Interestingly, the reasoning behind this safeguard arises both from defamation common law and from constitutional free speech principles. As the internet has grown, however, this legal safeguard has drawn increasing criticism. But is the legislation actually undesirable? Many would say no, since Section 230 contains “the 26 words that created the internet.”

 

Origin of the Communications Decency Act

The CDA was introduced and enacted as an attempt to shield minors from obscene or indecent content online. Although parts of the Act were later struck down for First Amendment free speech violations, the Court left Section 230 intact. The creation of Section 230 was influenced by two landmark court decisions in defamation lawsuits.

The first case, decided in 1991, involved an internet site that hosted around 150 online forums. A claim was brought against the internet provider after a columnist on one of the forums posted a defamatory comment about a competitor. The competitor sued the online distributor for the published defamation. The court categorized the internet service provider as a distributor because it did not review any forum content before that content was posted to the site. As a distributor, the provider bore no legal liability, and the case was dismissed.

 

Distributor Liability

Distributor liability refers to the limited legal consequences a distributor faces for defamation. Common examples of distributors are bookstores and libraries. The theory behind distributor liability is that it would be impossible for distributors to moderate and censor every piece of content they disperse, given the sheer volume and the impossibility of knowing whether something is false.

The second case that influenced the creation of Section 230 was Stratton Oakmont, Inc. v. Prodigy Servs. Co., in which the court used publisher liability theory to find the internet provider liable for third-party defamatory postings published on its site. The court deemed the website a publisher because it moderated and deleted certain posts, even though there were far too many postings a day to regulate each one.

 

Publisher Liability

Under common law principles, a person who publishes a third party’s defamatory statement bears the same legal responsibility as the creator of that statement. This liability is often referred to as “publisher liability,” and is based on the theory that a publisher has the knowledge, opportunity, and ability to exercise control over the publication. For example, a newspaper publisher can face legal consequences for the content printed within its pages. The Stratton Oakmont decision was significant because it meant that if a website attempted to moderate certain posts, it would be held liable for all posts.

 

Section 230’s Creation

In response to the Stratton Oakmont case and the ambiguous court decisions regarding internet service providers’ liability, members of Congress introduced an amendment to the CDA that later became Section 230. The amendment was specifically introduced and passed with the goal of encouraging the development of unregulated, free speech online by relieving internet providers of liability for third-party content.

 

Text of the Act – Subsection (c)(1)

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

 Section 230 further provides that…

“No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.”

The language above shields providers from legal consequences arising from content that others post on their forums. Courts have interpreted this subsection as providing online platforms with broad immunity from suits over third-party content. Because of this, Section 230 has become the principal legal safeguard against lawsuits over a site’s content.

 

The Good

  • Section 230 can be viewed as one of the most important pieces of legislation protecting free speech online. One of its unique aspects is that it essentially extends free speech protection to private, non-governmental companies.
  • Without CDA 230, the internet would be a very different place. This section shaped some of the internet’s most distinctive characteristics: it promotes free speech and offers the ability for worldwide connectivity.
  • CDA 230 does not fully eliminate liability or court remedies for victims of online defamation. Rather, it makes only the creator liable for their speech, instead of both the creator and the publisher.

The Bad

  • Because of the legal protections Section 230 provides, social media networks have less incentive to regulate false or deceptive posts. Deceptive online posts can have an enormous impact on society: false posts can alter election results or fuel dangerous misinformation campaigns, like the QAnon conspiracy theory and the anti-vaccination movement.
  • Section 230 is twenty-five years old and has not been updated to match the internet’s extensive growth.
  • Big Tech companies have been left largely unregulated regarding their online marketplaces.

 

The Future of 230

While Section 230 is still successfully invoked by social media platforms, concerns over the aging legislation have mounted. Just recently, Justice Thomas, known for being a quiet Justice, wrote a concurring opinion articulating his view that the government should be able to regulate content providers as common carriers, like utility companies. What implications could that have for the internet? With the growing criticism surrounding Section 230, will Congress finally attempt to fix this legislation? If not, will the Supreme Court be left to tackle the problem itself?

Cancel Culture: The Biggest Misconception of the 21st Century

Cancel culture refers to the popular practice of withdrawing support for (canceling) public figures and companies after they have done or said something considered objectionable or offensive.

Being held accountable isn’t new.

If a public figure has done or said something offensive to me, why can’t I express my displeasure or discontinue my support for them? Cancel culture is just accountability culture. Words have consequences, and accountability is one of them. And this is nothing new: we are judged by what we say in our professional and personal lives. For example, whether we like it or not, when we are on a job hunt we are held accountable for what we say or may have said in the past. According to Sandeep Rathore’s 2020 article “90% of Employers Consider an Applicant’s Social Media Activity During Hiring Process,” employers believe that social media is important for assessing job candidates. The article explains that these employers search your social media for certain red flags: anything that can be considered hate speech, illegal or illicit content, negative comments about previous jobs or clients, threats to people or past employers, and confidential or sensitive information about people or previous employers. It seems a prospective employer can cancel you for a job over things you may have done or said in the past. Sound familiar?

Have you ever been on a first date? Has your date ever said something so objectionable or offensive that you just canceled them after that first date? I’m sure it has happened to some people. This is just another example of people being held accountable for what they say.

Most public figures who are offended by cancel culture have a feeling of entitlement. They feel they have the right to say anything, even if it’s offensive and hurtful, and bear no accountability. In her 2019 article “Cancel Culture Is Not Real: At Least Not in the Way People Think,” Sarah Hagi explained that cancel culture has “turned into a catch-all for when people in power face consequences for their actions or receive any type of criticism, something that they’re not used to.”

What harm is Cancel Culture causing?

Many cancel culture critics say cancel culture is limiting free speech. This I don’t get. The very essence of cancel culture is free speech. Public figures have the right to say what they want, and the public has the right to express disapproval of and displeasure with what they said. Sometimes this comes in the form of boycotting, blogging, social media posting, etc. Public figures who feel they have been canceled might have bruised egos, be embarrassed, or see their careers take a hit, but that comes as a consequence of free speech. A public figure losing fans, customers, or approval in the public eye is not an infringement on their rights. It’s just the opposite: it’s the people of the public exercising their free speech. They have the right to be a fan of whom they want, a customer of whom they want, and to show approval for whom they want. Lastly, cancel culture can be open dialogue, but rarely do we see the person on the receiving end of a call-out wanting to engage in open dialogue with the people calling them out.

No public figures are actually getting cancelled.

According to AJ Willingham’s 2021 article “It’s Time to Cancel This Talk of Cancel Culture,” “people who are allegedly cancelled still prevail in the end.” The article gives the example of Dr. Seuss, who was supposedly cancelled due to racist depictions in his books, but whose book sales actually went up. Hip-hop rapper Tory Lanez was supposedly cancelled for allegedly shooting female rapper Megan Thee Stallion in the foot. Instead of being cancelled, he dropped an album describing what happened the night of the shooting, and the album’s sales skyrocketed. There are numerous examples showing that people are not really being cancelled, but are instead simply being called out for their objectionable or offensive behavior.

Who are the real victims here?

In the same article, Willingham states that “there are real problems that exist…. to know the difference look at the people who actually suffer when these cancel culture wars play out. There are men and women who allege wrong doing at the risk of their own career. Those are the real victims.” This is a problem that needs to be identified in the cancel culture debate: too many people are prioritizing the feelings of the person being called out rather than the person being oppressed. In her 2020 piece “You Need to Calm Down: You’re Getting Called Out, Not Cancelled,” Jacqui Higgins-Dailey explains: “When someone of a marginalized group says they are being harmed, we (the dominant group) say the harm wasn’t our intent. But impact and intent are not the same. When a person doesn’t consider the impact their beliefs, thoughts, words and actions have on a marginalized group, they continue to perpetuate the silencing of that group. Call-out culture is a tool. Ending call-out culture silences marginalized groups who have been censored far too long. The danger of cancel culture is refusing to take criticism. That is stifling debate. That is digging into a narrow world view.”


How One Teenager’s Snapchat Shaped Students’ Off-Campus Free Speech Rights

Did you ever fail to make your high school sports team, or get a bad grade on an exam? What did you do to blow off steam? Did you talk to your friends or parents about it, or write about it in your journal? When I was in high school, some of my classmates would use Twitter or Snapchat to express themselves. However, smartphone and social media use was much lower then than it is today: high school students now use their smartphones and social media at an incredibly high rate compared to when I was in high school almost ten years ago. In fact, according to Pew Research Center, 95% of teenagers have access to smartphones and 69% of teenagers use Snapchat. This is exactly why the recent Supreme Court decision in Mahanoy Area School District v. B.L. is more important than ever, as it concerns students’ free speech rights and how much power schools have to control their students’ off-campus speech. Further, this decision was overdue: the last time the Supreme Court ruled on students’ free speech was over fifty years ago, in Tinker v. Des Moines, well before anyone had smartphones or social media. The latest decision will therefore shape the power of school districts and the First Amendment rights of students for perhaps the next fifty years.

 

The main issue in Mahanoy Area School District v. B.L. is whether public schools can discipline students over something they said off campus. The dispute arose when Levy, then a sophomore in the Mahanoy Area School District, didn’t make the varsity cheerleading team; naturally, she was upset and frustrated about the situation. That weekend, Levy was at a convenience store in town with a friend. The two took a Snapchat photo with their middle fingers raised, captioned “F- School, F-Softball, F-Cheerleading, F-Everything,” and sent it to Levy’s Snapchat friends. The picture was then screenshotted and shown to the cheerleading coach, which led to Levy being suspended from the cheerleading team for one year.

 

Levy and her parents did not agree with the suspension or with the school’s involvement in Levy’s off-campus speech, so they filed a lawsuit claiming the suspension violated Levy’s First Amendment free speech rights. Levy sued the school under 42 U.S.C. § 1983, alleging (1) that her suspension from the team violated the First Amendment; (2) that the school and team rules were overbroad and viewpoint discriminatory; and (3) that those rules were unconstitutionally vague. The district court granted summary judgment in favor of Levy, holding that the school had violated her First Amendment rights. The U.S. Court of Appeals for the Third Circuit affirmed the district court’s decision, and the Mahanoy Area School District petitioned for a writ of certiorari.

 

In an 8-1 decision, the Supreme Court ruled in favor of Levy, holding that the Mahanoy Area School District violated her First Amendment rights by punishing her for using vulgar language to criticize the school on social media. The Court gave numerous reasons for its ruling. At the same time, the Court noted the importance of schools monitoring and punishing some off-campus speech, such as “serious or severe bullying or harassment targeting particular individuals; threats aimed at teachers or other students.” This is more necessary than ever given the increase in online bullying and harassment, which can impact the day-to-day activities of the school and the development of minors.

 

While it is important in some circumstances for schools to monitor and address off-campus speech, the Supreme Court noted three reasons that limit schools from interfering with students’ off-campus speech. First, a school will rarely stand in loco parentis with respect to off-campus speech, so schools do not have more authority than parents there. The parent is the authority figure and will decide whether to discipline in most areas of the child’s life, especially what happens outside of school. This matters because parents have the authority to raise and discipline their children according to their own beliefs, not the school district’s.

 

Second, “from the student perspective, regulations of off-campus speech, when coupled with regulations of on-campus speech, include all the speech a student utters during the full 24-hour day.” Without limits, there would be no boundary to what the school district could discipline its students for. For instance, if a group of students made a TikTok on a Saturday night and cursed and used vulgar language in it, would they be in trouble? If there were no limits on what a school could punish as off-campus speech, they could be. It was therefore important that the Supreme Court drew this distinction to protect students’ First Amendment rights.

 

Finally, the third reason is that “the school itself has an interest in protecting a student’s unpopular expression, especially when the expression takes place off campus.” As the Court explained, if schools did not protect their students’ unpopular opinions, it would stifle students’ ability to express themselves; schools are places for students to learn and form their own opinions, even opinions that differ from the school’s. Punishing unpopular off-campus expression would severely impair students’ ability to think for themselves, form their own opinions, and respect opinions that differ from their own.

 

Overall, I agree with the Supreme Court’s decision in this case. I believe it is essential to separate in-school speech from off-campus speech; the only time off-campus speech should be monitored and addressed by the school is when there is bullying, harassing, or threatening language against the school or against groups or individuals at the school. With that said, the Supreme Court noted three very important reasons why public schools cannot have full control over students’ off-campus speech, and all three are fair and justifiable protections for parents and students against being overly controlled by schools. Still, many questions and uncertainties remain, especially as technology rapidly advances and new social media platforms emerge. I am curious whether the Supreme Court will rule on a similar case within the next fifty years, and how this ruling will impact schools in the next few.

 

Do you agree with the Supreme Court decision and how do you see this ruling impacting public schools over the next few years?