Sport Regulation of Legal Matters with Social Media

The internet is becoming more accessible to individuals throughout the world, and with that access comes a growing population on social media platforms such as Facebook (Meta), X (formerly Twitter), Snapchat, and YouTube. These platforms provide an opportunity for engagement between consumers and producers.

 

Leagues such as MLB, the NFL, La Liga, and others have created accounts, establishing a presence in the social media world where they can interact with their fans (consumers) and their athletes (employees).

Why Social Media Matters in Sports.

As the presence on social media platforms continues to grow, so does the need for businesses to market themselves there. Leagues such as MLB have therefore created policies for their employees and athletes to follow. Although MLB operates across the United States, it is a private organization; sports leagues are usually private organizations headquartered in a specific state, and MLB's New York headquarters is where employees handle league matters. These organizations may create their own policies or guidelines, which they may enforce internally. Even so, organizations such as MLB must still abide by federal and state labor, corporate, criminal, and other types of law. The policies these leagues adopt give them greater ability to ensure they comply with the laws necessary to continue operating on a national, and at times international, scale.

MLB’s Management of Social Media. 

MLB’s social media policies are prefaced by a paragraph explaining who within MLB establishes them: “Consistent with the authority vested in the Commissioner by the Major League Constitution (“MLC”) and the Major League Baseball Interactive Media Rights Agreement (“IMRA”), the Commissioner has implemented the following policy regarding the use of social media by individuals affiliated with Major League Baseball and the 30 Clubs. Nothing contained in this policy is intended to restrict or otherwise alter any of the rights otherwise granted by the IMRA.” To enforce its power to regulate social media, the league points to its Interactive Media Rights Agreement and its commissioner. Such organizations generally have an elected commissioner who serves the organization and assists with executive managerial decisions.

There is a list of 10 explicit types of social media conduct that MLB prohibits (a few rules that stand out are listed below):

1. Displaying or transmitting Content via Social Media in a manner that reasonably could be construed as an official public communication of any MLB Entity or attributed to any MLB Entity.

2. Using an MLB Entity’s logo, mark, or written, photographic, video, or audio property in any way that might indicate an MLB Entity’s approval of Content, create confusion as to attribution, or jeopardize an MLB Entity’s legal rights concerning a logo or mark.

3. Linking to the website of any MLB Entity on any Social Media outlet in any way that might indicate an MLB Entity’s approval of Content or create confusion as to attribution.

NOTE: Only Covered Individuals who are authorized by the Senior Vice President, Public Relations of the Commissioner’s Office to use Social Media on behalf of an MLB Entity and display Content on Social Media in that capacity are exempt from Sections 1, 2 and 3 of this policy.

5. Displaying or transmitting Content that reasonably could be construed as condoning the use of any substance prohibited by the Major or Minor League Drug Programs, or the Commissioner’s Drug Program.

7. Displaying or transmitting Content that reasonably could be viewed as discriminatory, bullying, and/or harassing based on race, color, ancestry, sex, sexual orientation, national origin, age, disability, religion, or other categories protected by law and/or which would not be permitted in the workplace, including, but not limited to, Content that could contribute to a hostile work environment (e.g., slurs, obscenities, stereotypes) or reasonably could be viewed as retaliatory.

10. Displaying or transmitting Content that violates applicable local, state or federal law or regulations.

 

Notice that these policies apply to the organization as a whole, but there are exceptions for individuals whose role with the league involves social media. Those authorized workers are not bound by rules 1-3, while employees and athletes such as Ohtani are.

Mizuhara/Ohtani Gambling Situation.

One of the biggest MLB stories this year was the illegal gambling situation involving Ohtani and his interpreter. Under MLB's policies, gambling is strictly prohibited regardless of whether it is legal in the state where the athlete resides.

California has yet to legalize sports betting. To place a bet there, one would have to go through a bookie rather than an application such as FanDuel, or visit a tribal location where gambling is administered.

Per the commissioner’s orders, MLB launched an internal investigation, as the situation involves violations of its policies and even criminal acts. MLB may impose whatever punishment it finds fit at the end of the investigation. However, its Department of Investigations (DOI) can only do so much with the limited resources the league provides it to conduct investigations.

Ohtani, however, was found to be a victim, and a federal investigation was launched. The complaint lists many counts of alleged bank fraud. In conducting the investigation, authorities acquired a forensic review of Mizuhara’s phone and texts, as well as those of the suspected bookmakers. There was evidence of the individuals discussing ways to bet, how to earn and pay off debts, and wiring excessive amounts of money from banks.

What Does This All Mean?

The law and its administration are beginning to adapt to and acknowledge the presence of the internet. It is now common for phones and internet communications to be seized as evidence in cases. The internet has become essential to daily life. As a society, we must determine whether we want limits set on something we are effectively required to use in order to live, and whether we want limits on speech to depend on employment.

When in Doubt, DISCLOSE it Out!

The sweeping transformation of social media platforms over the past several years has given rise to convenient and cost-effective advertising. Advertisers are now able to market their products or services to consumers (i.e., users) at low cost, right at their fingertips…literally! But convenience comes with a few simple and easy rules. Influencers such as athletes, celebrities, and other high-profile individuals are trusted by their followers to remain transparent. Doing so does not require anything difficult; in fact, including “Ad” or “#Ad” at the beginning of a post is satisfactory. The question then becomes: who’s making these rules?

The Federal Trade Commission (FTC) works to stop deceptive or misleading advertising and provides guidance on how to go about doing so. Under the FTC, individuals have a legal obligation to clearly and conspicuously disclose their material connection to the products, services, brands, and/or companies they promote on their feeds. The FTC highlights one objective component to help users identify an endorsement: a statement made by a speaker whose relationship with the advertiser is such that the statement can be understood to be sponsored by the advertiser. In other words, if the speaker is acting on behalf of the advertiser, then that statement will be taken as an endorsement and subject to the guidelines. Several factors will determine this, such as compensation, free products, and the terms of any agreement. Two basic principles of advertising law apply to all types of advertising in any media: 1) a reasonable basis to substantiate claims and 2) clear and conspicuous disclosure. Overall, the FTC works to ensure transparent sponsorship in an effort to maintain consumer trust.

The Breakdown—When, How, & What Else

Influencers should disclose when they have a financial, employment, personal, or family relationship with a brand. Financial relationships are not limited to money. If, for example, a brand gives you a free product, disclosure is required even if you were not asked to mention it in a post. Similarly, if a user posts from abroad, U.S. law still applies if it is reasonably foreseeable that U.S. consumers will be affected.

When disclosing your material connection to the brand, make sure that disclosure is easy to see and understand. The FTC has previously disapproved of disclosure in places that are remote from the post itself. For instance, users should not have to press “show more” in the comments section to see that the post is actually an endorsement.

Another important consideration for advertisers and endorsers is making sure not to talk about items they have not yet tried. They should also avoid saying a product was great when they in fact thought it was not. In addition, individuals should not convey information or make claims that are unsupported by actual evidence.

However, not everyone who posts about a brand needs to disclose. If you want to post a Sephora haul or a Crumbl Cookie review, that is okay! As long as a company is not giving you free products or paying you to sponsor them, you are free to post at your leisure, without disclosing.

Now that you realize how seamless disclosure is, it may be surprising that people still fail to do so.

Rule Breakers

In spring 2020 we saw an uptick in social media posts because most people abided by stay-at-home orders and turned to social media for entertainment. TikTok is deemed particularly addictive, with users spending substantially more time on it than on other apps such as Instagram and Twitter.

TikTok star Charli D’Amelio spoke positively about the enhancement drink Muse in a Q&A post. She never acknowledged that the brand was paying her to sponsor the product, and she failed to use the platform’s content-enabling tool, which makes it even easier for users to disclose. D’Amelio is the second most-followed account on the platform.

The Teami brand found itself in a similar position when stars like Cardi B and Brittany Renner made unfounded claims that the wellness company made products that resulted in unrealistic health benefits. The FTC instituted a complaint alleging that the company misled consumers to think that their 30-day detox pack would ensure weight loss. A subsequent court order prohibited them from making such unsubstantiated claims.

Still, these influencers were hardly punished, receiving a mere ‘slap on the wrist’ for their inadequate disclosures. They were ultimately sent warning letters and received some bad press.

Challenges in Regulation & Recourse

Section 5(a) of the FTC Act is the statute that allows the agency to investigate and prevent unfair methods of competition. It is what gives the FTC the authority to seek relief for consumers, including injunctions, restitution, and in some cases civil penalties. However, regulation is challenging because noncompliance is so easy. While endorsers bear the ultimate responsibility to disclose, advertising companies are urged to implement procedures that make compliance more likely. There is a never-ending amount of content on social media to regulate, making it difficult for entities like the FTC to know when rules are actually being broken.

Users can report undisclosed posts through their social media accounts directly, their state attorneys general office, or to the FTC. Private parties can also bring suit. In 2022, a travel agency group sued a travel influencer for deceptive advertising. The influencer made false claims, such as being the first woman to travel to every country and failed to disclose paid promotions on her Instagram and TikTok accounts. The group seeks to enjoin the influencer from advertising without disclosing and to engage in corrective measures on her remaining posts that violate the FTC’s rules. Social media users are better able to weigh the value of endorsements when they can see the truth behind such posts.

In a world filled with filters, when it comes to advertisements on social media, let’s just keep it real.

Update Required: An Analysis of the Conflict Between Copyright Holders and Social Media Users

Opening

For anyone who is chronically online, as yours truly is, we have in one way or another seen our favorite social media influencers, artists, commentators, and content creators complain about their problems with the current US Intellectual Property (IP) system. Be it that their posts are deleted without explanation or portions of their video files are muted, the combination of factors leading to copyright issues on social media is endless. This, in turn, has a markedly negative impact on free and fair expression on the internet, especially within the context of our contemporary online culture. For better or worse, interaction in society today is intertwined with the services of social media sites. Conflict arises when the interests of copyright holders clash with this reality. Those holders are empowered by byzantine and unrealistic laws that hamper our ability to exist as freely online as we do in real life. While they do have legitimate and fundamental rights that need protecting, such rights must be balanced against desperately needed reform. People’s interaction with society and culture must not be hampered, for that is one of the many foundations of a healthy and thriving society. To understand this, I venture to analyze the current legal infrastructure we find ourselves in.

Current Relevant Law

The current controlling laws for copyright issues on social media are the Copyright Act of 1976 and the Digital Millennium Copyright Act (DMCA). The DMCA is most relevant to our analysis; it gives copyright holders relatively unrestrained power to demand removal of their property from the internet and to punish those using illegal methods to get ahold of their property. This broad law, of course, impacted social media sites. Title II of the law added 17 U.S. Code § 512 to the Copyright Act of 1976, creating several safe harbor provisions for online service providers (OSPs), such as social media sites, when hosting content posted by third parties. The most relevant of these safe harbors to this issue is 17 U.S. Code § 512(c), which states that an OSP cannot be liable for monetary damages if it meets several requirements and provides a copyright holder a quick and easy way to claim their property. The mechanism, known as a “notice and takedown” procedure, varies by social media service and is outlined in their terms and conditions of service (YouTube, Twitter, Instagram, TikTok, Facebook/Meta). Regardless, they all have a complaint form or application that follows the rules of the DMCA and usually will rapidly strike objectionable social media posts by users. 17 U.S. Code § 512(g) does provide the user some leeway with an appeal process, and § 512(f) imposes liability on those who send unjustified takedown notices. Nevertheless, a perfect balance of rights is not achieved.

The doctrine of fair use, codified as 17 U.S. Code § 107 via the Copyright Act of 1976, also plays a massive role here. It established a legal pathway for the use of copyrighted material for “purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research” without having to acquire rights to said IP from the owner. This legal safety valve has been a blessing for social media users, especially with recent victories like Hosseinzadeh v. Klein, which protected reaction content from DMCA takedowns. Cases like Lenz v. Universal Music Corp. further established that fair use must be considered by copyright holders when preparing takedowns. Nevertheless, copyright holders still fail to consider those rights, and sites are quick to react to DMCA complaints. Furthermore, the flawed reporting systems of social media sites lead to abuse by unscrupulous actors faking ownership. On top of that, such legal actions can be psychologically and financially intimidating, especially when facing off with a major IP holder, adding to the unbalanced power dynamic between the holder and the poster.

The Telecommunications Act of 1996, which focuses primarily on cellular and landline carriers, is also particularly relevant to social media companies in this conflict. At the time of its passing, the internet was still in its infancy; thus, it does not incorporate an understanding of the current cultural paradigm we find ourselves in. Specifically, the contentious Section 230 of the Communications Decency Act (Title V of the 1996 Act) works against social media companies in this instance, incorporating a broad and draconian rule on copyright infringement. 47 U.S. Code § 230(e)(2) states in no uncertain terms that “nothing in this section shall be construed to limit or expand any law pertaining to intellectual property.” This has been interpreted and restated in Perfect 10, Inc. v. CCBill LLC to mean that Section 230 does not shield such companies from liability for user copyright infringement. This gap in the protective armor of Section 230 is a great concern to such companies, so they react strongly to these issues.

What is To Be Done?

Arguably, fixing the issues around copyright on social media is far beyond the capacity of current legal mechanisms. With ostensibly billions of posts each day on various sites, regulation by copyright holders and sites is far beyond reason. It will take serious reform in the socio-cultural, technological, and legal arenas before a true balance of liberty and justice can be established. Perhaps we can start with an understanding by copyright holders not to overreact when their property is posted online. Popularity is key to success in business, so shouldn’t you value the free marketing that comes with your copyrighted property getting shared honestly within the cultural sphere of social media? Social media sites can also expand their DMCA case-management teams or create tools that let users credit, and even share revenue with, the copyright holder if the user is an influencer or content creator. Finally, congressional action is desperately needed, as we have entered a new era that requires new laws. That being said, achieving a balance between the free exchange of ideas and creations and the rights of copyright holders must be the cornerstone of the government’s approach to socio-cultural expression on social media. That is the only way we can progress as an ever more online society.

 

Image: Freepik.com

https://www.freepik.com/free-vector/flat-design-intellectual-property-concept-with-woman-laptop_10491685.htm#query=intellectual%20property&position=2&from_view=keyword (Image by pikisuperstar)

Social Media Addiction

Social media was created as an educational and informational resource for American citizens. Nonetheless, it has become a tool for AI bots and tech companies to predict our next moves by manipulating our minds within social media apps. Section 230 of the Communications Decency Act helped create the modern internet we use today, though the 1996 law was initially aimed at regulating online pornography. Specifically, Section 230 provides internet services and users legal immunity from liability for content posted online. Tech companies do not just want to advertise to social media users; they want to predict a user’s next move. These manipulative tactics have wreaked havoc on the human psyche and eroded the social aspects of life by keeping people glued to a screen so big tech companies can profit off it.

Social media has changed a generation for the worse, causing depression and sometimes suicide, as tech designers manipulate social media users for profit. Social media companies have for decades been shielded from legal consequences for what happens on their platforms. However, due to recent studies and court cases, this may change and allow big tech social media companies to be held accountable. A former Facebook employee, Frances Haugen, testified to the Senate as a whistleblower that Facebook cannot be trusted, as it knowingly pushed products that harm children and young adults to further profits, conduct that Section 230 cannot sufficiently excuse. Haugen further stated that researchers at Instagram (a Facebook-owned social media app) knew their app was worsening teenagers’ body images and mental health, even as the company publicly downplayed these effects.

A California bill, the Social Media Platform Duty to Children Act, aims to make tech firms liable for social media addiction in children. It would allow parents and guardians to sue platforms they believe addicted children in their care through advertising, push notifications, and design features that promote compulsive use, particularly the continual consumption of harmful content on issues such as eating disorders and suicide. The bill would hold companies accountable regardless of whether they deliberately designed their products to be addictive.

Social media addiction is a psychological, behavioral dependence on social media platforms such as Instagram, Snapchat, Facebook, TikTok, BeReal, etc. Mental disorders are conditions that affect one’s thinking, feeling, mood, and behavior. Since the era of social media began, especially from 2010 on, doctors and physicians have had a hard time diagnosing patients with social media addiction and mental disorders separately, since the two seem to go hand in hand. Social media use has been seen to improve mood and boost health promotion through ads; at the same time, however, it can amplify the negative aspects of activities that youth (ages 13-21) take part in. Generation Z (“Zoomers”), people born in the late 1990s to early 2010s, face an increased risk of social media addiction, which has been linked to depression.

A study measured the Difficulties in Emotion Regulation Scale (“DEES”) and Experiences in Close Relationships (“ECR”) to characterize the addictive potential of social media communication applications. The first measure was a six-item short form of the DEES, which in full is a 36-item, six-factor self-report measure of difficulties, assessing:

  1. awareness of emotional responses,
  2. lack of clarity of emotional reactions,
  3. non-acceptance of emotional responses,
  4. limited access to emotion regulation strategies perceived as applicable,
  5. difficulties controlling impulses when experiencing negative emotions, and
  6. problems engaging in goal-directed behaviors when experiencing negative emotions. 

The second measure, the ECR-SV, is a twelve-item test evaluating adult attachment. The scale comprises two six-item subscales, anxiety and avoidance, with each item rated on a 7-point scale ranging from 1 = strongly disagree to 7 = strongly agree. Depression, anxiety, and mania were measured against DSM-5 criteria: endorsing at least five of the nine items on the depression scale during the same two-week period classified depression; endorsing at least three of the six symptoms on the anxiety scale classified anxiety; and endorsing at least three of the seven traits on the mania scale classified mania.
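The cutoff logic described in the study is simple threshold counting, and can be sketched in a few lines. A minimal illustration, assuming boolean item responses; the function and variable names here are my own, and only the item counts (9 depression, 6 anxiety, 7 mania) and cutoffs (5, 3, 3) come from the study summary above:

```python
# Hypothetical sketch of the DSM-5-style cutoff scoring described above.
# Item counts and cutoffs come from the study summary; everything else
# (names, data layout) is illustrative, not the study's actual instrument.

THRESHOLDS = {"depression": 5, "anxiety": 3, "mania": 3}

def classify(endorsed: dict) -> dict:
    """Map each scale to True if the number of endorsed items meets its cutoff."""
    return {scale: sum(endorsed[scale]) >= cutoff
            for scale, cutoff in THRESHOLDS.items()}

responses = {
    "depression": [True] * 5 + [False] * 4,  # 5 of 9 items endorsed
    "anxiety":    [True] * 2 + [False] * 4,  # 2 of 6 items endorsed
    "mania":      [True] * 3 + [False] * 4,  # 3 of 7 items endorsed
}
print(classify(responses))  # {'depression': True, 'anxiety': False, 'mania': True}
```

This kind of per-scale counting is why the study can report simple prevalence figures: each participant either meets a cutoff or does not.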

The objectives of these studies were to show the high prevalence of social media addiction among college students and to confirm statistically, by reviewing previous studies, that there is a positive relationship between social media addiction and mental disorders.

The study illustrates four leading causes of social media abuse: 1) the increase in depression symptoms has occurred in conjunction with the rise of smartphones since 2007; 2) young people, especially Generation Z, spend less time connecting with friends and more time connecting with digital content, and Generation Z is known for quickly losing focus at work or study because its members spend so much time watching other people’s lives in an age of information explosion; 3) low self-esteem feeds depression when users compare themselves on social media to those who appear more beautiful, more famous, and wealthier, so users may become less emotionally satisfied, leaving them feeling socially isolated and depressed; and 4) studying pressure and an increasing homework load may cause mental problems for students, promoting the pairing of social media addiction and psychiatric disorders.

The popularity of the internet, smartphones, and social networking sites is unequivocally a part of modern life. Nevertheless, it has contributed to the rise of depressive and suicidal symptoms in young people. Shareholders of social media apps should be more aware of the effect their advertising has on users. Congress should regulate social media as a matter of public policy to prevent harms such as depression and suicide among young people. The best the American people can do is shine a light on the companies that exploit and abuse their users, before the public and Congress, to hold them accountable as Haugen did. There is hope for the future, as the number of bills addressing social media and its mental health effects has increased since 2020.

Shadow Banning Does(n’t) Exist


#mushroom

Recent posts from #mushroom are currently hidden because the community has reported some content that may not meet Instagram’s community guidelines.

 

Dear Instagram, get your mind outta the gutter! Mushrooms are probably one of the most searched hashtags in my Instagram history. It all started when I found my first batch of wild chicken-of-the-woods mushrooms. I wanted to learn more about mushroom foraging, so I consulted Instagram. I knew there were tons of foragers sharing photos, videos, and tips about finding different species. But imagine not being able to find content related to your hobby?

What if you loved eggplant varieties? But nothing came up in the search bar? Perhaps you’re an heirloom eggplant farmer trying to sell your product on social media? Yet you’ve only gotten two likes—even though you added #eggplantman to your post. Shadow banned? I think yes.

The deep void of shadow banning is a social media user’s worst nightmare. Especially for influencers whose career depends on engagement. Shadow banning comes with so many uncertainties, but there are a few factors many users agree on:

      1. Certain posts and videos remain hidden from other users
      2. It hurts user engagement
      3. It DOES exist

#Shadowbanning

Shadow banning is the act of restricting or censoring a user’s content on social media without notifying the user. It usually occurs when a user posts content deemed inappropriate or that violates the platform’s guidelines. If a user is shadow banned, the user’s content is visible only to the user and their followers.

Influencers, artists, creators, and business owners are the most vulnerable victims of the shadow banning void, as they depend most on user engagement, growth, and reaching new audiences. As much as it hurts them, it also hurts other users searching for that specific content. There’s no clear way to tell whether you’ve been shadow banned. You don’t get a notice, and you can’t make an appeal to fix your lack of engagement. You will, however, see a decline in engagement because no one can see your content in their feeds.

According to the head of Instagram, Adam Mosseri, “shadow banning is not a thing.” In an interview with the Meta CEO, Mark Zuckerberg, he stated Facebook has “no policy that is shadow banning.” Even a Twitter blog stated, “People are asking us if we shadow ban. We do not.” There is no official way of knowing if it exists, but there is evidence it does take place on various social media platforms.

#Shadowbanningisacoverup?

Pole dancing on social media probably would have been deemed inappropriate 20 years ago. But this isn’t the case today. Pole dancing is a growing sport industry, and stigmas associating strippers with pole dancing are shifting with its increasing popularity and trendy nature. However, social media standards may still be stuck in the early 2000s.

In 2019, user posts with hashtags including #poledancing, #polesportorg, and #poledancenation were hidden from Instagram’s Explore page. This affected many users who connect and share new pole dancing techniques with each other. It also had a huge impact on businesses who rely on the pole community to promote their products and services: pole equipment, pole clothing, pole studios, pole sports competitions, pole photographers, and more.

Due to a drastic decrease in user engagement, a petition directing Instagram to stop pole dancing censorship was circulated worldwide. Is pole dancing so controversial it can’t be shared on social media? I think not. There is so much to learn from sharing information virtually, and Section 230 of the Communications Decency Act supports this.

Section 230 was passed in 1996, and it provides limited federal immunity to websites from lawsuits if a user posts something illegal. This means that if User X decides to post illegal content on Twitter, the Twitter platform could not be sued because of User X’s post. Section 230 does not stop the user who posted such content from being sued, so User X can still be held accountable.

It is clear that Section 230 embraces the importance of sharing knowledge. Section 230(a)(1) tells us this. So why would Instagram want to shadow ban pole dancers who are simply sharing new tricks and techniques?

The short answer is: It’s inappropriate.

But users want to know: what makes it inappropriate?

Is it the pole? A metal pole itself does not seem so.

Is it the person on the pole? Would visibility change depending on gender?

Is it the tight clothing? Well, I don’t see how it is any different from the 17 bikini photos on my personal profile.

Section 230 also contains a carve-out for sex-related offenses such as sex trafficking. But this is where the line is drawn between appropriate and inappropriate content: sex trafficking is illegal, while pole dancing is not. Instagram’s community guidelines support this; under the guidelines, sharing pole dancing content is not a violation. Shadow banning clearly seeks to suppress certain content, and in this case, the pole dancing community was a target.

Cultural expression also battles with shadow banning. In 2020, Instagram shadow banned Caribbean Carnival content. The Caribbean Carnival is an elaborate celebration to commemorate slavery abolition in the West Indies and showcases ensembles representing different cultures and countries.

User posts with hashtags including #stluciacarnival, #fuzionmas, and #trinidadcarnival2020 could not be found nor viewed by other users. Some people viewed this as suppressing culture and impacting tourism. Additionally, Facebook and Instagram shadow banned #sikh for almost three months. Due to numerous user feedback, the hashtag was restored, but Instagram failed to state how or why the hashtag was blocked.

In March 2020, The Intercept obtained internal TikTok documents alluding to shadow banning methods. Documents revealed moderators were to suppress content depicting users with “‘abnormal body shape,’ ‘ugly facial looks,’ dwarfism, and ‘obvious beer belly,’ ‘too many wrinkles,’ ‘eye disorders[.]'” While this is a short excerpt of the longer list, this shows how shadow banning may not be a coincidence at all.

Does shadow banning exist? What are the pros and cons of shadow banning?


Corporate Use of Social Media: A Fine Line Between What Could-, Would-, and Should-be Posted


Introduction

In recent years, social media has taken hold of nearly every aspect of human interaction and turned the way we communicate on its head. Social media apps’ ability to disseminate information instantaneously has affected the way many sectors of business operate. From entertainment and social causes to environmental, educational, and financial matters, social media has bewildered in-house legal departments across all industries. Additionally, the generational gap between the person actually posting for an account and their supervisor has only exacerbated the potential for communications to miss their mark and cause controversy or other adverse effects.

These days, most companies have social media accounts, but not all accounts are created equal, and they certainly are not all monitored the same. In most cases, these accounts are not regulated at all except by their own internal managers and #CancelCulture. Depending on the product or company, social media managers have done their best to stay abreast of changes in popular hashtags, trends and challenges, and the overall shift from a corporate tone of voice to one of relatability–more Gen-Z-esque, if you will. But with this shift, the rights and implications of corporate speech on social media have been put to the test.

Changes in Corporate Speech on Social Media 

In the last 20 years, corporate use of social media has become a battle for relevance. With the decline of print media, social media and its apps have emerged as a marketing necessity. Early social media use was predominantly geared towards social purposes. If we look at the origins of Facebook, Myspace, and Twitter, it is clear that these apps were intended for casual, social uses—not corporate communications—but this all changed with the introduction of LinkedIn, which sparked a dynamic shift towards business and professional use of social media.

Today social media is used to report on almost every aspect of our lives: disaster preparation and emergency response, political updates, dating and relationships, customer service, and more. It is also increasingly common nowadays to face backlash for not speaking out on social media after a major social or political movement occurs. Social media is also increasingly used for research with geolocation technology, for organizing demonstrations and political unrest, and, in the business context, for sales, marketing, networking, and hiring or recruiting practices.

These changes are starting to prompt significant conversations in the business world about company speech, regulated disclosures, and First Amendment rights. So far, for example, there is minimal research on how financial firms disseminate communications to investor news outlets via social media and how those communications are received. And while some view social media as an opportunity to further this kind of investor outreach, others have expressed concern that disseminating communications in this manner could cause a company to lose control over those communications entirely.

The viral nature of social media allows not just investors to connect more easily with companies, but also individuals who do not directly follow a company and are therefore far less likely to be informed about its prior financial communications and the significance of any changes. This creates risk for a company communicating with investors via social media, because posts can spread to uninformed individuals and, in turn, produce adverse consequences for the company when it comes to concerns about reliance and misleading information.

Corporate Use, Regulations, and Topics of Interest on Social Media 

With the rise of social media coverage of various societal issues, these apps have become a platform for news coverage, political movements, and social concerns and, for some generations, a platform that replaces traditional news media almost entirely. Specifically, when it comes to the growing interest in ESG-related matters and sustainable business practices, social media serves as a powerful tool for communicating information. For example, the Spanish company Acciona was recently reported by the latest Epsilon Icarus Analytics Panel on ESG Sustainability to have Spain’s highest-resonating ESG content across its social networks. Acciona demonstrates a company’s potential to lead and fundamentally shape digital communications on ESG-related topics. This developing content strategy focuses on brand values: in Acciona’s case, strong climate-change values, female leadership, diversity, and other cultural and societal changes. It demonstrates this new age of social media as a business marketing necessity.

Consequently, this shift in the usage of social media and the way we treat corporate speech on these platforms has left room for emerging regulation. Commercial or corporate speech is generally permissible under constitutional free speech protections, so long as the corporation is not making false or misleading statements. Section 230 provides broad protection to internet content providers from accountability for information disseminated on their platforms. In most contexts, social media platforms will not be held accountable for the consequences of, for example, a bad user’s speech. A recent lawsuit against TikTok and its parent company was dismissed in the defendants’ favor after a young girl died participating in a trending challenge that went awry, because under § 230 the platform was immune from liability.

In essence, when it comes to ESG-related topics, the way a company handles its social media and the actual posts it puts out can greatly affect the company’s success and reputation, as ESG-focused perspectives often touch many aspects of a business’s operations. The type of communication, and the coverage of various issues, can impact a company’s performance over both the short and long term, and can effectuate change in corporate environmental practices, governance, labor and employment standards, human resource management, and more.

With ESG trending, investors, shareholders, and regulators now face serious risk management concerns. Companies must address news concerning their social responsibilities more publicly and more frequently as ESG concerns continue to rise. The SEC mandates disclosure of public company activities, through Consumer Service Reports, in annual 10-K filings, along with ESG disclosures under a recently promulgated rule. These disclosures are designed to hold companies accountable and to improve environmental, social, and economic performance relative to their stakeholders’ expectations.

Conclusion

In conclusion, social media platforms have created an entirely new arena in which corporate speech is implicated. Companies should proceed cautiously when covering social, political, environmental, and related concerns, and should consider both their methods of information dissemination and the possible effects their posts may have on business performance and reputation overall.

A Uniquely Bipartisan Push to Amend/Repeal CDA 230

Last month, I wrote a blog post about the history and importance of the Communications Decency Act, section 230 (CDA 230). I ended that blog post by acknowledging the recent push to amend or repeal section 230 of the CDA. In this blog post, I delve deeper into the politics behind the push to amend or repeal this legislation.

“THE 26 WORDS THAT SHAPED THE INTERNET”

If you are unfamiliar with CDA 230, it is the key legislation governing liability for content posted on the internet. Also known as “the 26 words that shaped the internet,” the act specifically articulates Congress’s finding that the internet has been able to flourish with a “minimum of government regulation.” This language has resulted in a largely unregulated internet, ultimately leading to problems concerning misinformation.

Additionally, CDA 230(c)(1) shields social media companies from civil liability for posts published by their users. This has caused problems because social media companies lack motivation to filter and censor posts that contain misinformation.

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230).

Section 230’s liability shield has been extended far beyond Congress’s original intent, which was to protect social media companies against defamation claims. These features of the legislation have resulted in a growing call to update section 230.

In this day and age, an idea or movement rarely gains bipartisan support. Interestingly, though, amending or repealing section 230 has gained recent bipartisan support. As expected, however, each party has different reasons for why the law should be changed.

BIPARTISAN OPPOSITION

Although the two political parties agree that the legislation should be amended, their reasoning stems from different places. Republicans tend to criticize CDA 230 for allowing social media companies to selectively censor conservative actors and posts. In contrast, Democrats criticize the law for allowing social media companies to disseminate false and deceptive information.

 DEMOCRATIC OPPOSITION

On the Democratic side of the aisle, President Joe Biden has repeatedly called for Congress to repeal the law. In an interview with The New York Times, President Biden was asked about his personal view of CDA 230, to which he replied…

“it should be revoked. It should be revoked because it is not merely an internet company. It is propagating falsehoods they know to be false, and we should be setting standards not unlike the Europeans are doing relative to privacy. You guys still have editors. I’m sitting with them. Not a joke. There is no editorial impact at all on Facebook. None. None whatsoever. It’s irresponsible. It’s totally irresponsible.”

House Speaker Nancy Pelosi has also voiced opposition, calling CDA 230 “a gift” to the tech industry that could be taken away.

The left has often credited the law with fueling misinformation campaigns, like Trump’s voter fraud theory and false COVID information. In response, social media platforms began marking certain posts as unreliable. This, in turn, fueled Republicans’ opposition to section 230.

REPUBLICAN OPPOSITION

Former President Trump voiced his opposition to CDA 230 numerous times. He first started calling for the repeal of the legislation in May of 2020, after Twitter flagged two of his tweets regarding mail-in voting with a warning label that stated “Get the facts about mail-in ballots.” In December 2020, Trump, then the sitting president, threatened to veto the National Defense Authorization Act, the annual defense funding bill, if CDA 230 was not revoked. His opposition was so strong that he issued an Executive Order in May 2020 urging the government to revisit CDA 230. Within the order, the former president wrote…

“Section 230 was not intended to allow a handful of companies to grow into titans controlling vital avenues for our national discourse under the guise of promoting open forums for debate, and then to provide those behemoths blanket immunity when they use their power to censor …”

The executive order also asked the Federal Communications Commission to write regulations that would remove protections for companies that “censored” speech online. Although the order didn’t technically affect CDA 230, and was later revoked by President Biden, it drew increased attention to this archaic legislation.

LONE SUPPORTERS

Support for the law has not completely vanished, however. As expected, many social media giants support leaving CDA 230 untouched. The Internet Association, an industry group representing some of the largest tech companies like Google, Facebook, Amazon, and Microsoft, recently announced that the “best of the internet would disappear” without section 230, warning that it would lead to numerous companies being subject to an array of lawsuits.

In a Senate Judiciary hearing in October 2020, Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey warned that revoking Section 230 could…

“collapse how we communicate on the Internet.”

However, Mark Zuckerberg took a more moderate position as the hearing continued, telling Congress that he thought lawmakers should update the law.

This approach is likely a response to public pressure and increased awareness. Regardless, it signals a real chance that section 230 will be updated in the future, since Facebook is one of the largest social media companies protected by it. A complete repeal of the law would have such major impacts, however, that that scenario seems unlikely. Nevertheless, growing calls for change and a Democratic-controlled Congress point to a likelihood of future revision of the section.

DIFFERING OPINIONS

Although both sides of Washington, and even some social media companies, agree the law should be amended, the two sides differ greatly on how to change it.

As mentioned before, President Biden has voiced support for repealing CDA 230 altogether. Alternatively, senior members of his party, like Nancy Pelosi, have suggested simply revising or updating the section.

Republican Senator Josh Hawley recently introduced legislation to amend section 230. The proposed legislation would require companies to prove a “duty of good faith” when moderating their sites in order to receive section 230 immunity. The legislation includes a $5,000 fee for companies that fail to comply.

Adding to the confusion of the section 230 debate, many fear the possible implications of repealing or amending the law.

FEAR OF CHANGE

Because CDA 230 has been referred to as “the first amendment of the internet,” many people fear that repealing the section altogether would limit free speech online. Although President Biden has voiced support for this approach, it seems unlikely to happen, as it would have massive implications.

One major implication of repealing or amending CDA 230 is that it could open the door to numerous lawsuits against social media companies. Not only would major social media companies be affected, but even smaller companies, like Slice, could become the subject of defamation litigation simply by allowing reviews to be posted on their websites. This could lead to fewer social media platforms, as some would not be able to afford the legal fees. Many fear that these companies would further censor online posts for fear of being sued, which could also raise costs for the platforms. In contrast, companies could react by allowing anything and everything to be posted, which could create an unwelcoming online environment. That would stand in stark contrast to Congress’s original intent in creating the CDA: to protect children from indecent posts on the internet.

FUTURE CHANGE?


Because of the intricacy of the internet and the archaic nature of CDA 230, there are many differing opinions on how to successfully fix the problems the section creates, and many fears about the consequences of getting rid of it. Are there any revisions you can think of that would address the Republicans’ main concern, censorship? Can you think of any solutions for the Democrats’ concern of limiting the spread of misinformation? Do you think there is any chance that section 230 will be repealed altogether? If the legislation were repealed, would new legislation need to be created to replace CDA 230?


AI Avatars: Seeing is Believing

Have you ever heard of deepfakes? The term deepfake comes from “deep learning,” a set of intelligent algorithms that can learn and make decisions on their own. By applying deep learning, deepfake technology replaces faces in original images or videos with another person’s likeness.

What does deep learning have to do with switching faces?

Basically, deepfake technology allows the AI to learn automatically from its data collection, which means the more people try deepfakes, the faster the AI learns, thereby making its content more realistic.

Deepfake enables anyone to create “fake” media.

How does Deepfake work?

First, an AI algorithm called an encoder collects countless face shots of two people. The encoder detects similarities between the two faces and compresses each image into a shared, reduced representation. A second AI algorithm, called a decoder, then takes that compressed representation and reconstructs a face from it; reconstructing one person’s compressed face with the other person’s decoder is what performs the face swap.
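One common way to frame this design is a single shared encoder paired with a separate decoder per identity, so a swap amounts to decoding one person’s compressed code with the other person’s decoder. As a heavily simplified sketch of that idea, with plain linear algebra standing in for the neural networks and randomly generated vectors standing in for face images (every name and number here is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
dim, code_dim, n = 16, 4, 200

# Toy "faces": each identity's images lie on its own low-dimensional subspace.
basis_a = rng.normal(size=(code_dim, dim))
basis_b = rng.normal(size=(code_dim, dim))
faces_a = rng.normal(size=(n, code_dim)) @ basis_a
faces_b = rng.normal(size=(n, code_dim)) @ basis_b

# One shared "encoder" compresses every face into a short code...
encoder, _ = np.linalg.qr(rng.normal(size=(dim, code_dim)))

def encode(faces):
    return faces @ encoder  # (n, dim) -> (n, code_dim)

# ...and each identity gets its own "decoder", fit by least squares.
decoder_a, *_ = np.linalg.lstsq(encode(faces_a), faces_a, rcond=None)
decoder_b, *_ = np.linalg.lstsq(encode(faces_b), faces_b, rcond=None)

# Normal use: encode A, decode with A's decoder -> near-perfect reconstruction.
recon_a = encode(faces_a) @ decoder_a

# Face swap: encode B's faces but decode them with A's decoder, so B's codes
# are rendered back in A's "appearance".
swapped = encode(faces_b) @ decoder_a
print("reconstruction error:", np.abs(recon_a - faces_a).max())
```

Decoding A’s codes with A’s decoder reconstructs A’s faces almost exactly, while decoding B’s codes with A’s decoder produces outputs in A’s “style”, which is the linear analogue of the face swap.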

Another way deepfakes swap faces is with a GAN, or generative adversarial network. A GAN pits two AI algorithms against each other, unlike the first method, where the encoder and decoder work hand in hand.
The first algorithm, the generator, is given random noise and converts it into an image. This synthetic image is then mixed into a stream of real photos, such as images of celebrities. The combined stream is fed to the second algorithm, the discriminator, which tries to tell the real images from the synthetic ones. After this process is repeated countless times, the generator and discriminator both improve. As a result, the generator ends up creating completely lifelike faces.
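The adversarial loop above can be sketched with a deliberately tiny, self-contained example: the “photos” are just numbers drawn from a Gaussian, the generator and discriminator are single affine/logistic maps, and the gradients are worked out by hand. This is an illustrative toy under those assumptions, not a working face generator:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real photos" are stand-ins: scalar samples from a Gaussian centered at 4.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator g(z) = a*z + b turns random noise into a synthetic sample.
# Discriminator D(x) = sigmoid(w*x + c) scores how "real" a sample looks.
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr, batch = 0.05, 64

for _ in range(3000):
    x = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    gz = a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * x + c), sigmoid(w * gz + c)
    w -= lr * np.mean(-(1 - d_real) * x + d_fake * gz)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator step: push D(fake) toward 1 (non-saturating loss),
    # i.e. learn to produce samples the discriminator calls real.
    d_fake = sigmoid(w * (a * z + b) + c)
    a -= lr * np.mean(-(1 - d_fake) * w * z)
    b -= lr * np.mean(-(1 - d_fake) * w)

# With luck, the generator's output distribution drifts toward the real one
# (mean near 4), though toy GANs like this one can oscillate.
fake = a * rng.normal(0.0, 1.0, 1000) + b
print("mean of generated samples:", fake.mean())
```

Each round, the discriminator nudges its parameters to score real samples higher than fakes, and the generator nudges its parameters so its fakes score higher; at equilibrium the fakes become statistically hard to tell apart from the real samples.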

For instance, artist Bill Posters used deepfake technology to create a fake video of Mark Zuckerberg, saying that Facebook’s mission is to manipulate its users.

Real enough?

How about this: consider having Paris Hilton’s famous quote, “If you don’t even know what to say, just be like, ‘That’s hot,’” delivered instead by Vladimir Putin, President of Russia. Those who know neither figure might believe that Putin is a Playboy editor-in-chief.

Yes, we can all laugh at these fake jokes. But when something becomes this popular, it comes with a price.

Originally, deepfake content was created by an online user of the same name for the purpose of entertainment, as the user had put it.

Yes, that “entertainment” meant pornography.

The biggest problem with deepfakes is that it is challenging to tell the difference and figure out which version is the original. It has become more than just superimposing one face onto another.

Researchers found that more than 95% of deepfake videos were pornographic, and 99% of those videos had faces replaced with female celebrities. Experts explained that these fake videos lead to the weaponization of artificial intelligence used against women, perpetuating a cycle of humiliation, harassment, and abuse.

How do you spot the difference?

As mentioned earlier, the algorithms are fast learners, so with every breath we take, deepfake media becomes more realistic. Luckily, research showed that deepfake faces do not blink normally, or even blink at all. That sounds like one easy method to remember. Well, let’s not get ahead of ourselves just yet. When it comes to machine learning, nearly every problem gets corrected as soon as it is revealed. That is how the algorithms learn. So, unfortunately, the famous blink issue has already been solved.
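For a sense of how such a heuristic worked, here is a minimal sketch of blink counting via the eye-aspect ratio (EAR), a standard landmark-based measure; the landmark coordinates, threshold, and frame counts below are illustrative assumptions, and a real detector would obtain per-frame landmarks from a face-tracking model:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR from six (x, y) eye landmarks ordered p1..p6 around the eye:
    (|p2-p6| + |p3-p5|) / (2 * |p1-p4|). It collapses toward 0 as the eye closes."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])   # upper/lower lid distance, inner
    v2 = np.linalg.norm(eye[2] - eye[4])   # upper/lower lid distance, outer
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal eye width
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks in a per-frame EAR series: a blink is a run of at least
    `min_frames` consecutive frames with EAR below `threshold`."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks
```

With open-eye landmarks such as (0, 0), (1, 1), (3, 1), (4, 0), (3, -1), (1, -1), the EAR comes out to 0.5; a closed eye drives it toward 0, and an unnaturally low blink count over a long clip was the tell.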

But not so fast. We humans may not learn as quickly as machines, but we can be attentive and creative, which are qualities that tin cans cannot possess, at least for now.
It only takes extra attention to detect a deepfake. Ask these questions to figure out the magic:

Does the skin look airbrushed?
Does the voice synchronize with the mouth movements?
Is the lighting natural, or does it make sense to have that lighting on that person’s face?

For example, the background may be dark, but the person may be wearing a pair of shiny glasses reflecting the sun’s rays.

Oftentimes, deepfake content is labeled as deepfake because creators want to present themselves as artists and show off their work.
In 2018, software named Deeptrace was developed to detect deepfake content. A Deeptrace Labs report found that deepfake videos are proliferating online and that their rapid growth is “supported by the growing commodification of tools and services that lower the barrier for non-experts—from well-maintained open source libraries to cheap deepfakes-as-a-service websites.”

The pros and cons of deepfake

It may be self-explanatory to name the cons, but here are some of the risks deepfakes pose:

  • Destabilization: the misuse of deepfakes can destabilize politics and international relations by falsely implicating political figures in scandals.
  • Cybersecurity: the technology can also undermine cybersecurity, for example by having fake political figures incite aggression.
  • Fraud: audio deepfakes can clone voices to convince people that they are talking to a real person and induce them to give away private information.

Well then, are there any pros to deepfake technology other than its entertainment value? Surprisingly, a few:

  • Accessibility: deepfakes can create various vocal personas that turn text into speech, which can help people with speech impediments.
  • Education: deepfakes can deliver innovative lessons that are more engaging and interactive than traditional lessons. For example, a deepfake can bring famous historical figures back to life to explain what happened during their time. Used responsibly, deepfake technology can serve as a better learning tool.
  • Creativity: instead of hiring a professional narrator, artificial storytelling using audio deepfakes can tell a captivating story at a fraction of the cost.

If people use deepfake technology with high ethical and moral standards, it can create opportunities for everyone.

Case

In a recent custody dispute in the UK, the mother presented an audio file to prove that the father had no right to take away their child. In the audio, the father was heard making a series of violent threats towards his wife.

The audio file was compelling evidence, and just when people thought the mother would be the one to walk out with a smile on her face, the father’s attorney sensed something was not right. The attorney challenged the evidence, and forensic analysis revealed that the audio had been doctored using deepfake technology.

This lawsuit is still pending. But do you see any other problems here? We are living in an era where evidence tampering is easily available to anyone with the Internet. Figuring out whether evidence has been altered will require ever more scrutiny.

Current legislation on deepfake.

The National Defense Authorization Act for Fiscal Year 2021 (“NDAA”), which became law as Congress voted to override former President Trump’s veto, also requires the Department of Homeland Security (“DHS”) to issue an annual report for the next five years on manipulated media and deepfakes.

So far, only three states have taken action against deepfake technology.
On September 1, 2019, Texas became the first state to prohibit the creation and distribution of deepfake content intended to harm candidates for public office or influence elections.
Similarly, California bans the creation of “videos, images, or audio of politicians doctored to resemble real footage within 60 days of an election.”
Also in 2019, Virginia banned deepfake pornography.

What else does the law say?

Deepfakes are not illegal per se. But depending on the content, a deepfake can breach data protection law, infringe copyright, or constitute defamation. Additionally, sharing non-consensual intimate content, as in a revenge porn crime, is punishable depending on state law. For example, in New York City, the penalties for committing a revenge porn crime are up to one year in jail and a fine of up to $1,000 in criminal court.

Henry Ajder, head of threat intelligence at Deeptrace, raised another issue: “plausible deniability,” where deepfake can wrongfully provide an opportunity for anyone to dismiss actual events as fake or cover them up with fake events.

What about the First Amendment rights?

The First Amendment of the U.S. Constitution states:
“Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.”

There is no doubt that injunctions against deepfakes are likely to face First Amendment challenges; the First Amendment will be the biggest hurdle to overcome. Even if a lawsuit survives, a lack of jurisdiction over extraterritorial publishers would inhibit its effectiveness, and injunctions will not be granted except under particular circumstances, such as obscenity or copyright infringement.

How does defamation law apply to deepfake?

Will defamation law apply to deepfakes?

Defamation is a statement that injures a third party’s reputation. To prove defamation, a plaintiff must show all four of the following elements:

1) a false statement purporting to be fact;

2) publication or communication of that statement to a third person;

3) fault amounting to at least negligence; and

4) damages, or some harm caused to the person or entity who is the subject of the statement.

As you can see, deepfake claims are not likely to succeed under defamation law because it is difficult to prove that the content was intended as a statement of fact. All a defendant needs to protect themselves from a defamation claim is the word “fake” somewhere in the content. To make it less of a drag, they can simply say that they used deep“fake” to publish their content.

Pursuing a defamation claim against nonconsensual deepfake pornography also poses a problem. The central issue of such a claim is the lack of consent, and our current defamation law fails to address whether the victim consented to the publication.

To reflect the transformative impact of artificial intelligence, I would suggest enacting new legislation to regulate AI-backed technologies like deepfakes. Perhaps this could lower the hurdle that plaintiffs face.
What are your suggestions regarding deepfakes? Share your thoughts!


The Alarming Side of YouTube

Social media has now become an integrated part of an individual’s life. From Facebook to Twitter, Instagram, and Snapchat to the latest addition, TikTok, social media has made its way into a person’s life and occupies the same place as eating, sleeping, and exercising. There is no denying the dopamine hit you get from posting on Instagram or scrolling endlessly: liking, sharing, commenting, and re-sharing. From checking your notifications and convincing yourself, “Right, just five minutes, I am going to check my notifications,” to spending hours on social media, it is a mixed bag. While I find that being on social media is, to an extent, a way to relax and alleviate stress, I also believe social media and its influence on people’s lives should not cross a certain threshold.

We all like a good laugh. We get a good laugh from people doing funny things on purpose or from people pranking others. Most individuals nowadays use some sort of social media platform to watch or make content. YouTube is one such platform. After Google, YouTube is the most visited website on the internet. Every day, about a billion hours of video are watched by people all over the world. I myself contribute to those billion hours.

Now imagine you are on YouTube and start watching a famous YouTuber’s videos, then realize the video is not only disturbing but also very offensive. You stop watching. That’s it. You think it is a horrible video and think no more of it. On the contrary, there have been videos on YouTube that have caused mass controversy all over the internet since the platform’s birth in 2005. Let us now explore the dark side of YouTube.

There is an industry centered on pranks played on members of the public that is less about humor and more about shock value. There is nothing wrong with a harmless prank, but when doing one, you must consider how your actions are perceived by others; one wrong move and you could end up facing charges or a conviction.

Across social media platforms there are many creators of such prank videos, and not all of them have been well received by the public or by the creators’ fans. In one such incident, YouTube content creators Alan and Alex Stokes, who are known for their gag videos, pleaded guilty to charges stemming from fake bank robberies they staged.

The twins wore black clothes and ski masks and carried cash-filled duffle bags for a video in which they pretended to have robbed a bank. They then ordered an Uber, whose driver, unaware of the prank, refused to drive them. An onlooker called the police, believing that the twins had robbed a bank and were attempting to carjack the vehicle. Police arrived at the scene and held the driver at gunpoint until it was determined that the incident was a prank. The brothers were not charged and were let off with a warning. However, they pulled the same stunt at a university some four hours later and were arrested.

They were charged with one felony count of false imprisonment by violence, menace, fraud, or deceit, and one misdemeanor count of falsely reporting an emergency. The charges carry a maximum penalty of five years in prison. “These were not pranks. These are crimes that could have resulted in someone getting seriously injured or even killed,” said Todd Spitzer, Orange County district attorney.

The brothers accepted a bargain from the judge: in return for a guilty plea, the felony count would be reduced to a misdemeanor, resulting in one year of probation, 160 hours of community service, and compensation. The plea was entered despite the prosecution’s position that tougher charges were warranted. The judge also warned the brothers, who have over 5 million YouTube subscribers, not to make such videos again.

Analyzing the scenario above, I would agree with the district attorney. Making prank videos and racking up views should not come at the cost of inciting fear and panic in the community. The situation with the police could have escalated severely, which might have led to a more gruesome outcome. The twins were very lucky; the man filming a prank video in Tennessee, in the next incident, was not.

While filming a YouTube prank video, 20-year-old Timothy Wilks was shot dead in the parking lot of an Urban Air indoor trampoline park. David Starnes Jr. admitted to shooting Wilks when Wilks and an unnamed individual, wielding butcher knives, approached him and a group of people and lunged at them. David told the police that he shot in defense of himself and others.

Wilks’s friend said they were filming a video of a robbery prank for their YouTube channel. It was supposed to be a recorded YouTube video capturing the terrified reactions of their prank victims. David was unaware of the prank and pulled out his gun to protect himself and others. No one has been charged yet in regard to the incident.

The above incident is an example of how pranks can go horribly wrong and result in irreparable damage. This poses the question: who do you blame, the 20-year-old staging a very dangerous prank video, or the 23-year-old who fired his gun in response?

Monalisa Perez, a YouTuber from Minnesota, fatally shot her boyfriend, Pedro Ruiz, while attempting to film a stunt in which she fired a gun at him from 30 cm away, with only a 1.5-inch-thick book to protect him. Perez pleaded guilty to second-degree manslaughter and was sentenced to six months’ imprisonment.

Perez and her boyfriend Ruiz documented their everyday lives in Minnesota by posting prank videos on YouTube to gain views. Before the fatal stunt, Perez tweeted, “Me and Pedro are probably going to shoot one of the most dangerous videos ever. His idea, not mine.”

Perez had experimented beforehand and thought that the hardback encyclopedia would be enough to stop the bullet. She fired a .50-calibre Desert Eagle, an extremely powerful handgun; the bullet pierced the encyclopedia and fatally wounded Ruiz.

Perez will serve a 180-day jail term and 10 years of supervised probation, is banned for life from owning firearms, and may make no financial gain from the case. The sentence is below the minimum guidelines, but it was allowed on the grounds that the stunt was mostly Ruiz’s idea.

Dangerous pranks such as this one have left a man dead and a mother of two grieving after fatally shooting her partner.

In response to the growing concerns over the filming of various trends and videos, YouTube has updated its policies regarding “harmful and dangerous” content and explicitly banned pranks and challenges that may cause immediate or lasting physical or emotional harm. The policies page lists three types of videos that are now prohibited: 1) challenges that encourage acts with an inherent risk of severe harm; 2) pranks that make victims believe they are in physical danger; and 3) pranks that cause emotional distress to children.

Prank videos may depict the dark side of how content creation can go wrong, but they are not the only ones. In 2017, YouTuber Logan Paul became the source of controversy after posting a video of himself in a Japanese forest called Aokigahara, near the base of Mount Fuji. Aokigahara is a dense forest with lush trees and greenery. The forest, however, is infamously known as the “suicide forest”: it is a frequent site of suicides and is also considered haunted.

Upon entering the forest, the YouTuber came across a dead body hanging from a tree. Logan Paul’s actions and conduct around the body are what caused controversy and outrage. The video has since been taken down from YouTube. Logan Paul posted an apology video defending his actions, which did nothing to quell the anger on the internet. He then released a second video in which he could be seen tearing up on camera. In addressing the video, YouTube expressed condolences and stated that it prohibits content that is shocking or disrespectful. Paul lost the ability to make money on his videos through advertisements, a penalty known as demonetization. He was also removed from the Google Preferred program, through which brands sell advertising to content creators on YouTube.

The consequences of Logan Paul’s actions did not end there. A production company is suing the YouTuber on the claim that the Aokigahara video cost the company a multimillion-dollar licensing agreement with Google. The video caused Google to end its relationship with Planeless Pictures, the production company, and not pay the $3.5 million. Planeless Pictures is now suing Paul, demanding that he pay that amount as well as additional damages and legal fees.

That is not all. YouTube has been filled with controversies that have resulted in lawsuits.

A YouTuber by the name of Kanghua Ren was fined $22,300 and sentenced to 15 months’ imprisonment for filming himself giving a homeless man an Oreo filled with toothpaste. He gave the man 20 euros and Oreo cookies whose cream filling had been replaced with toothpaste. The video depicts the homeless man vomiting after eating the cookie. In the video, Ren stated that although he had gone a bit far, the act would help clean the homeless person’s teeth. The court did not take this lightly and sentenced him; the judge stated that this was not an isolated act and that Ren had shown cruel behaviour towards vulnerable victims.

These are some of the pranks and videos that have gained online notoriety. There are many others that have portrayed child abuse, followed the trend of eating Tide Pods, or shared anti-Semitic videos and racist remarks. The most disturbing thing about these videos is that they are viewed not only by adults but also by children. In my opinion, these videos could be construed as having some influence on young individuals.

YouTube is a diverse platform, home to millions of content creators. Since its inception it has served as a mode of entertainment and a means of income for many individuals. From posting cat videos online to making intricate, detailed, and well-directed short films, YouTube has revolutionized the spectrum of video and content creation. As an avid viewer of many channels on YouTube, I find that incidents like these give YouTube a bad name. Proper policies and guidelines should be enacted and enforced, and, if necessary, government supervision may also be exercised.

The Dark Side of Tik-Tok

In Bethany, Oklahoma, a 12-year-old child was found dead with strangulation marks on his neck. According to police, this was not murder or suicide, but a TikTok challenge that had gone horribly wrong. The challenge is known by a variety of names, including the Blackout Challenge, the Pass Out Challenge, Speed Dreaming, and The Fainting Game. It involves kids asphyxiating themselves, either by choking themselves by hand or with a rope or belt, to obtain a feeling of euphoria when they wake up.

Even if the challenge does not result in death, medical professionals warn that it is extremely dangerous. Every moment without oxygen or blood flow risks irreversible damage to a portion of the brain.

Unfortunately, the main goal on social media is to gain as many views as possible, regardless of the danger or expense.

Because of the pandemic, kids have been spending a lot of time alone and bored, which has led preteens to participate in social media challenges.

Some social media challenges are harmless, such as the 2014 Ice Bucket Challenge, which earned millions of dollars for ALS research.

However, there has also been the Benadryl Challenge, which began in 2020 and urged people to overdose on the drug in an effort to hallucinate, and the coronavirus challenge, in which people were urged to lick surfaces in public.

One of the latest “challenges” on the social media app TikTok could have embarrassing consequences users never imagined possible. The idea of the Silhouette Challenge is to shoot a video of yourself dancing as a silhouette, with a red filter covering up the details of your body. It started as a way to empower people but has turned into a trend that could come back to haunt participants. They generally start the video in front of the camera fully clothed; when the music changes, the user appears in less clothing, or nude, as a silhouette obscured by a red filter. But the challenge has been hijacked by people using software to remove that filter and reveal the original footage.

“If these filters are removed, that can certainly create an environment where kids’ faces are being put out in the public domain, and their bodies are being shown in ways they didn’t anticipate,” said Mekel Harris, a licensed pediatric and family psychologist. Young people who participate in these types of challenges aren’t thinking about the long-term consequences.

These challenges reveal a darker aspect to the app, which promotes itself as a teen-friendly destination for viral memes and dancing.

TikTok said it would remove such content from its platform. In an updated post to its newsroom, TikTok said:

“We do not allow content that encourages or replicates dangerous challenges that might lead to injury. In fact, it’s a violation of our community guidelines and we will continue to remove this type of content from our platform. Nobody wants their friends or family to get hurt filming a video or trying a stunt. It’s not funny – and since we remove that sort of content, it certainly won’t make you TikTok famous.”

TikTok urged users to report videos containing the challenge. And it told BBC News there was now text reminding users to not imitate or encourage public participation in dangerous stunts and risky behavior that could lead to serious injury or death.

While these challenges may seem funny or earn views on social media platforms, they can have long-lasting health consequences.

Because the First Amendment gives strong protection to freedom of speech, only publishers and authors are liable for content shared online. Section 230(c)(1) of the Communications Decency Act of 1996 states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This provision gives social media companies immunity for content published by other authors on their platforms, as long as intellectual property rights are not infringed. Although the law does not require social media sites to regulate their content, they may still remove content at their discretion. Guidelines on the laws governing discretionary content censorship are sparse. Because the government is not regulating speech, this power has fallen into the hands of social media giants like TikTok. Inevitably, the personal agendas of these companies are shaping conversations, highlighting the necessity of debating the place of social media platforms in the national media landscape.

THE ROLE OF SOCIAL MEDIA:

Social media is unique in that it offers a huge public platform, instant access to peers, and measurable feedback in the form of likes, views, and comments. This creates strong incentives to get as much favorable peer evaluation and approval as possible. Social media challenges are particularly appealing to adolescents, who look to their peers for cues about what’s cool, crave positive reinforcement from their friends and social networks, and are more prone to risk-taking behaviors, particularly when they’re aware that those whose approval they covet are watching them.

Teens won’t necessarily stop to consider that laundry detergent is a poison that can burn their throats and damage their airways, or that misusing medications like diphenhydramine (Benadryl) can cause serious heart problems, seizures, and coma. What they will focus on is that a popular kid in class did this and got hundreds of likes and comments.

WHY ARE TEENS SUSCEPTIBLE:

Children become biologically much more susceptible to peer influence during puberty, and social media has magnified those peer-influence processes, making them significantly more dangerous than ever before. Teens may find these activities entertaining and even thrilling, especially if no one is hurt, which increases their likelihood of participating. Teens are already less capable of evaluating danger than adults, so when friends reward them for taking risks – through likes and comments – it can act as a disinhibitor. These youngsters are influenced on an unconscious level, and the internet issues prevalent today are impossible for them to avoid without parental engagement.

WHAT WE CAN DO TO CONTROL THE SITUATION:

Due to their lack of exposure to these effects as children, parents today struggle to address the risks of social media use with their children.

Even so, parents should address viral trends with their children. Parents should check their children’s social media history and talk with them about their online activities, block certain social media sites where appropriate, and educate themselves on what may be lurking behind their child’s screen.

In the case of viral trends, determine your child’s level of familiarity with any trends you may have heard about before soliciting their opinion. You might ask why they think others follow the trend and what they believe some of the associated risks are. Use this opportunity to explain why a certain trend concerns you.

HOW TO COPE WITH SOCIAL MEDIA USAGE:

It’s important to keep in mind that taking a break is completely appropriate. You are not required to join every discussion, and disabling your notifications may provide some breathing space. You can set regular reminders to keep track of how long you’ve been using a certain app.

If you’re seeing a lot of unpleasant content in your feed, consider muting or blocking particular accounts or reporting the content to the social media company.

If anything you read online makes you feel anxious or frightened, communicate your feelings to someone you trust. Assistance may come from a friend, a family member, a teacher, a therapist, or a helpline. You are not alone, and seeking help is completely OK.

Social media is a natural part of life for young people, and although it may have a number of advantages, it is essential that platforms like TikTok take responsibility for harmful content on their sites.

I welcome the government’s plan to create a regulator to guarantee that social media companies handle cyberbullying and posts encouraging self-harm and suicide.

Additionally, we must ensure that schools teach children what to do if they come across upsetting content online, as well as how to use the internet in a way that benefits their mental health.

To reduce the likelihood of misuse, protections must be implemented.

MY QUESTION TO YOU ALL:

How can social media companies improve their moderation so that children are not left to fend for themselves online? What can they do to improve their in-app security?
