Update Required: An Analysis of the Conflict Between Copyright Holders and Social Media Users

Opening

For anyone as chronically online as yours truly, it is a familiar sight: our favorite social media influencers, artists, commentators, and content creators complaining about their run-ins with the current US Intellectual Property (IP) system. Whether their posts are deleted without explanation or portions of their video files are muted, the factors leading to copyright issues on social media are endless. This, in turn, has a markedly negative impact on free and fair expression on the internet, especially within the context of our contemporary online culture. For better or worse, social interaction today is intertwined with the services of social media sites. Conflict arises when the interests of copyright holders clash with this reality. Those holders are empowered by byzantine and unrealistic laws that hamper our ability to exist online as freely as we do in real life. While they do have legitimate and fundamental rights that need to be protected, such rights must be balanced against desperately needed reform. People’s interaction with society and culture must not be hampered, for that is one of the many foundations of a healthy and thriving society. To understand this, I venture to analyze the current legal infrastructure we find ourselves in.

Current Relevant Law

The current controlling laws for copyright issues on social media are the Copyright Act of 1976 and the Digital Millennium Copyright Act (DMCA). The DMCA is most relevant to our analysis; it gives copyright holders relatively unrestrained power to demand removal of their property from the internet and to punish those who use illegal methods to get ahold of it. This broad law, of course, impacted social media sites. Title II of the law added 17 U.S. Code § 512 to the Copyright Act of 1976, creating several safe harbor provisions for online service providers (OSPs), such as social media sites, when hosting content posted by third parties. The most relevant of these safe harbors is 17 U.S. Code § 512(c), which states that an OSP cannot be held liable for monetary damages if it meets several requirements, including giving copyright holders a quick and easy way to claim their property. The mechanism, known as a “notice and takedown” procedure, varies by social media service and is outlined in each service’s terms and conditions (YouTube, Twitter, Instagram, TikTok, Facebook/Meta). Regardless, they all have a complaint form or application that follows the rules of the DMCA and usually will rapidly strike objectionable social media posts by users. 17 U.S. Code § 512(g) does provide the user some leeway with an appeal process, and § 512(f) imposes liability on those who send unjustified takedowns. Nevertheless, a perfect balance of rights is not achieved.

The doctrine of fair use, codified as 17 U.S. Code § 107 via the Copyright Act of 1976, also plays a massive role here. It established a legal pathway for the use of copyrighted material for “purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research” without having to acquire rights to said IP from the owner. This legal safety valve has been a blessing for social media users, especially with recent victories like Hosseinzadeh v. Klein, which protected reaction content from DMCA takedowns. Cases like Lenz v. Universal Music Corp. further established that copyright holders must consider fair use when preparing takedowns. Nevertheless, genuine copyright holders still fail to consider those rights, and sites are quick to react to DMCA complaints. Furthermore, the flawed reporting systems of social media sites invite abuse by unscrupulous actors who fake ownership. On top of that, such legal actions can be psychologically and financially intimidating, especially when facing off against a major IP holder, adding to the unbalanced power dynamic between the holder and the poster.

The Telecommunications Act of 1996, which focuses primarily on cellular and landline carriers, is also particularly relevant to social media companies in this conflict. At the time of its passing, the internet was still in its infancy; thus, the Act does not reflect the cultural paradigm we now find ourselves in. Specifically, the contentious Section 230 of the Communications Decency Act (Title V of the 1996 Act) works against social media companies in this instance, carving copyright infringement out of its otherwise broad protections. 47 U.S. Code § 230(e)(2) states in no uncertain terms that “nothing in this section shall be construed to limit or expand any law pertaining to intellectual property.” This has been interpreted in Perfect 10, Inc. v. CCBill LLC to mean that Section 230 immunity does not extend to intellectual property claims, leaving such companies exposed to liability for user copyright infringement. This gap in the protective armor of Section 230 is a great concern to such companies; therefore, they react strongly to such issues.

What Is To Be Done?

Arguably, fixing the issues around copyright on social media is far beyond the capacity of current legal mechanisms. With billions of posts each day across various sites, policing by copyright holders and sites alike is far beyond reason. It will take serious reform in the socio-cultural, technological, and legal arenas before a true balance of liberty and justice can be established. Perhaps we can start with an understanding by copyright holders not to overreact when their property is posted online. Popularity is key to success in business, so shouldn’t you value the free marketing that comes with your copyrighted property being shared honestly within the cultural sphere of social media? Social media sites can also expand their DMCA case management teams or build tools that let users credit, and even share revenue with, the copyright holder. Finally, congressional action is desperately needed, as we have entered a new era that requires new laws. That said, achieving a balance between the free exchange of ideas and creations and the rights of copyright holders must be the cornerstone of the government’s approach to socio-cultural expression on social media. That is the only way we can progress as an ever more online society.

 


Jonesing For New Regulations of Internet Speech

From claims that the moon landing was faked to Area 51, the United States loves its conspiracy theories. In fact, a study sponsored by the University of Chicago found that more than half of Americans believe at least one conspiracy theory. While this is not a new phenomenon, the increasing use of and reliance on social media has allowed misinformation and harmful ideas to spread with a level of ease that wasn’t possible even twenty years ago.

Individuals with a large platform can express an opinion that harms the people personally implicated in the ‘information’ being spread. Presently, a plaintiff’s best option for challenging harmful speech is a claim for defamation. The inherent problem is that opinions are protected by the First Amendment and, thus, not actionable as defamation.

This leaves injured plaintiffs limited in their available remedies because statements in the context of the internet are more likely to be seen as an opinion. The internet has created a gap where we have injured plaintiffs and no available remedy. With this brave new world of communication, interaction, and the spread of information by anyone with a platform comes a need to ensure that injuries sustained by this speech will have legal recourse.

Recently, Alex Jones lost a defamation claim and was ordered to pay $965 million to the families of the Sandy Hook victims after claiming that the Sandy Hook shooting that occurred in 2012 was a “hoax.” Although the families prevailed at trial, the statements that were the subject of the suit do not fit neatly into the well-established law of defamation, which makes reversal on appeal likely.

The elements of defamation require that the defendant publish a false statement purporting it to be true, which results in some harm to the plaintiff. However, just because a statement is false does not mean that the plaintiff can prove defamation because, as the Supreme Court has recognized, false statements still receive certain First Amendment protections. In Milkovich v. Lorain Journal Co., the Court held that “imaginative expression” and “loose, figurative, or hyperbolic language” are protected by the First Amendment.

The characterization of something as a “hoax” has been held by courts to fall into this category of protected speech. In Montgomery v. Risen, a software developer brought a defamation action against an author who made a statement claiming that plaintiff’s software was a “hoax.” The D.C. Circuit held that characterization of something as an “elaborate and dangerous hoax” is hyperbolic speech, which creates no basis for liability. This holding was mirrored by several courts, including the District Court of Kansas in Yeager v. National Public Radio, the District Court of Utah in Nunes v. Rushton, and the Superior Court of Delaware in Owens v. Lead Stories, LLC.

The other statements Alex Jones made regarding Sandy Hook are also hyperbolic language. These statements include: “[i]t’s as phony as a $3 bill”, “I watched the footage, it looks like a drill”, and “my gut is… this is staged. And you know I’ve been saying the last few months, get ready for big mass shootings, and then magically, it happens.” While these statements are offensive and cruel to the suffering families, it is difficult to characterize them as objective claims of fact. ‘Phony’, ‘my gut is’, ‘looks like’, and ‘magically’ qualify his statements as subjective opinion based on his interpretation of the events that took place.

It is indisputable that the statements Alex Jones made caused harm to these families. They have been subjected to harassment, online abuse, and death threats from his followers. However, no matter how harmful these statements are, that does not make them defamation. Despite this, the jury was so appalled by this conduct that it found for the plaintiffs. This is essentially reverse jury nullification: the jury decided that Jones was culpable and should be held legally responsible even if there is no adequate basis for liability.

The jury’s determination demonstrates that current legal remedies are inadequate to regulate potentially harmful speech that can spread like wildfire on the internet. The influence that a person like Alex Jones has over his followers establishes a need for new or updated laws that hold public figures to a higher standard even when they are expressing their opinion.

A possible starting point for regulating harmful internet speech at a federal level might be through the commerce clause, which allows Congress to regulate instrumentalities of commerce. The internet, by its design, is an instrumentality of interstate commerce, enabling the communication of ideas across state lines.

Further, the Federal Anti-Riot Act, which was passed in 1968 to suppress civil rights protestors, might be an existing law that can serve this purpose. This law makes it a felony to use a facility of interstate commerce to (1) incite a riot; or (2) organize, promote, encourage, participate in, or carry on a riot. Further, the act defines riot as:

 [A] public disturbance involving (1) an act or acts of violence by one or more persons part of an assemblage of three or more persons, which act or acts shall constitute a clear and present danger of, or shall result in, damage or injury to the property of any other person or to the person of any other individual or (2) a threat or threats of the commission of an act or acts of violence by one or more persons part of an assemblage of three or more persons having, individually or collectively, the ability of immediate execution of such threat or threats, where the performance of the threatened act or acts of violence would constitute a clear and present danger of, or would result in, damage or injury to the property of any other person or to the person of any other individual.

Under this definition, we might have a basis for holding Alex Jones accountable for organizing, promoting, or encouraging a riot through a facility (the internet) of interstate commerce. The acts of his followers in harassing the families of the Sandy Hook victims might constitute a public disturbance within this definition because it “result[ed] in, damage or injury… to the person.” While this demonstrates one potential avenue of regulating harmful internet speech, new laws might also need to be drafted to meet the evolving function of social media.

In the era of the internet, public figures have an unprecedented ability to spread misinformation and incite lawlessness. This is true even if their statements would typically constitute an opinion because the internet makes it easier for groups to form that can act on these ideas. Thus, in this internet age, it is crucial that we develop a means to regulate the spread of misinformation that has the potential to harm individual people and the general public.

Miracles Can Be Misleading

Want to lose 20 pounds in 4 days? Try this *insert any miracle weight-loss product* and you’ll be skinny in no time!

Miracle weight-loss products (MWLP) are dietary supplements that either suppress appetite or forcefully induce weight loss. These products are not approved or indicated by pharmaceutical regulators for weight loss. Social media users are continuously bombarded with the newest weight-loss products via targeted advertisements and endorsements from their favorite influencers. Users are force-fed false promises of achieving the picture-perfect body while companies profit off their delusions. Influencer marketing has grown significantly as social media becomes more and more prevalent: 86 percent of women use social media for purchasing advice, and 70 percent of teens trust influencers more than traditional celebrities. If you’re on social media, then you’ve seen your favorite influencer endorsing some form of MWLP, and you probably thought to yourself, “well, if Kylie Jenner is using it, it must be legit.”

The advertisements of MWLP promote an unrealistic and oversexualized body image. This trend of selling skinny has detrimental consequences, often leading to body image issues such as body dysmorphia and various eating disorders. In 2011, the Florida House Experience conducted a study among 1,000 men and women. The study revealed that 87 percent of women and 65 percent of men compare their bodies to those they see on social media. Of the 1,000 subjects, 50 percent of the women and 37 percent of the men viewed their bodies unfavorably when compared to those they saw on social media. In 2019, Project Know, a nonprofit organization that studies addictive behaviors, conducted a study which suggested that social media can worsen genetic and psychological predispositions to eating disorders.

Who Is In Charge?

The collateral damage that advertisements of MWLP inflict on social media users’ body image is a societal concern. As the world becomes more digital, even more creators of MWLP are going to rely on influencers to generate revenue for their products. But who is in charge of monitoring the truthfulness of these advertisements?

In the United States, the Federal Trade Commission (FTC) and the Food and Drug Administration (FDA) are the two federal regulators responsible for promulgating regulations relating to dietary supplements and other MWLP. While the FDA is responsible for the labeling of supplements, they lack jurisdiction over advertising. Therefore, the FTC is primarily responsible for advertisements that promote supplements and over-the-counter drugs.

The FTC regulates MWLP advertising through the Federal Trade Commission Act of 1914 (the Act). Sections 5 and 12 of the Act collectively prohibit “false advertising” and “deceptive acts or practices” in the marketing and sale of consumer products, and grant the FTC authority to take action against offending companies. An advertisement violates the Act when it is false, misleading, or unsubstantiated. An advertisement is false or misleading when it contains an “objective, material representation that is likely to deceive consumers acting reasonably under the circumstances.” An advertisement is unsubstantiated when it lacks “a reasonable basis for its contained representation.” With the rise of influencer marketing, the Act also requires influencers to clearly disclose when they have a financial or other relationship with the product they are promoting.

Under the Act, the FTC has taken action against companies that falsely advertise MWLP. The FTC typically brings enforcement claims against companies by alleging that the advertiser’s claims lack substantiation. To determine the specific level and type of substantiation required, the FTC considers what are known as the “Pfizer factors,” established in In re Pfizer. These factors include:

    • The type and specificity of the claim made.
    • The type of product.
    • The possible consequences of a false claim.
    • The degree of reliance by consumers on the claims.
    • The type, and accessibility, of evidence adequate to form a reasonable basis for making the particular claims.

In 2014, the FTC applied the Pfizer factors when it brought an enforcement action seeking a permanent injunction against Sensa Products, LLC. Since 2008, Sensa had sold a powder weight-loss product that could allegedly make an individual lose 30 pounds in six months without dieting or exercise. The company advertised the product via print, radio, endorsements, and online ads. The FTC claimed that Sensa’s marketing techniques were false and deceptive because they lacked evidence to support their health claims, i.e., losing 30 pounds in six months. The FTC additionally claimed that Sensa violated the Act by failing to disclose that its endorsers were given financial incentives for their customer testimonials. Ultimately, Sensa settled, and the FTC was granted the permanent injunction.

What Else Can We Do?

Currently, the FTC, utilizing its authority under the Act, is the main legal recourse for removing these deceitful advertisements from social media. Unfortunately, social media platforms, such as Facebook, Twitter, and Instagram, cannot be held liable for the posts of other users. Under Section 230 of the Communications Decency Act, “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” That means social media platforms cannot be held responsible for misleading advertisements of MWLP, regardless of whether the advertisement comes through an influencer or the company’s own social media page, and regardless of the collateral consequences these advertisements create.

However, there are other courses of action that social media users and platforms have taken to prevent these advertisements from poisoning the body images of users. Many social media influencers and celebrities have risen to the occasion to get MWLP advertisements removed. In fact, in 2018, Jameela Jamil, an actress starring on The Good Place, launched an Instagram account called I Weigh, which “encourages women to feel and look beyond the flesh on their bones.” Influencer activism has led Instagram and Facebook to block users under the age of 18 from viewing posts advertising certain weight-loss products or other cosmetic procedures. While these are small steps in the right direction, more work certainly needs to be done.

What Evidence is Real in a World of Digitally Altered Material?

Imagine you are prosecuting a child pornography case and have incriminating chats made through Facebook showing the Defendant coercing and soliciting sexually explicit material from minors.  Knowing that you will submit these chats as evidence in trial, you acquire a certificate from Facebook’s records custodian authenticating the documents.  The custodian provides information that confirms the times, accounts and users.  That should be enough, right?

Wrong. Your strategy relies on the legal theory that chats made through a third-party provider fall into a hearsay exception known as the “business records exception.” Under Federal Rule of Evidence 902(11), “records of a regularly conducted activity” that fall within the hearsay exception of Rule 803(6) (more commonly known as the “business records exception”) are self-authenticating and may be authenticated by way of a certificate from the records custodian. (Fed. R. Evid. 902(11); United States v. Browne, 834 F.3d 403 (3d Cir. 2016)).

Why does this certification fail to actually show authenticity? The Third Circuit answers that there must be additional, outside (extrinsic) evidence establishing the relevance of the records. (United States v. Browne, 834 F.3d 403 (3d Cir. 2016)).

Relevance is another legal concept where “its existence simply has some ‘tendency to make the existence of any fact that is of consequence to the determination of the action more probable or less probable than it would be without the evidence.’”  (United States v. Jones, 566 F.3d 353, 364 (3d Cir. 2009) (quoting Fed. R. Evid. 401)).  Put simply, the existence of this evidence has a material effect on the evaluation of an action.

In Browne, the Third Circuit says the “business records exception” is not enough because Facebook chats are fundamentally different from business records. Business records are “supplied by systematic checking, by regularity and continuity which produce habits of precision, by actual experience of business in relying upon them, or by a duty to make an accurate record as part of a continuing job or occupation,” which results in records that can be relied upon as legitimate.

The issue here deals with authenticating the entirety of the chat – not just the timestamps or cached information.  The court delineates this distinction, saying “If the Government here had sought to authenticate only the timestamps on the Facebook chats, the fact that the chats took place between particular Facebook accounts, and similarly technical information verified by Facebook ‘in the course of a regularly conducted activity,’ the records might be more readily analogized to bank records or phone records conventionally authenticated and admitted under Rules 902(11) and 803(6).”

In contrast, Facebook chats are not authenticated based on confirmation of their substance, but instead on the user linked to that account.  Moreover, in this case, the Facebook records certification showed “alleged” activity between user accounts but not the actual identification of the person communicating, which the court found is not conclusive in determining authorship.

The policy concern is that information is easily falsified – accounts may be created with a fake name and email address, or a person’s account may be hacked into and operated by another. As a result of the ruling in Browne, submitting chat logs made through a third party such as Facebook into evidence requires more than verification of technical data. The Browne court describes the second step for evidence to be successfully admitted: there must be extrinsic evidence, that is, additional outside evidence, presented to show that the chat logs really occurred between certain people and that the content is consistent with the allegations. (United States v. Browne, 834 F.3d 403 (3d Cir. 2016)).

When there is enough extrinsic evidence, the “authentication challenge collapses under the veritable mountain of evidence linking [Defendant] and the incriminating chats.”  In the Browne case, there was enough of this outside evidence that the court found there was “abundant evidence linking [Defendant] and the testifying victims to the chats conducted… [and the] Facebook records were thus duly authenticated” under Federal Rule of Evidence 901(b)(1) in a traditional analysis.

The idea that extrinsic evidence must support authentication of evidence collected from third-party platforms is echoed in the Seventh Circuit decision United States v. Barber, 937 F.3d 965 (7th Cir. 2019).  Here, “this court has relied on evidence such as the presence of a nickname, date of birth, address, email address, and photos on someone’s Facebook page as circumstantial evidence that a page might belong to that person.”

The requirement for extrinsic evidence represents a shift from the original rule that the government carries the burden only of “produc[ing] evidence sufficient to support a finding” that the account belonged to the defendant and that the linked messages were actually sent and received by him. United States v. Barber, 937 F.3d 965 (7th Cir. 2019), citing Fed. R. Evid. 901(a); United States v. Lewisbey, 843 F.3d 653, 658 (7th Cir. 2016). Now, “Facebook records must be authenticated through the ‘traditional standard’ of Rule 901.” United States v. Frazier, 443 F. Supp. 3d 885 (M.D. Tenn. 2020).

The bottom line is that Facebook cannot attest to the accuracy of the content of its chats and can only provide specific technical data.  This difference is further supported by a District Court ruling mandating traditional analysis under Rule 901 and not allowing a business hearsay exception, saying “Rule 803(6) is designed to capture records that are likely accurate and reliable in content, as demonstrated by the trustworthiness of the underlying sources of information and the process by which and purposes for which that information is recorded… This is no more sufficient to confirm the accuracy or reliability of the contents of the Facebook chats than a postal receipt would be to attest to the accuracy or reliability of the contents of the enclosed mailed letter.”  (United States v. Browne, 834 F.3d 403, 410 (3rd Cir. 2016), United States v. Frazier, 443 F. Supp. 3d 885 (M.D. Tenn. 2020)).

Evidence from social media is allowed under the business records exception in a select few circumstances. For example, United States v. El Gammal, 831 F. App’x 539 (2d Cir. 2020) presents a case that does find authentication of Facebook’s message logs based on testimony from a records custodian. However, there is an important distinction here: the logs admitted came directly from a “deleted” output, where Facebook itself created the record, rather than a person. Similarly, the Tenth Circuit agreed that “spreadsheets fell under the business records exception and, alternatively, appeared to be machine-generated non-hearsay.” United States v. Channon, 881 F.3d 806 (10th Cir. 2018).

What about photographs – are pictures taken from social media dealt with in the same way as chats when it comes to authentication?  Reviewing a lower court decision, the Sixth Circuit in United States v. Farrad, 895 F.3d 859 (6th Cir. 2018) found that “it was an error for the district court to deem the photographs self-authenticating business records.”  Here, there is a bar on using the business exception that is similar to that found in the authentication of chats, where photographs must also be supported by extrinsic evidence.

While not using the business exception to do so, the court in Farrad nevertheless found that social media photographs were admissible because it would be logically inconsistent to allow “physical photos that police stumble across lying on a sidewalk” while barring “electronic photos that police stumble across on Facebook.”  It is notable that the court does not address the ease with which photographs may be altered digitally, given that was a major concern voiced by the Browne court regarding alteration of digital text.

United States v. Vazquez-Soto, 939 F.3d 365 (1st Cir. 2019) further supports the idea that photographs found through social media need to be authenticated traditionally. Here, the court explains the authentication process, saying “The standard [the court] must apply in evaluating a[n] [item]’s authenticity is whether there is enough support in the record to warrant a reasonable person in determining that the evidence is what it purports to be.” United States v. Vazquez-Soto, 939 F.3d 365 (1st Cir. 2019), quoting United States v. Blanchard, 867 F.3d 1, 6 (1st Cir. 2017) (internal quotation marks omitted); Fed. R. Evid. 901(a). In other words, based on the totality of the evidence, including extrinsic evidence, do you believe the photograph is real? Here, “what is at issue is only the authenticity of the photographs, not the Facebook page” – it does not necessarily matter who posted the photo, only what was depicted.

Against the backdrop of an alterable digital world, courts seek to erect safeguards against falsified information. The cases here represent the beginning of a foray into what measures can realistically be taken to protect ourselves from digital fabrications.

 

https://www.rulesofevidence.org/article-ix/rule-902/

https://www.rulesofevidence.org/article-viii/rule-803/

https://casetext.com/case/united-states-v-browne-12

https://www.courtlistener.com/opinion/1469601/united-states-v-jones/

https://www.rulesofevidence.org/article-iv/rule-401/

https://www.rulesofevidence.org/article-ix/rule-901/

https://casetext.com/case/united-states-v-barber-103

https://casetext.com/case/united-states-v-lewisbey-4

https://casetext.com/case/united-states-v-frazier-175

https://casetext.com/case/united-states-v-el-gammal

https://casetext.com/case/united-states-v-channon-8

https://casetext.com/case/united-states-v-farrad

https://casetext.com/case/united-states-v-vazquez-soto-1

Say Bye to Health Misinformation on Social Media?

A study from the Center for Countering Digital Hate found that social media platforms failed to act on 95% of coronavirus-related disinformation reported to them.

Over the past few weeks, social media companies have been in the hot seat regarding their lack of action against the fake news and misinformation on their platforms, especially information regarding COVID-19 and the vaccine. Even President Biden weighed in, stating that Facebook and other companies were “killing people” by serving as platforms for misinformation about the COVID-19 vaccine. Biden later clarified that he wasn’t accusing Facebook of killing people, but that he wanted the companies to do something about the misinformation and the outrageous claims about the vaccine.

A few weeks later, Senator Amy Klobuchar introduced the Health Misinformation Act, which would create an exemption to Section 230 of the Communications Decency Act. Section 230 has long shielded social media companies from liability for almost any content posted on their platforms. Under the Health Misinformation Act, however, social media companies would be liable for the spread of health-related misinformation. The bill would apply only to platforms that use algorithms that promote health misinformation (which most platforms do), and only during a national public health crisis, such as COVID-19; the exemption would not apply during “normal” times, when there is no public health crisis. Additionally, if the bill were to pass, the Department of Health and Human Services would be authorized to define “health misinformation.”

Senator Klobuchar and some of her peers believe the time has come to create an exemption to Section 230 because “for far too long, online platforms have not done enough to protect the health of Americans.” Klobuchar argues that the misinformation spread about COVID-19 and the vaccine shows that social media companies have no desire to address the problem: the misinformation drives activity on their platforms, and Section 230 shields the companies from liability for it.

Instead, these social media companies use misinformation to their advantage, building features that incentivize users to share it in pursuit of likes, comments, and other engagement, rewarding engagement rather than accuracy. Furthermore, a study conducted by MIT found that false news stories are 70% more likely to be retweeted than true stories. Social media platforms therefore have little reason to limit misinformation, especially when the activity it generates benefits them.

What are the concerns with the Health Misinformation Act?

How will the Department of Health and Human Services define “health misinformation”? It seems very difficult to craft a definition that the majority will agree upon. I also expect heavy criticism of the act from the social media companies. For instance, I can imagine them asking how they would implement a definition of “health misinformation” in their algorithms. What if the information about a health crisis changes? Would companies have to constantly update their algorithms as health guidance evolves? At the beginning of the pandemic, for example, guidance on masks shifted from masks not being necessary to masking being crucial to ensuring the health and safety of yourself and others.

Will the Bill Pass?

With that said, I do like the concept of the Health Misinformation Act: it seeks to hold social media companies accountable for their inaction while ensuring the public receives accurate health-related information. However, I do not believe this bill will pass, for a few reasons. First, it may violate the First Amendment's protection of free speech; while it is not right, it is generally not illegal for individuals to post their opinions, or even misinformation, on social media. Second, as discussed above, how would social media companies implement these new policies as the definition of “health misinformation” changes, and how would federal agencies regulate the companies?

What should be done?

“These are some of the biggest, richest companies in the world and they must do more to prevent the spread of deadly vaccine misinformation.”

I believe we need to create more regulations and more exemptions to Section 230, especially because Section 230 was enacted in 1996, and our world looks and operates very differently than it did then. Social media is now an essential part of our business and cultural worlds.

Overall, I believe more regulations need to be put into place to oversee social media companies. We need transparency from these companies so the world can understand what goes on behind their closed doors. Transparency would allow agencies to fully understand the algorithms and craft proper regulations.

To conclude, social media companies function as a monopoly: even though there are many of them, only a handful hold most of the popularity and power. Other major businesses and monopolies must follow strict government regulations, yet social media companies seem exempt from such scrutiny.

While there has been a push over the past few years to repeal or make changes to Section 230, do you think this bill can pass? If not, what can be done to create more regulations?

California Law Mandates Allowing Minors to Delete Social Media Posts

California has recently become the first state to enact a law requiring social media companies to give young users (under 18) the chance to delete posts they regret. Federal law lacks such a provision, due mainly to the argument that it would be too burdensome on social media companies. Many young social media users do not think before posting irresponsible, reputation-damaging words and pictures to the Internet. The “erase bill” was signed Monday by Governor Jerry Brown and comes into effect in January 2015.

The erase bill is lauded by many such as the founder and CEO of Common Sense Media, who stated, “[t]his puts privacy in the hands of kids, teenagers and the parents, not under the control of an anonymous tech company.” Senate leader Darrell Steinberg noted, “This is a groundbreaking protection for our kids who often act impetuously…before they think through the consequences. They deserve the right to remove this material that could haunt them for years to come.” The law also mandates that social media companies inform minors about their right to erase posts.

One blatant flaw in the legislation is that the law does not force the companies to remove the content completely from the servers. The posts thus survive in the vast cyber-sphere. However, allowing minors to retract ignorant statements and posts from the Internet seems to be a good start in the direction of future federal protection.

The article discussing this new legislation notes that pictures and posts discoverable online could ruin a young person’s ability to land a prestigious summer internship or even admittance into college. After all, employers and recruiters certainly Google young applicants, probably even before reading their applications.

The aim of this legislation is to get other states on board, and eventually to persuade Washington to construct binding law. As a graduate student without any social media, I never had to worry about the potential issues arising from regrettable social media posts. However, as we all make mistakes, especially in our teenage years, it seems appropriate to me that lawmakers would want to give minors the ability to right their wrongs in the days following such posts. I often regret words that come out of my mouth, let alone statements and/or photos that are memorialized on the Internet.

Do you think a young person’s future should be jeopardized for posting something on the Internet that reflects a moment of stupidity? We all undoubtedly must be held accountable for what we say, but shouldn’t minors get some leeway? Or should schools and companies seeking to hire these minors be privy to the potential for such misconduct? I, for one, support this type of legislation. What do you think?

 

“Erase Law” News Article