Meta AI: Innovation, but at what cost?

Artificial intelligence has become the cutting edge of technology, and to this point, nobody knows its complete capabilities. AI seems limitless. More recent advancements include social media companies developing their own AI systems to enhance the user experience, letting users generate text and images, navigate the app more easily, and more. So what's the issue? Companies like Meta are releasing the AI behind their platforms as open-source models, which can pose significant privacy risks to their users.

What is Meta & Meta AI?

Meta, formerly known as Facebook, Inc., rebranded to encompass a variety of platforms under one corporation, including well-known social networks such as Instagram and WhatsApp, which connect millions of people around the globe. Meta launched its AI assistant, "Meta AI," in April 2024. It can answer questions, generate photos, search Instagram Reels, provide emotional support, and assist with tasks like solving schoolwork problems, writing emails, and more.


Open-Source V. Closed-Source

Meta has established that its AI is an open-source model, but what's the difference? An AI model can be either open-source or closed-source. An open-source AI model means the data and software are publicly available to anyone. By sharing code and data, developers can learn from each other and continue to improve the model. Users of an open-source AI model can examine the systems they use, which promotes transparency. However, it can be difficult to regulate bad actors.

Closed-source models keep their data and software restricted to their owners and developers. By keeping code and data secret, closed-source AI companies can protect their trade secrets and prevent unauthorized access or copying. Closed-source AI, however, tends to be less innovative because third-party developers cannot contribute to future advancements of the model. It is also difficult for users to examine and audit the model, because they do not have access to the underlying data and software.

The Cost:

To train this open-source model, Meta used a variety of user data. What data exactly is Meta taking from you? Some of the more controversial categories include: content that users create, messages sent and received that aren't end-to-end encrypted, users' engagement with posts, purchases made through Meta, contact information, device information, GPS location, IP address, and cookie data. All of this, according to Meta's privacy policy, is permitted for its use. Meta discloses in its privacy policy that it "may share certain information about you that is processed when using the AI's with third parties who help us provide you with more relevant or useful responses." This includes personal information.

By committing to open-sourcing its AI, Meta poses a significant privacy risk to its users. Meta has already noted that it may share personal information with third parties in certain situations, but outside developers also have the opportunity to expose vulnerabilities in the model by reverse-engineering the code to extract the data the model was trained on, which in Meta's case can include the personal information of the users whose data trained it. Additionally, third parties now have access to a wide variety of consumer information without consumers giving them direct consent. Companies can then use this information to their commercial advantage.

Meta has stated that it has taken exemplary steps to protect its users' data from third parties, including developing third-party oversight and management programs intended to mitigate risk. To note, however, Facebook has been breached on more than one occasion, most notably in the Cambridge Analytica scandal, in which Cambridge Analytica harvested the personal information of more than 10 million Facebook users for voter profiling and targeting.

Innovative:

Upon release, users raised privacy concerns because Meta's AI model was open-source. Mark Zuckerberg, CEO of Meta, issued a public statement highlighting the benefits of the model being open-source. To summarize:

  1. Open-source AI is good for developers because it gives them the technological freedom to control the software, and open-source models are developing at a faster rate than closed models.
  2. The model will allow Meta to remain competitive, letting the company spend more money on research.
  3. Being open-source gives the world an opportunity for economic growth and better security for everyone, because it will allow Meta to be at the forefront of AI advancement.

Effectively, Meta's open-source model is beneficial for ensuring the company's continued technological achievement.


What Users Can Do:

In reality, it is difficult to protect open-source AI from bad actors. Therefore, governmental action is needed to keep users' personal data from being exploited. Recently, 12 states have taken the initiative to protect users. For example, California amended the CCPA to protect users' personal information from being used to train AI models, requiring that users affirmatively authorize the use of their information; otherwise, such use is prohibited. As for the rest of the nation, there is little to no state or federal regulation of users' privacy. The American Data Privacy and Protection Act failed to pass Congress, leaving millions of people defenseless.

For users looking to stop Meta from using their data, there is no opt-out button in the United States. However, according to Meta, depending on a user's settings, a photo or post can be excluded from future use by making it private. Unfortunately, this is not retroactive, and previously collected data will not be removed from the model.

While Meta looks to be at the forefront of AI, its open-source model poses serious privacy and security risks for users due to a lack of regulation and questionable protections.

Don’t Talk to Strangers! But if it’s Online, it’s Okay?

It is 2010.  You are in middle school and your parents let your best friend come over on a Friday night.  You gossip, talk about crushes, and go on all social media sites.  You decide to try the latest one, Omegle.  You automatically get paired with a stranger to talk to and video chat with.  You speak to a few random people, and then, with the next click, a stranger’s genitalia are on your screen.

Stranger Danger

Omegle is a free video-chatting social media platform.  Its primary function has become meeting new people and arranging “online sexual rendezvous.”  Registration is not required.  Omegle randomly pairs users for one-on-one video sessions.  These sessions are anonymous, and you can skip to a new person at any time.  Although there is a large warning on the home screen saying “you must be 18 or older to use Omegle”, no parental controls are available through the platform.  Should you want to install any parental controls, you must use a separate commercial program.

While the platform's community guidelines illustrate the "dos and don'ts" of the site, it seems questionable whether the platform can monitor millions of users, especially when users are not required to sign up or to agree to any of Omegle's terms and conditions. It therefore seems that this site could harbor online predators, raising quite a few issues.

One recent case surrounding Omegle involved a preteen who was sexually abused, harassed, and blackmailed into sending a sexual predator obscene content. In A.M. v. Omegle.com LLC, Omegle's open design matched an 11-year-old girl with a sexual predator in his late thirties. Exploiting her vulnerability, he forced the 11-year-old to send pornographic images and videos of herself, perform for him and other predators, and recruit other minors. The predator was able to continue this horrific crime for three years by threatening to release the videos, pictures, and other content publicly. The 11-year-old plaintiff sued Omegle on two general claims of platform liability implicating Section 230, but only one claim was able to break through the law.

Unlimited Immunity Cards!

Under 47 U.S.C. § 230 (Section 230), social media platforms are immune from liability for content posted by third parties. As part of the Communications Decency Act of 1996, Section 230 provides almost complete protection against lawsuits for social media companies, since no platform is treated as the publisher or speaker of user-generated content posted on the site. Courts have gone so far as to find Google and Twitter immune from liability for claims that their platforms were used to aid terrorist activities. In May 2023, the Supreme Court decided these cases. Although the Court declined to reach the Section 230 question in the Google case, it ruled on the Twitter case. Google was not held liable on the claim that it stimulated the growth of ISIS through targeted recommendations and inspired an attack that killed an American student. Twitter was immune from the claim that the platform aided and abetted a terrorist group in raising funds and recruiting members for a terrorist attack.

Wiping the Slate

In February 2023, the U.S. District Court for the District of Oregon, Portland Division, found that Section 230 immunity did not apply to Omegle on a products liability claim, allowing the platform to be held liable for the predatory actions committed by the third party on the site. By side-stepping the third-party speech issue that comes with Section 230 immunity for an online publisher, the district court found Omegle answerable under the plaintiff's products liability claim, which targeted the platform's defective design, defective warning, negligent design, and failure to warn.

Three prongs must be satisfied to preclude a platform from liability under Section 230:

  1. The defendant is a provider of an interactive computer service,
  2. Whom the plaintiff seeks to treat as a publisher or speaker, and
  3. For information provided by a third party.

It is clear that Omegle is an interactive computer service that fits the definition provided by Section 230. The issue then falls on the second and third prongs: whether the cause of action treated Omegle as the publisher or speaker of third-party content. The very function of randomly pairing strangers creates the foreseeable danger of pairing a minor with an adult. As shown in the present case, "the function occurs before the content occurs." Because the platform was designed negligently and with knowing disregard for the possibility of harm, the court ultimately concluded that liability for the platform's function does not pertain to third-party published content and that the claim targeted specific functions rather than users' speech on the platform. Section 230 immunity did not apply to this first claim, and Omegle could be held liable.

Not MY Speech

The plaintiff's last claim implicating Section 230 immunity was that Omegle negligently failed to take reasonable precautions to provide a safe platform. There was a foreseeable risk of harm in marketing the service to children and adults and randomly pairing them. Unlike the products liability claim, the negligence claim was twofold: the function of matching people and the publishing of their communications to each other, both of which fall directly within Section 230's immunity domain. The Oregon district court drew a distinct line between the two claims, so while Omegle was immune under Section 230 from the negligence claim, it remained exposed to liability on the products liability claim.

If You Cannot Get In Through the Front Door, Try the Back Door!

For almost 30 years, social media platforms have been nearly immune from liability under Section 230. In the last few years, with the growth of technology on these platforms, judges have been trying to find loopholes in the law to hold companies liable. A.M. v. Omegle has only just moved through the district court level. If appealed, it will be an interesting case to follow, to see whether the ruling will stand or be overturned in light of the other cases that have been decided.

How do you think a higher court will rule on issues like these?

Destroying Defamation

The explosion of Fake News spread across social media sites is destroying a plaintiff's ability to succeed in a defamation action. The recent proliferation of rushed journalism, online conspiracy theories, and the belief that most stories are, in fact, "Fake News" has created a desert of veracity. Widespread public skepticism about even the most mainstream social media reporting means plaintiffs struggle to convince jurors that third parties believed any reported statement to be true. Such proof is necessary for a plaintiff to establish the elements of defamation.

Fake News Today

Fake News is any journalistic story that knowingly and intentionally includes untrue factual statements. Today, many use "Fake News" as a noun in its own right. There is no shortage of examples of Fake News and its impact.

      • Pizzagate: During the 2016 Presidential Election, Edgar Maddison Welch, 28, read a story on (then) Facebook claiming that Hillary Clinton was running a child trafficking ring out of the basement of a pizzeria. Welch, a self-described vigilante, shot open a locked door of the pizzeria with his AR-15.
      • A study by three MIT scholars found that false news stories spread faster on Twitter than true stories, with the former being 70% more likely to be retweeted than the latter.
      • During the defamation trial of Amber Heard and Johnny Depp, a considerable number of "Fake News" reports circulated across social media platforms, particularly TikTok, Twitter, and YouTube, attacking Ms. Heard at a disproportionately higher rate than Mr. Depp.


What is Defamation?

To establish defamation, a plaintiff must show the defendant published a false assertion of fact that damages the plaintiff’s reputation. Hyperbolic language or other indications that a statement was not meant to be taken seriously are not actionable. Today’s understanding that everything on the Internet is susceptible to manipulation destroys defamation.

Because whether a statement is fact or opinion is a question of law, a plaintiff must first convince a judge that the offending statement is fact and not opinion. Courts often find that Internet and social media statements are hyperbole or opinion. If a plaintiff succeeds in persuading the judge, then the issue of whether the statement defamed the plaintiff heads to the jury. A jury faced with a defamation claim must determine whether the statement of fact harmed the plaintiff's reputation or livelihood to the extent that it caused the plaintiff to incur damages. The prevalence of Fake News creates another layer of difficulty for the Internet plaintiff, who must convince the jury that third parties believed the statement to be true.

Defamation’s Slow and Steady Erosion

Since the 1960s, the judiciary has limited plaintiffs' ability to succeed in defamation claims. The decisions in New York Times v. Sullivan and Gertz increased the difficulty for public figures, and those with limited public-figure status, to succeed by requiring them to prove that the defendant acted with actual malice, a standard higher than the mere negligence standard that applies to individuals who are not of community interest.

The rise of Internet use, mainly social media, presents plaintiffs with yet another hurdle. Plaintiffs can only succeed if the challenged statement is fact, not opinion. However, judges frequently find that statements made on the Internet are opinions, not facts. The combined effect of Supreme Court limitations on proof and the increased belief that social media posts are mostly opinion has limited the plaintiff's ability to succeed in a defamation claim.

Destroying Defamation

If the Supreme Court and social media have eroded defamation, Fake News has destroyed it. Today, convincing a jury that a false statement purporting to be fact has defamed a plaintiff is difficult, given the dual issues of society's pervasive mistrust of the media and the understanding that information on the Internet is generally opinion, not fact. Fake News sows confusion and makes it almost impossible for jurors to believe any statement has the credibility necessary to cause harm.

To be clear, in some instances, Fake News is so intolerable that a jury will find for the plaintiff. A Connecticut jury found conspiracy theorist Alex Jones liable for defamation based on his assertion that the government had faked the Sandy Hook shootings.

But plaintiffs are often unsuccessful where the challenged language is conflated with the general flood of untruths. Fox News successfully defended itself against a lawsuit claiming that it had aired false and deceptive content about the coronavirus, even though its reporting was, in fact, untrue.

Similarly, a federal judge dismissed a defamation case against Fox News over Tucker Carlson's report that the plaintiff had extorted then-President Donald Trump. In reaching this conclusion, the judge observed that Carlson's comments were rhetorical hyperbole and that the reasonable viewer "arrive[s] with the appropriate amount of skepticism." Reports of media success in defending against defamation claims further fuel media mistrust.

The current polarization caused by identity politics is furthering the tendency for Americans to mistrust the media. Sarah Palin announced that the goal of her recent defamation case against The New York Times was to reveal that the “lamestream media” publishes “fake news.”

If jurors believe that no reasonable person could credit a challenged statement as accurate, they cannot find that the allegedly defamatory statement caused harm. An essential element of defamation is that the defendant's remarks damaged the plaintiff's reputation. The large number of people who believe the news is fake, the media's rush to publish, and external attacks on credible journalism have eroded society's shared sense of truth. The potential for defamatory harm is minimal when every news story is questionable. Ultimately, the presence of Fake News is a blight on the tort of defamation and, like the credibility of present-day news organizations, will erode it to the point of irrelevance.

Is there any hope for a world without Fake News?


Social Media Has Gone Wild

Increasing technological advances and consumer demand have taken shopping to a new level. You can now buy clothes, food, and household items from the comfort of your couch, in just a few clicks: add to cart, pay, ship, and confirm. No longer are you limited to products sold in nearby stores; shipping makes it possible to obtain items internationally. Even social media platforms have shopping features for users, such as Instagram Shopping, Facebook Marketplace, and WhatsApp. Despite its convenience, online shopping has also created an illegal marketplace for wildlife species and products.

Most trafficked animal: the pangolin

Wildlife trafficking is the illegal trading or sale of wildlife species and their products. Elephant ivory, rhinoceros horns, turtle shells, pangolin scales, tiger furs, and shark fins are a few examples of highly sought-after wildlife products. As social media platforms expand, so does wildlife trafficking.

Wildlife Trafficking Exists on Social Media?

Social media platforms make it easier for people to connect with others internationally. These platforms are great for staying in contact with distant aunts and uncles, but they also create another avenue for criminals and traffickers to communicate. They provide a way to remain anonymous without having to meet in person, which makes it harder for law enforcement to identify a user's true identity. Even so, can social media platforms be held responsible for making it easier for criminals to commit wildlife trafficking crimes?

Thanks to Section 230 of the Communications Decency Act, the answer is most likely: no.

Section 230 provides broad immunity to websites for content posted by third-party users. Even when a user posts illegal content on a website, the website cannot be held liable for that content. However, there are certain exceptions where websites have no immunity, including human and sex trafficking. Although these carve-outs are fairly new, it is clear that there is an interest in protecting people vulnerable to abuse.

So why don't we apply the same logic to animals? Animals are also a vulnerable population. Many species are no match for guns, weapons, traps, and human encroachment on their natural habitats. Like children, animals may not have the ability to understand what trafficking is or the physical strength to fight back. Social media platforms like Facebook attempt to combat the online wildlife trade, but their efforts continue to fall short.

How is Social Media Fighting Back?


In 2018, the World Wildlife Fund and 21 tech companies created the Coalition to End Wildlife Trafficking Online. The goal was to reduce illegal online trade by 80% by 2020. While it is difficult to measure whether this goal has been met, some social media platforms have created new policies to help meet it.

“We’re delighted to join the coalition to end wildlife trafficking online today. TikTok is a space for creative expression and content promoting wildlife trafficking is strictly prohibited. We look forward to partnering with the coalition and its members as we work together to share intelligence and best-practices to help protect endangered species.”

Luc Adenot, Global Policy Lead, Illegal Activities & Regulated Goods, TikTok

In 2019, Facebook banned the sale of animals altogether on its platform. But this did not stop users. A 2020 report showed that a variety of illegal wildlife was still for sale on Facebook, which clearly shows the new policies were ineffective. Furthermore, the report stated:

“29% of pages containing illegal wildlife for sale were found through the ‘Related Pages’ feature.”

This suggests that Facebook's algorithm purposefully connects users to pages and similar content based on a user's interests. These algorithms give traffickers every incentive to keep relying on wildlife trafficking content. They will continue to use social media platforms because the platforms do half of the work for them:

      • Facilitating communication
      • Connecting users to potential buyers
      • Connecting users to other sellers
      • Discovering online chat groups
      • Discovering online community pages

This does not reduce wildlife trafficking outreach. Instead, it accelerates the visibility of this type of content to other users. Do Facebook's algorithms go beyond Section 230 immunity?

Under these circumstances, Facebook maintains immunity. In Gonzalez v. Google LLC, the court explained that websites are not liable for user content when they employ content-neutral algorithms. This means the website did nothing more than program an algorithm to present content similar to a user's interests. The website did not directly encourage the publication of illegal content, nor did it treat that content differently from other user content.
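To picture what "content-neutral" means in practice, here is a minimal, hypothetical Python sketch. The page names, tags, and interests are invented for illustration; this is not Facebook's actual system. The point is that the ranking rule scores every page the same way, by overlap with the user's interests, so nothing in the logic singles out any category of content for special treatment.

    # Illustrative sketch only: a toy "content-neutral" recommender of the kind
    # described in Gonzalez v. Google. All names and data are hypothetical.
    def recommend(user_interests, pages, top_n=3):
        """Rank pages purely by overlap with the user's interests.

        The scoring rule is identical for every page: nothing about a page's
        subject matter (legal or illegal) changes how it is ranked.
        """
        def score(page):
            return len(set(page["tags"]) & set(user_interests))
        return sorted(pages, key=score, reverse=True)[:top_n]

    pages = [
        {"name": "Aquarium Hobbyists", "tags": ["fish", "coral", "aquarium"]},
        {"name": "Exotic Pets Trade", "tags": ["exotic", "animals", "sale"]},
        {"name": "Hiking Club", "tags": ["outdoors", "trails"]},
    ]
    # A user interested in aquariums is shown aquarium-adjacent pages first,
    # whatever those pages happen to contain.
    print(recommend(["coral", "aquarium", "sale"], pages))

If the same neutral rule happens to surface an illegal listing, the reasoning in Gonzalez suggests the platform keeps its Section 230 immunity.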

What about when a website profits from illegal posts? Facebook receives a 5% selling fee for each shipment sold by a user. Since illegal wildlife products are rare, these transactions are highly profitable. A pound of ivory can be worth up to $3,300. If a user sells five pounds of ivory from endangered elephants on Facebook, the platform would profit $825 from one transaction. The Facebook Marketplace algorithm is similar to the algorithm based on user interest and engagement. Here, Facebook’s algorithm can push illegal wildlife products to a user who has searched for similar products. Yet, if illegal products are constantly pushed and successful sales are made, Facebook then benefits and makes a profit off these transactions. Does this mean that Section 230 will continue to protect Facebook when it profits from illegal activity?
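As a quick arithmetic check of the figures above, using values assumed from this post ($3,300 per pound of ivory, five pounds sold, and a 5% Marketplace selling fee), a short hypothetical calculation:

    # Hypothetical check of the fee math above (values assumed from the text).
    price_per_pound = 3300   # dollars, upper-end ivory value cited in the post
    pounds_sold = 5
    fee_rate = 0.05          # Facebook Marketplace selling fee cited in the post

    sale_total = price_per_pound * pounds_sold   # 16,500
    platform_fee = sale_total * fee_rate         # 825.0
    print(f"Sale total: ${sale_total:,}; platform fee: ${platform_fee:,.0f}")

At those numbers, the sale totals $16,500 and the platform's cut is $825, matching the figure above.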

Evading Detection

Even with Facebook's prohibited-sales policy, users get creative to avoid detection. A simple search for "animals for sale" led me to a public Facebook group. Within 30 seconds of scrolling, I found one user selling live coral and another selling an aquarium system with live coral and live fish. The former listing reads: "Leather $50." However, the picture shows live coral in a fish tank; "Leather" identifies the type of coral without ever saying it is coral. Even if this were fake coral, a simple Google search shows a piece of fake coral is worth less than $50. If Facebook is failing to prevent users from selling live coral and live fish, it is most likely failing to prevent online wildlife trafficking on its platform.

Another common method used to evade detection is posting a vague description or a photo of an item along with the words "pm me" or "dm me," abbreviations for "private message me" and "direct message me." It is a quick way to direct interested users to reach out personally and discuss details in a private chat, away from the leering public eye. Sometimes a user will offer alternative contact methods, such as a personal phone number or an email address, moving the interaction off the platform or onto a new one.

Because these transactions are highly profitable and can be conducted anonymously online, the stakes for traffickers are low. Social media platforms are great for concealing a user's identity. Users can adopt fake names to stay anonymous behind their computer and phone screens, and there are no real consequences for doing so when the user is unknown. Nor is there any type of identity verification to discover a user's true identity. Even if a user is banned, that person can create a new account under a different alias. Some users are criminals tied to organized crime syndicates or terrorist groups, and many operate overseas, outside the United States, which makes them difficult to locate. Thus, social media platforms incentivize criminals to hide behind various aliases with little to lose.

Why Are Wildlife Products Popular?

Wildlife products are in high demand for human benefit and use. Common reasons why humans value wildlife products include traditional medicine, food delicacies, luxury goods and decoration, status symbols, and the exotic pet trade.

Do We Go After the Traffickers or the Social Media Platform?

Taking down every single wildlife trafficker, along with every user who facilitates these transactions, would be the perfect solution to end wildlife trafficking. Realistically, it is too difficult to identify these users due to online anonymity and geographical limitations. Social media platforms, on the other hand, continue to tolerate these illegal activities.

Here, it is clear that Facebook is not doing enough to stop wildlife trafficking. With each sale made on Facebook, Facebook receives a percentage. Section 230 should not protect Facebook when it reaps the benefits of illegal transactions. That goes a step too far and should open Facebook up to liability.

Should Facebook maintain Section 230 immunity when it receives proceeds from illegal wildlife trafficking transactions? Where do we draw the line?
