Meta AI: Innovation, but at what cost?

Artificial intelligence has become the cutting edge of technology, and to this point, nobody knows its complete capabilities. The most recent advancements include social media companies developing their own AI systems to enhance the user experience, letting users generate text and images, navigate the app with assistance, and more. So what's the issue? Companies like Meta are releasing the AI behind their platforms as open-source models, which can pose significant privacy risks to their users.

What is Meta & Meta AI?

Meta, formerly known as Facebook, Inc., rebranded to encompass a variety of platforms under one corporation, including widely used social networks such as Instagram and WhatsApp, which connect millions of people around the globe. Meta launched its AI platform, "Meta AI," in April of 2024. It can answer questions, generate photos, search Instagram Reels, provide emotional support, and assist with tasks like solving schoolwork problems and writing emails.


Open-Source V. Closed-Source

Meta has established that its AI is an open-source model, but what's the difference? An AI model can be either open source or closed source. An open-source AI model means the software, and sometimes the data, are publicly available to anyone. By sharing code and data, developers can learn from each other and continue to improve the model. Users of an open-source AI model can examine the systems they use, which promotes transparency. However, it can be difficult to keep bad actors in check.

Closed-source models keep their data and software restricted to their owners and developers. By keeping code and data secret, closed-source AI companies can protect their trade secrets and prevent unauthorized access or copying. Closed-source AI, however, tends to be less innovative, as third-party developers cannot contribute to future advancements of the model. It is also difficult for users to examine and audit the model, because they do not have access to the training data or the software.

The Cost:

In order to train this open-source model, Meta used a wide variety of user data. What data, exactly, is Meta taking from you? Some of the more controversial categories include: content that users create; messages users send and receive that aren't end-to-end encrypted; users' engagement with posts; purchases users make through Meta; contact information; device information; GPS location; IP address; and cookie data. According to Meta's privacy policy, all of this is permitted for its use. Meta also discloses in its privacy policy that "Meta may share certain information about you that is processed when using the AI's with third parties who help us provide you with more relevant or useful responses." This includes personal information.

By committing to open-sourcing its AI, Meta poses a serious privacy risk to its users. Meta has already noted that it may share personal information with third parties in certain situations, but beyond that, outside developers have the opportunity to expose vulnerabilities in the model by reverse-engineering it to extract the data it was trained on, which in Meta's case can include users' personal information. Additionally, third parties will now have access to a wide variety of consumer information without consumers giving them direct consent. Companies can then use this information to their commercial advantage.

Meta has stated that it has taken exemplary steps to protect its users' data from third parties, including the development of third-party oversight and management programs intended to mitigate risk. It is worth noting, however, that Facebook has been breached on more than one occasion, most notably in the Cambridge Analytica scandal, in which Cambridge Analytica harvested the personal information of more than 10 million Facebook users for voter profiling and targeting.

The Innovation:

Upon release, users raised privacy concerns because Meta's AI model was open source. Mark Zuckerberg, CEO of Meta, issued a public statement highlighting the benefits of an open-source model, which can be summarized as follows:

  1. Open-source AI is good for developers because it gives them the technological freedom to control the software, and open-source models are developing at a faster rate than closed models.
  2. The model will allow Meta to remain competitive, freeing it to spend more money on research.
  3. Being open source gives the world an opportunity for economic growth and better security for everyone, because it will allow Meta to be at the forefront of AI advancement.

Effectively, Meta's open-source model is beneficial for ensuring consistent technological achievement for the company.


What Users Can Do:

In reality, it is difficult to keep open-source AI out of the hands of bad actors. Therefore, governmental action is needed to protect users' personal data from exploitation. Recently, 12 states have taken the initiative to protect users. For example, the State of California amended the CCPA to protect users' personal information from being used to train AI models, requiring that users affirmatively authorize the use of their information; otherwise, it is prohibited. As for the rest of the nation, there is little to no state or federal regulation of users' privacy. The American Data Privacy and Protection Act failed to pass a congressional vote, leaving millions of people defenseless.

For users looking to stop Meta from using their data, there is no opt-out button anywhere in the United States. However, according to Meta, depending on a user's settings, a photo or post can be kept out of training by making it private. Unfortunately, this is not retroactive, and previously collected data will not be removed from the model.

While Meta looks to be at the forefront of AI, its open-source model poses serious security risks for its users due to a lack of regulation and questionable protections.
