The New Face of Misinformation: Deepfakes and Their Legal Challenges

What if I told you that the video of a celebrity endorsing a product or a politician delivering a shocking confession wasn’t real? That’s the unsettling reality deepfakes have created, where seeing and hearing are no longer believing. Deepfakes are a double-edged sword: they can be used for entertainment and creative expression, but they also threaten privacy, spread misinformation, and enable manipulation. In a 2022 iProov study, 71% of global respondents were unaware of what deepfakes were, yet 57% believed they could distinguish between real videos and deepfakes. This gap in knowledge is alarming, considering deepfake technology can create entirely convincing yet entirely fictional representations of public figures.

What Are Deepfakes, and Why Are They Problematic?

A deepfake is a highly realistic image, audio clip, or video generated by artificial intelligence, using deep learning models, to make someone appear to say or do things they never did. Think face swaps, cloned voices, or AI-generated speeches. Deepfakes have been used to defame individuals, including public figures, by falsely attributing statements to them, and to create non-consensual explicit content, such as deepfake pornography.

A prime example came in January 2024, when sexually explicit deepfake images of Taylor Swift flooded social media, forcing platforms like X (formerly Twitter) to take drastic action by removing content and banning the accounts involved. The episode raised questions about social media companies’ responsibility for controlling the spread of harmful AI-generated content.

Legal Ramifications: Are Deepfakes Protected by the First Amendment?

Deepfakes exist in a legal gray area. Defamation and copyright law can sometimes address misuse of AI, but they do not fully account for the complexities of rapidly evolving AI-generated content. This creates tension with constitutional rights, particularly the First Amendment’s protection of free speech. Courts now face difficult decisions: whether and how to punish deepfake creators, and where to draw the line between free speech and harmful content. The resulting legal ambiguity has complicated federal regulation, prompting some states to take action on their own.

The DEEPFAKES Accountability Act: A Federal Response

While no comprehensive federal regulation of deepfakes currently exists, the DEEPFAKES Accountability Act provides a framework for future legislation. Initially introduced in 2019 and reintroduced by Representative Yvette Clarke in 2023, the bill requires clear disclosures identifying AI-generated content, gives victims the right to seek damages, and introduces criminal penalties for malicious use of deepfakes. It specifically targets harmful applications such as spreading disinformation, manipulating elections, and creating non-consensual explicit content.

If enacted, the law would empower agencies like the FTC and DOJ to enforce these regulations. Balancing protection for victims against free speech rights, however, presents a challenge: courts would likely need to evaluate each case individually, carefully weighing the totality of the circumstances surrounding the use of deepfake technology. Deepfakes also pose significant challenges within legal practice itself, particularly regarding the authenticity of evidence.

The “Deepfake Defense” – How Deepfakes Impact the Legal System

Deepfakes present evidentiary challenges in courtrooms: lawyers and defendants can now argue that incriminating videos are fabricated, and in recent high-profile cases defendants have cast doubt on the authenticity of video evidence by claiming it was AI-manipulated. Judge Herbert B. Dixon Jr. coined the term “deepfake defense” for this tactic and discusses its implications in his article, “The Deepfake Defense: An Evidentiary Conundrum.” Because deepfakes are videos created or altered with the aid of AI, Judge Dixon notes, courts increasingly face situations where determining the authenticity of evidence is difficult.

This defense has already surfaced in high-profile cases. Attorneys for defendants charged with storming the Capitol on January 6, 2021, argued that the jury could not trust video evidence showing their clients at the riot because there was no assurance the footage was real or unaltered. Courts are increasingly forced to rely on experts to authenticate digital evidence, analyzing factors such as eye movements, lighting, and speech patterns; yet as deepfakes become more advanced, even these experts struggle to detect them. Experts have proposed amendments to the Federal Rules of Evidence to clarify the parties’ responsibilities in authenticating evidence. Without a consistent approach, the risk of misjudgment remains, and the resulting disputes prolong trials, undermining judicial economy and wasting valuable resources. These challenges highlight the urgent need for legal systems to adapt to the modern realities of AI technology.

What’s Next: A New Era of Regulation and Responsibility?

Deepfakes force us to rethink the role of social media platforms in preventing the spread of misinformation. Platforms like Instagram, X, and Facebook have started labeling manipulated media, but should these warnings be mandatory? Should platforms be held liable for the spread of deepfakes? These are critical questions that legislators and tech companies must address. Moving forward, balancing regulation and innovation will be key. Policymakers must develop laws that protect victims without stifling technological progress. At the same time, raising public awareness about the risks of deepfakes is essential.

In a world where deepfake technology is becoming increasingly accessible, trust in digital media is more fragile than ever. Laws and policies are starting to catch up, but it will take time to fully address the legal challenges deepfakes present. Until then, the best defense is an informed public, aware of the power and pitfalls of artificial intelligence.


Social Media Got Me Fired!

Have you ever wondered whether, in the age of social media, employers look you up, and whether what they find affects your chances of getting the job? Read on to find out!

Employers look at your social media profiles to learn whatever they can about you before interviewing or hiring you. A 2022 Harris Poll found that 70% of surveyed employers would screen potential employees’ social media profiles before offering them a position, and a CareerBuilder poll found that 54% of employers had ruled out a candidate after discovering something they disagreed with on the candidate’s social media profile.

Pre-employment background checks now go beyond criminal records, public records, and employment history. If hiring managers can’t find you online, there is an increased chance they will not move forward with your application; in fact, 21% of employers polled said they are unlikely to consider a candidate who has no social media presence.

However, don’t fret: social media can also be why you get your next job. The Aberdeen Group found that 73% of job seekers between 18 and 34 obtained their last job through social media. Job seekers can hunt across countless platforms, including LinkedIn, Stack Overflow, GitHub, Facebook, and TikTok, and a CareerBuilder survey found that 44% of hiring managers and employers have discovered content on a candidate’s social media profile that caused them to hire the candidate. This shift in hiring and recruitment means employers must engage the newer generation of the workforce through competitive social media advertising. Job seekers and employers alike use their social media profiles for networking, sourcing, and building recognition.

So is social media a double-edged sword? Having social media can lessen your chances of being employed, yet many jobs are posted on social media, and your presence there can be the reason you get hired. Use platforms like LinkedIn to promote yourself, and stay active at least once a week; employers are interested in how you use your social media.

As for Facebook, Instagram, and TikTok, keep them neutral and clean. Set your accounts to private, and before you post a picture, ask yourself whether you would be comfortable with the CEO or your boss seeing it. If the answer is yes, go ahead and post; if you are unsure, the best bet is not to post it.

We walk a fine line in the age of social media, and the do’s and don’ts vary depending on your job and field. Someone working for Google will have a different social media presence than someone working for the Prosecutor’s Office. Once you are hired, you can always turn to your employee handbook, or ask HR, to be on the safe side.

Harry Kazakian wrote in an article for Forbes that he screens potential employees’ social media to eliminate potential risks and ensure harmony in the workplace. Specifically, Kazakian looks to avoid candidates who post constant negative content, patterns of overt anger, suggestions of violence, associations with questionable characters, signs of crass behavior, or even too many political posts.

Legally, employers may use social media to recruit candidates by advertising job openings, or to perform background checks confirming that an applicant is qualified. Employers may also monitor your website activity, e-mail account, and instant messages. This right, however, cannot be used as a means of discrimination.

Half of the states in the US have enacted laws barring employers from accessing employees’ social media accounts. California prohibits employers from asking current or prospective employees for their social media passwords. Maryland, Virginia, and Illinois protect job seekers from having to divulge their social media passwords or provide account access. California, Illinois, New Jersey, and New York, among other states, have enacted laws prohibiting employers from discriminating based on an employee’s lawful off-duty conduct.

Federal law prohibits employers from discriminating against a prospective or current employee based on information on the employee’s social media relating to race, color, national origin, gender, age, disability, or immigration and citizenship status. Employees should still be conscious of what information they display on social media. Note, however, that these federal protections apply only to employers of a specified size: Title VII, the ADA, and GINA apply to private employers, educational institutions, and state and local governments with 15 or more employees, while the ADEA applies to employers with 20 or more employees.

California, Colorado, Connecticut, Illinois, Minnesota, Nevada, New York, North Dakota, and Tennessee all have laws that prohibit employers from firing an employee for engaging in lawful activity off the employer’s premises during nonworking hours, even if the employer finds that activity unwelcome, objectionable, or unacceptable, so long as it does not directly conflict with the employer’s essential business-related interests. Courts in these states will, however, weigh the employee’s protections against the employer’s business interests; if a court rules that the employer’s interests outweigh the employee’s privacy concerns, the employer is exempt from the law. Be aware that some of these laws also provide explicit exemptions for employers.

Employers who use social media to screen job applicants risk employment discrimination claims under Title VII of the Civil Rights Act, the Age Discrimination in Employment Act, the Equal Pay Act, and Title II of the Genetic Information Nondiscrimination Act, along with enforcement by the Equal Employment Opportunity Commission. Section 8(a)(3) of the National Labor Relations Act prohibits discrimination against applicants based on union affiliation or support, so using social media to screen out applicants on that basis may lead to an unfair labor practice charge against the company.

A critical case decided in 2016, Hardin v. Dadlani, involved a hiring manager who had previously shown a preference for white female employees and who instructed an employee to look up an applicant on Facebook and invite her for an interview “if she looks good.” The Court ruled that this statement could reasonably be construed to refer to the applicant’s race, which can establish discriminatory animus. Discriminatory animus, the intent to discriminate, may be proved by either direct or circumstantial evidence. Direct evidence is evidence that, if true, proves discriminatory animus without inference or presumption; even a single remark can show it.

Be an intelligent job candidate by knowing your rights. Companies that perform third-party social media background checks are considered consumer reporting agencies under the Fair Credit Reporting Act, so employers who use them must comply with its disclosure and authorization requirements: the employer must notify the prospective or current employee that it wants to acquire a consumer report for employment purposes and must obtain the employee’s written consent.

Happy job hunting, and think before you post!
