The #Trademarkability of #Hashtags

The #hashtag is an important marketing tool that has revolutionized how companies conduct business. Essentially, hashtags identify or facilitate a search for a keyword or topic of interest: a user types a pound sign (#) followed by a word or phrase (e.g., #OOTD or #Kony2012). Placing a hashtag at the beginning of a word or phrase on Twitter, Instagram, Facebook, TikTok, etc., turns the word or phrase into a hyperlink that links it to other related posts, thus driving traffic to users’ pages. This is a great way to promote a product, service, or campaign while simultaneously reducing marketing costs and increasing brand loyalty, customer engagement, and, of course, sales. But with the rise of this digital “sharing” tool comes a new wave of intellectual property challenges. Over the years, there has been increasing interest in including hashtags in trademark applications.

#ToRegisterOrNotToRegister

According to the United States Patent and Trademark Office (USPTO), a term containing the hash symbol or the word “hashtag” MAY be registered as a trademark. The USPTO recognizes hashtags as registrable trademarks “only if [the mark] functions as an identifier of the source of the applicant’s goods or services.” Additionally, Section 1202.18 of the Trademark Manual of Examining Procedure (TMEP) further explains that “when examining a proposed mark containing the hash symbol, careful consideration should be given to the overall context of the mark, the placement of the hash symbol in the mark, the identified goods and services, and the specimen of use, if available. If the hash symbol immediately precedes numbers in a mark, or is used merely as the pound or number symbol in a mark, such marks should not necessarily be construed as hashtag marks. This determination should be made on a case-by-case basis.”

Like other forms of trademarks, one would seek registration of a hashtag in order to exclude others from using the mark when selling or offering the goods or services listed in the registration. More importantly, the trademark would serve to protect against consumer confusion. This is the same standard applied to other words, phrases, or symbols seeking trademark registration. The threshold question when considering whether to file a trademark application for a hashtag is whether the hashtag is a source identifier for goods or services, or whether it merely describes a particular topic, movement, or idea.

#BarsToRegistration

Merely affixing a hashtag to a mark does not automatically make it registrable. For example, in 2019, the Trademark Trial and Appeal Board (TTAB) denied trademark registration for #MAGICNUMBER108 because it did not function as a trademark for shirts and was therefore not a source identifier. Rather, the TTAB found that the social media evidence suggested that the public sees the hashtag as a “widely used message to convey information about the Chicago Cubs baseball team,” namely, their 2016 World Series win after a 108-year drought. The TTAB went on to say that just because a mark is unique does not mean that the public would perceive it as an indication of source. This further demonstrates the importance of a goods-source association for the mark.

Hashtags that would not function as trademarks are those simply relating to certain topics that are not associated with any goods or services. So, for example, cooking: #dinnersfortwo, #mealprep, or #healthylunches. These hashtags would likely be searched by users to find information relating to cooking or recipe ideas. When encountering these hashtags on social media, users would probably not link them to a specific brand or product. On the contrary, hashtags like #TheSaladLab or #ChefCuso would likely be linked to specific social media influencers who use that mark in connection with their goods and services and as such, could function as a trademark. Other examples of hashtags that would likely function as trademarks are brands themselves (#sephora, #prada, or #nike). Even slogans for popular brands would suffice (#justdoit, #americarunsondunkin, or #snapcracklepop).

#Infringement

What makes trademarked hashtags unique from other forms of trademarked material is that hashtags serve a purpose beyond identifying the source of goods: they are used to index keywords on social media so that users can follow topics they are interested in. So, does that mean that using a trademarked hashtag in your social media post will create a cause of action for trademark infringement? The answer to this question is every lawyer’s favorite response: it depends. Sticking with the example above, assuming #TheSaladLab is a registered trademark, referencing the tag in this blog post alone would likely not warrant a trademark infringement claim, but if I were to sell kitchen tools or recipe books with the tag #TheSaladLab, that might rise to the level of infringement. However, courts have yet to provide clear guidance on the enforceability of hashtagged marks. In 2013, a Mississippi district court stated in an order that “hashtagging a competitor’s name or product in social media posts could, in certain circumstances, deceive consumers.” The court never actually ruled on whether the use of the hashtag infringed the registered mark.

This is problematic because, on one hand, regardless of whether there is a hashtag in front of the mark, the owner of a registered trademark is entitled to bring a cause of action for trademark infringement when someone else uses the mark in commerce, in the same industry, without permission. On the other hand, when one uses a trademark with the “#” symbol in front of it for the purpose of sharing information on social media, one is simply complying with the norms of the internet. The goal is to strike a balance between protecting the rights of IP owners and protecting users’ freedom of expression on social media.

While the courts are somewhat behind in dealing with infringement relating to hashtagged trademark material, for the time being, various social media platforms (Instagram, Facebook, Twitter, YouTube) have procedures in place that allow users to report misuse of trademark-protected material or other intellectual property-related concerns.

States are ready to challenge Section 230

On January 8, 2021, Twitter permanently suspended @realDonaldTrump.  The decision followed an initial warning to the then-president and conformed to its published standards as defined in its public interest framework.   The day before, Meta (then Facebook) restricted President Trump’s ability to post content on Facebook or Instagram.   Both companies cited President Trump’s posts praising those who violently stormed the U.S. Capitol on January 6, 2021 in support of their decisions.

Members of the Texas and Florida legislatures, together with their governors, were seemingly enraged that these sites would silence President Trump’s voice. In response, each state quickly passed laws aiming to limit the moderation powers of social media sites. Although substantively different, the Texas and Florida laws rest on the same theory: both seek to punish social media sites for moderating conservative content that, the laws’ supporters argue, those sites disproportionately silence, regardless of whether the posted content violates a site’s published standards.

Shortly after each law’s adoption, two tech advocacy groups, NetChoice and the Computer and Communications Industry Association, filed suits in federal district courts challenging the laws as violative of the First Amendment. Each case has so far moved through the federal courts on procedural grounds: the Eleventh Circuit upheld a lower court’s preliminary injunction prohibiting Florida from enforcing its statute until the case is decided on the merits. In contrast, the Fifth Circuit overturned a lower court’s preliminary injunction. The dispute reached the Supreme Court of the United States, which, by a vote of 5-4, reinstated the injunction. That decision made clear that these cases are ultimately headed to the Supreme Court on the merits.

Criminals Beware, the Internet Is Here!

Social media has become a mainstay in our culture. We use it to communicate and interact socially with our family and friends. Social media and the Internet allow us to share our whereabouts and latest experiences with just about everyone on the planet, instantly, with the click of a button. Police departments now understand this shift in culture and are using social media and advances in technology to their benefit. “Police are recognizing that a lot of present-day crimes are attached to social media. Even if the minuscule possibility existed that none of the persons involved were on social media, the crime would likely be discussed on social media by people who have become aware of it or the media organizations reporting it.”

Why Is Social Media the New Police Investigative Tool?

Why are police so successful at fighting crime with social media? Because many of us are addicted to social media; it is our new form of communication. That addiction has made it easier for police to catch criminals. Criminals tend to tell on themselves these days simply by not being able to stay off social media. We share confidential information with friends on social media under the false belief that what we say can’t be traced back to us. We even think that because we set our pages to private, our information can’t be retrieved. That is far from the truth, as Bronx criminal Melvin Colon found out the hard way. Police suspected Colon of crimes but lacked probable cause for a search. “Their solution: finding an informant who was friends with him on Facebook. There they gathered the bulk of the evidence needed for an indictment. Colon’s lawyers sought to use the Fourth Amendment to suppress that evidence, but the judge ruled that Colon’s legitimate expectation of privacy ended when he disseminated posts to his friends. The court explained that Colon’s ‘friends’ were free to use the information however they wanted—including sharing it with the Government.” This illustrates that even information we think is private can still be accessed by police.

How Do Police Use Social Media as an Investigative Tool?

“Most commonly, an officer views publicly available posts by searching for an individual, group, hashtag, or another search vector. Depending on the platform and the search, it may yield all the content responsive to the query or only a portion. When seeking access to more than is publicly available, police may use an informant (such as a friend of the target) or create an undercover account by posing as a fellow activist or alluring stranger”. This allows officers to communicate directly with the target and see content posted by both the target and their contacts that might otherwise be inaccessible to the public. Police also use social media to catch criminals through sting operations. “A sting operation is designed to catch a person in the act of committing a crime. Stings usually include a law enforcement officer playing the part as accessory to a crime, perhaps as a drug dealer or a potential customer of prostitution. After the crime is committed, the suspect is quickly arrested”. Another way social media is used as an investigative tool is through location tracking. “Location tracking links text, pictures and video to an exact geographical location and is a great tool for law enforcement to find suspects”. Due to location tagging, police can search for hot spots of crime and even gain instant photographic evidence from a crime. Social media is also used as an investigative public outreach tool. It helps the police connect with the public. It allows for police to communicate important announcements to the community and solicit tips on criminal investigations.

What Does the Law Say About Police Using Social Media?

There are few laws that specifically constrain law enforcement’s ability to engage in social media monitoring. “In the absence of legislation, the strongest controls over this surveillance tactic are often police departments’ individual social media policies and platform restrictions, such as Facebook’s real name policy and Twitter’s prohibition against using its API for surveillance.” Many people invoke the Fourth Amendment as protection against police intrusion into their social media privacy. The Fourth Amendment guarantees the right of the people to be free from unreasonable searches and seizures. The inquiry is whether a person has a “reasonable expectation of privacy” and whether society recognizes that expectation as reasonable. Courts have held that individuals do not have a recognized expectation of privacy in data publicly shared online. Law enforcement can also seek account information directly from social media companies. Under the Stored Communications Act, law enforcement can serve a warrant or subpoena on a social media company to get access to information about a person’s social media profile. The Stored Communications Act also permits service providers to voluntarily share user data without any legal process if delays in providing the information may lead to death or serious injury. “Courts have upheld warrants looking for IP logs to establish a suspect’s location, for evidence of communications between suspects, and to establish a connection between co-conspirators.”


Is Social Media Really Worth It?


Human beings are naturally social. We interact with one another every single day in many different ways, and today one of the most common is social media. Each year, the number of individuals using social media increases. The number of social media users worldwide in 2019 was 3.484 billion, up 9% from 2018. The numbers increased dramatically during the 2020 Covid-19 pandemic: in 2020, the number of social media users jumped to 4.5 billion, and it continues to grow every day.

Along with the increasing number of social media users, the number of individuals suffering from mental health issues is also increasing. Mental health is defined as a state of well-being in which people understand their abilities, solve everyday life problems, work well, and make a significant contribution to the lives of their communities. It is worth considering how and why social media can affect an individual’s mental state so greatly. The Displaced Behavior Theory may help explain the connection. According to the theory, people who spend more time in sedentary behaviors such as social media use have less time for face-to-face social interaction, which has been shown to be protective against mental disorders. For example, the more time an individual spends using social media, the less time that individual spends on their off-screen social relationships.

Believe it or not, many studies have linked the use of Facebook in young adults to increased levels of anxiety, stress, and depression. I know from my own personal experience that life changed greatly when Facebook was introduced to my generation in middle school. We went from walks around town, movie dates, and phone calls to sitting in front of a computer screen for hours trying to figure out who posted the best profile picture that night or who received the most likes and comments on a post. Based on my own experiences, I believe this is when cyberbullying became a huge issue. Individuals, especially young teens, take to heart everyone’s opinions and comments on social media sites like Facebook, Instagram, and Snapchat. This is why mental health is associated with the use of social media. Social media creates enormous pressure to project the image others want to see; it’s almost like a popularity contest.

It makes me wonder: how far is too far? When will social media platforms truly censor cyberbullying and put a stop to the rise of mental health issues associated with using these sites? Studies have shown that these platforms cause serious mental health problems, with individuals aged 12-17 most affected. I believe that regulating the age groups allowed to join these sites may help stop the detrimental effects they have on teenagers. It is sobering to think that many teenagers might still be alive, or might not suffer from mental health issues, had they never downloaded a social media platform. As parents, friends, and family members, we really have to ask whether downloading social media platforms is worth it.

Can you think of any solutions to this growing problem? At what age would you let your child use social media?


A Uniquely Bipartisan Push to Amend/Repeal CDA 230

Last month, I wrote a blog post about the history and importance of the Communications Decency Act, section 230 (CDA 230). I ended that blog post by acknowledging the recent push to amend or repeal section 230 of the CDA. In this blog post, I delve deeper into the politics behind the push to amend or repeal this legislation.

“THE 26 WORDS THAT SHAPED THE INTERNET”

If you are unfamiliar with CDA 230, it is the primary legislation governing the internet. Also known as “the 26 words that shaped the internet,” the provision reflects Congress’s statement that the internet has been able to flourish thanks to a “minimum of government regulation.” This language has resulted in a largely unregulated internet, ultimately leading to problems concerning misinformation.

Additionally, CDA 230(c)(1) shields social media companies from civil liability for posts published by their users. This has caused problems because social media companies lack motivation to filter and censor posts that contain misinformation.

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230).

Section 230’s liability shield has been extended far beyond Congress’s original intent, which was to protect social media companies against defamation claims. These features of the legislation have resulted in a growing call to update section 230.

In this day and age, an idea or movement rarely gains bipartisan support. Interestingly, though, amending or repealing section 230 has gained recent bipartisan support. As expected, however, each party has different reasons why the law should be changed.

BIPARTISAN OPPOSITION

Although the two political parties agree that the legislation should be amended, their reasoning stems from different places. Republicans tend to criticize CDA 230 for allowing social media companies to selectively censor conservative actors and posts. In contrast, Democrats criticize the law for allowing social media companies to disseminate false and deceptive information.

DEMOCRATIC OPPOSITION

On the Democratic side of the aisle, President Joe Biden has repeatedly called for Congress to repeal the law. In an interview with The New York Times, President Biden was asked about his personal view regarding CDA 230, to which he replied…

“it should be revoked. It should be revoked because it is not merely an internet company. It is propagating falsehoods they know to be false, and we should be setting standards not unlike the Europeans are doing relative to privacy. You guys still have editors. I’m sitting with them. Not a joke. There is no editorial impact at all on Facebook. None. None whatsoever. It’s irresponsible. It’s totally irresponsible.”

House Speaker Nancy Pelosi has also voiced opposition, calling CDA 230 “a gift” to the tech industry that could be taken away.

The left has often credited the law with fueling misinformation campaigns, like Trump’s voter fraud theory and false COVID information. In response, social media platforms began marking certain posts as unreliable. This, in turn, sparked Republicans’ opposition to section 230.

REPUBLICAN OPPOSITION

Former President Trump voiced his opposition to CDA 230 numerous times. He first called for the repeal of the legislation in May 2020, after Twitter flagged two of his tweets regarding mail-in voting with a warning label that stated “Get the facts about mail-in ballots.” In fact, in December, Trump, then the sitting president, threatened to veto the National Defense Authorization Act, the annual defense funding bill, if CDA 230 was not revoked. The former president’s opposition was so strong that he issued an Executive Order in May 2020 urging the government to revisit CDA 230. Within the order, the former president wrote…

“Section 230 was not intended to allow a handful of companies to grow into titans controlling vital avenues for our national discourse under the guise of promoting open forums for debate, and then to provide those behemoths blanket immunity when they use their power to censor …”

The executive order also asked the Federal Communications Commission to write regulations that would remove protections for companies that “censored” speech online. Although the order didn’t technically affect CDA 230, and was later revoked by President Biden, it resulted in increased attention on this archaic legislation.

LONE SUPPORTERS

Support for the law has not completely vanished, however. As expected, many social media giants support leaving CDA 230 untouched. The Internet Association, an industry group representing some of the largest tech companies like Google, Facebook, Amazon, and Microsoft, recently announced that the “best of the internet would disappear” without section 230, warning that it would lead to numerous companies being subject to an array of lawsuits.

In a Senate Judiciary hearing in October 2020, Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey warned that revoking Section 230 could…

“collapse how we communicate on the Internet.”

However, Mark Zuckerberg took a more moderate position as the hearing continued, telling Congress that he thought lawmakers should update the law.

Facebook has taken a more moderate approach by acknowledging that 230 should be updated. This approach is likely a response to public pressure from increased awareness. Regardless, it signals a real chance that section 230 will be updated in the future, since Facebook is one of the largest social media companies protected by 230. A complete repeal of the law would have such major impacts, however, that that scenario seems unlikely. Nevertheless, growing calls for change and a Democratic-controlled Congress point to a likelihood of future revision of the section.

DIFFERING OPINIONS

Although both sides of Washington, and even some social media companies, agree the law should be amended, the two sides differ greatly on how to change it.

As mentioned before, President Biden has voiced his support for repealing CDA 230 altogether. Alternatively, senior members of his party, like Nancy Pelosi, have suggested simply revising or updating the section.

Republican Josh Hawley recently introduced legislation to amend section 230. The proposed legislation would require companies to prove a “duty of good faith” when moderating their sites in order to receive section 230 immunity. The legislation also included a $5,000 fee for companies that don’t comply.

Adding to the confusion of the section 230 debate, many fear the possible implications of repealing or amending the law.

FEAR OF CHANGE

Because CDA 230 has been referred to as “the first amendment of the internet,” many people fear that repealing this section altogether would result in a limitation on free speech online. Although President Biden has voiced his support for this approach, it seems unlikely to happen, as it would result in massive implications.

One major implication of repealing or amending CDA 230 is that it could open the door to numerous lawsuits against social media companies. Not only would major social media companies be affected, but even smaller companies, like Slice, could become the subject of defamation litigation simply by allowing reviews to be posted on their websites. This could lead to fewer social media platforms, as some would not be able to afford the legal fees. Many fear that these companies would further censor online posts for fear of being sued, which may also raise costs for the platforms. In contrast, companies could react by allowing anything and everything to be posted, which could result in an unwelcome online environment. That would stand in stark contrast to Congress’s original intent in creating the CDA: to protect children from seeing indecent posts on the internet.

FUTURE CHANGE?


Because of the intricacy of the internet and the archaic nature of CDA 230, there are many differing opinions on how to successfully fix the problems the section creates. There are also many fears about the consequences of getting rid of the legislation. Are there any revisions you can think of that could successfully deal with the Republicans’ main concern, censorship? Can you think of any solutions for addressing the Democrats’ concern about limiting the spread of misinformation? Do you think there is any chance that section 230 will be repealed altogether? If the legislation were repealed, would new legislation need to be created to replace CDA 230?


AI Avatars: Seeing is Believing

Have you ever heard of deepfakes? The term deepfake comes from “deep learning,” a set of intelligent algorithms that can learn and make decisions on their own. By applying deep learning, deepfake technology replaces faces in original images or videos with another person’s likeness.

What does deep learning have to do with switching faces?

Basically, deepfake allows AI to learn automatically from its data collection, which means the more people try deepfake, the faster AI learns, thereby making its content more real.

Deepfake enables anyone to create “fake” media.

How does Deepfake work?

First, an AI algorithm called an encoder is trained on countless face shots of two people. The encoder learns the similarities between the two faces and compresses each image into a shared, lower-dimensional representation. A second AI algorithm, called a decoder, is trained to reconstruct each person’s face from that compressed representation. Feeding one person’s encoded face into the other person’s decoder performs the face swap.
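The shared-encoder/two-decoder idea can be sketched in a few lines of NumPy. This is a deliberately tiny, linear stand-in for the deep convolutional networks real tools use; the image size, latent size, training loop, and all names here are illustrative assumptions, not any production system’s architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

IMG_DIM = 64 * 64      # flattened "face" images (assumption: 64x64 grayscale)
LATENT_DIM = 32        # size of the compressed (latent) representation

# One shared encoder, and one decoder per identity (person A, person B).
W_enc = rng.normal(0, 0.01, (IMG_DIM, LATENT_DIM))
W_dec_a = rng.normal(0, 0.01, (LATENT_DIM, IMG_DIM))
W_dec_b = rng.normal(0, 0.01, (LATENT_DIM, IMG_DIM))

def encode(x):
    """Compress a batch of images into latent codes (shared for both people)."""
    return x @ W_enc

def decode(z, W_dec):
    """Reconstruct images from latent codes with an identity-specific decoder."""
    return z @ W_dec

def train_step(x, W_dec, lr=1e-3):
    """One mean-squared-error reconstruction step for this linear autoencoder."""
    global W_enc
    z = encode(x)
    err = decode(z, W_dec) - x           # reconstruction error
    # Gradients of the MSE loss w.r.t. decoder and encoder weights.
    grad_dec = z.T @ err / len(x)
    grad_enc = x.T @ (err @ W_dec.T) / len(x)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
    return float(np.mean(err ** 2))

# Toy "datasets": random stand-ins for face photos of person A and person B.
faces_a = rng.normal(0, 1, (16, IMG_DIM))
faces_b = rng.normal(0, 1, (16, IMG_DIM))

for _ in range(5):
    loss_a = train_step(faces_a, W_dec_a)
    loss_b = train_step(faces_b, W_dec_b)

# The swap: encode a photo of A, but reconstruct it with B's decoder.
swapped = decode(encode(faces_a[:1]), W_dec_b)
print(swapped.shape)  # (1, 4096)
```

The design point is that the encoder is shared, so both faces live in the same latent space; only the decoders are identity-specific, which is what makes swapping them meaningful.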

Another way deepfakes swap faces is with a GAN, or generative adversarial network. A GAN pits two AI algorithms against each other, unlike the first method, where the encoder and decoder work hand in hand.
The first algorithm, the generator, is given random noise and converts it into an image. This synthetic image is then mixed into a stream of real photos, such as images of celebrities, and the combined stream is delivered to the second algorithm, the discriminator, which tries to tell the real photos from the synthetic ones. After repeating this process countless times, both the generator and the discriminator improve. As a result, the generator creates completely lifelike faces.
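The generator-versus-discriminator loop can be illustrated with a toy GAN on one-dimensional data. Real deepfake GANs use deep convolutional networks on images; the single-parameter linear models, learning rate, and step count below are illustrative assumptions only, chosen so the whole loop fits in plain NumPy.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 1). The generator must learn to mimic them.
def sample_real(n):
    return rng.normal(4.0, 1.0, (n, 1))

# Generator: maps noise z to a sample, x = z * g_w + g_b (one linear unit).
g_w, g_b = 1.0, 0.0
# Discriminator: logistic regression, P(real) = sigmoid(x * d_w + d_b).
d_w, d_b = 0.1, 0.0

lr = 0.05
for step in range(200):
    # --- Discriminator update: push real toward 1, fake toward 0. ---
    real = sample_real(32)
    z = rng.normal(0.0, 1.0, (32, 1))
    fake = z * g_w + g_b
    p_real = sigmoid(real * d_w + d_b)
    p_fake = sigmoid(fake * d_w + d_b)
    # Gradient ascent on the binary cross-entropy objective.
    d_w += lr * float(np.mean((1 - p_real) * real) - np.mean(p_fake * fake))
    d_b += lr * float(np.mean(1 - p_real) - np.mean(p_fake))

    # --- Generator update: make the discriminator call fakes "real". ---
    z = rng.normal(0.0, 1.0, (32, 1))
    fake = z * g_w + g_b
    p_fake = sigmoid(fake * d_w + d_b)
    grad = (1 - p_fake) * d_w            # chain rule through the discriminator
    g_w += lr * float(np.mean(grad * z))
    g_b += lr * float(np.mean(grad))

samples = rng.normal(0.0, 1.0, (1000, 1)) * g_w + g_b
print(float(samples.mean()))  # drifts toward the real mean of 4 as training runs
```

The adversarial dynamic is visible even at this scale: as the discriminator gets better at separating real from fake, the generator’s gradient pushes its output distribution toward the real one.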

For instance, artist Bill Posters used deepfake technology to create a fake video of Mark Zuckerberg, saying that Facebook’s mission is to manipulate its users.

Real enough?

How about this. Consider having Paris Hilton’s famous quote, “If you don’t even know what to say, just be like, ‘That’s hot,’” replaced by Vladimir Putin, President of Russia. Those who don’t know either will believe that Putin is a Playboy editor-in-chief.

Yes, we can all laugh at these fake jokes. But when something becomes overly popular, it has to come with a price.

Originally, deepfake was developed by an online user of the same name for the purpose of entertainment, as the user had put it.

Yes, Deepfake meant pornography.

The biggest problem with deepfakes is that it is challenging to detect the difference and figure out which version is the original. Deepfakes have become more than just superimposing one face onto another.

Researchers found that more than 95% of deepfake videos were pornographic, and 99% of those videos had faces replaced with female celebrities. Experts explained that these fake videos lead to the weaponization of artificial intelligence used against women, perpetuating a cycle of humiliation, harassment, and abuse.

How do you spot the difference?

As mentioned earlier, the algorithms are fast learners, so with every breath we take, deepfake media becomes more real. Luckily, research showed that deepfake faces do not blink normally, or even blink at all. That sounds like one easy method to remember. Well, let’s not get ahead of ourselves just yet. When it comes to machine learning, nearly every problem gets corrected as soon as it gets revealed; that is how the algorithms learn. So, unfortunately, the famous blink issue has already been solved.

But not so fast. We humans may not learn as quickly as machines, but we can be attentive and creative, which are some qualities that tin cans cannot possess, at least for now.
It only takes extra attention to detect a deepfake. Ask these questions to figure out the magic:

Does the skin look airbrushed?
Does the voice synchronize with the mouth movements?
Is the lighting natural, or does it make sense to have that lighting on that person’s face?

For example, the background may be dark, but the person may be wearing a pair of shiny glasses reflecting the sun’s rays.

Oftentimes, deepfake content is labeled as deepfake because its creators want to present themselves as artists and show off their work.
In 2018, software named Deeptrace was developed to detect deepfake content. A Deeptrace Labs report found that deepfake videos are proliferating online and that their rapid growth is “supported by the growing commodification of tools and services that lower the barrier for non-experts—from well-maintained open source libraries to cheap deepfakes-as-a-service websites.”

The pros and cons of deepfake

It may be self-explanatory to name the cons, but here are some of the risks deepfakes pose:

  • Destabilization: the misuse of deepfake can destabilize politics and international relations by falsely implicating political figures in scandals.
  • Cybersecurity: the technology can also undermine cybersecurity, for example by impersonating political figures to incite aggression.
  • Fraud: audio deepfakes can clone voices to convince people that they are talking to someone they know and induce them into giving away private information.

Well then, are there any pros to deepfake technology other than having entertainment values? Surprisingly, a few:

  • Accessibility: deepfake creates various vocal personas that can turn text into speech, which can help with speech impediments.
  • Education: deepfakes can deliver innovative lessons that are more engaging and interactive than traditional lessons. For example, deepfakes can bring famous historical figures back to life to explain what happened during their time. Deepfake technology, when used responsibly, can serve as a powerful learning tool.
  • Creativity: instead of hiring a professional narrator, artificial storytelling using audio deepfakes can tell a captivating story at a fraction of the cost.

If people use deepfake technology with high ethical and moral standards, it can create opportunities for everyone.

Case

In a recent custody dispute in the UK, the mother presented an audio file to prove that the father had no right to take away their child. In the audio, the father was heard making a series of violent threats towards his wife.

The audio file was compelling evidence, and it seemed the mother would be the one to walk out with a smile on her face. But the father’s attorney thought something was not right. The attorney challenged the evidence, and forensic analysis revealed that the audio had been doctored using deepfake technology.

This lawsuit is still pending. But do you see the broader problem? We are living in an era in which the tools for tampering with evidence are available to anyone with an Internet connection. Courts will need to apply greater scrutiny to determine whether evidence has been altered.

Current legislation on deepfakes

The National Defense Authorization Act for Fiscal Year 2021 (“NDAA”), which became law as Congress voted to override former President Trump’s veto, also requires the Department of Homeland Security (“DHS”) to issue an annual report for the next five years on manipulated media and deepfakes.

So far, only three states have taken action against deepfake technology.
On September 1, 2019, Texas became the first state to prohibit the creation and distribution of deepfake content intended to harm candidates for public office or influence elections.
Similarly, California also bans the creation of “videos, images, or audio of politicians doctored to resemble real footage within 60 days of an election.”
Also, in 2019, Virginia banned deepfake pornography.

What else does the law say?

Deepfakes are not illegal per se. But depending on the content, a deepfake can breach data protection law, infringe copyright, or constitute defamation. Additionally, sharing non-consensual intimate content can be punishable as a revenge porn crime, depending on state law. For example, in New York City, the penalties for committing a revenge porn crime are up to one year in jail and a fine of up to $1,000 in criminal court.

Henry Ajder, head of threat intelligence at Deeptrace, raised another issue: “plausible deniability,” whereby deepfakes can wrongfully provide an opportunity for anyone to dismiss actual events as fake or cover them up with fake ones.

What about the First Amendment rights?

The First Amendment of the U.S. Constitution states:
“Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.”

There is no doubt that injunctions against deepfakes are likely to face First Amendment challenges; the First Amendment will be the biggest hurdle to overcome. Even if a lawsuit survives such a challenge, the lack of jurisdiction over extraterritorial publishers would limit its effectiveness, and injunctions are granted only in particular circumstances, such as obscenity and copyright infringement.

How does defamation law apply to deepfakes?

Defamation is a statement that injures a third party’s reputation. To prove defamation, a plaintiff must show all four of the following elements:

1) a false statement purporting to be fact;

2) publication or communication of that statement to a third person;

3) fault amounting to at least negligence; and

4) damages, or some harm caused to the person or entity who is the subject of the statement.

As you can see, deepfake claims are not likely to succeed under defamation law because it is difficult to prove that the content was intended as a statement of fact. All a defendant may need to protect themselves from a defamation claim is the word “fake” somewhere in the content; they can simply point out that the technology they used is called deep“fake.”

Pursuing a defamation claim against nonconsensual deepfake pornography also poses a problem. The central issue in such a claim is the lack of consent, and current defamation law fails to address whether the victim consented to the publication.

To reflect the transformative impact of artificial intelligence, I would suggest enacting new legislation to regulate AI-backed technologies like deepfakes. Perhaps this could lower the hurdles plaintiffs face.
What are your suggestions regarding deepfakes? Share your thoughts!


Social Media: a pedophile’s digital playground.

A few years ago, as I got lost in the wormhole that is YouTube, I stumbled upon a family channel, “The ACE Family.” They had posted a cute video in which the mom played a prank on her boyfriend by dressing their infant daughter in a tiny crochet bikini. I didn’t think much of it at the time, as it seemed innocent, but then I pondered it: I had stumbled on this video without any ill intent, but how easy would it be for someone with far more disgusting intent to seek out content like this?

When you Google “social media child pornography,” you get many articles from 2019. That year, a YouTuber using the name “MattsWhatItIs” posted a video titled “YouTube is Facilitating the Sexual Exploitation of Children, and it’s Being Monetized (2019)”; the video has 4,305,097 views to date and has not been removed from the platform. Its author discusses a potential child pornography ring on YouTube that was being facilitated by a glitch in the recommendation algorithm. He demonstrates how, with a brand-new account on a VPN, it takes only two clicks to end up in this ring. He starts with a search for “bikini haul.” After two clicks in the recommended-videos section, he lands on an innocent-looking homemade video. The video itself looks innocent, but the comments expose the dark side: multiple random accounts post timestamps that link to moments in the video where the children are in compromising, sexually suggestive positions. The most disturbing part is that once you enter this wormhole, the algorithm keeps you stuck on these “child pornography” videos. Following the vast attention the video received, YouTube created an algorithm that is supposed to catch this predatory behavior, but at the time the video was posted, it didn’t seem to be doing much.

YouTube has since implemented a “Child Safety Policy,” which details all the content which the social media platform has aimed to protect. It also includes recommended steps for parents or agents posting content with children being the focus. “To protect minors on YouTube, content that doesn’t violate our policies but features children may have some features disabled at both the channel and video level. These features may include:

  • Comments
  • Live chat
  • Live streaming
  • Video recommendations (how and when your video is recommended)
  • Community posts”

Today, when you look up news on this topic, you don’t find much. There are forums exposing the many methods these predators use to get around the algorithms platforms set up to detect their activity. Many predators leave links to child pornography in the comments section of specific videos. Others use generic terms with the initials “C.P.,” a common abbreviation for “child pornography,” or code words like “caldo de pollo,” which means “chicken soup” in Spanish. Many dedicated and concerned parents have formed online communities that scour the Internet for this disgusting content and report it to social media platforms. But why haven’t the platforms themselves created departments for this issue? Most technology companies use automated tools to detect images and videos that law enforcement has already categorized as child sexual exploitation material. Still, they struggle to identify new, previously unknown material and rely heavily on user reports.

The Child Rescue Coalition has created the “Child Protection System” software. This tool has a growing database of more than a million hashed images and videos, which it uses to find computers that have downloaded them. The software can track I.P. addresses, which are shared by people connected to the same Wi-Fi network, as well as individual devices. According to the Child Rescue Coalition, the system can follow devices even if the owners move or use virtual private networks (VPNs) to mask their I.P. addresses. Last year, the organization expressed interest in partnering with social media platforms to combine resources to crack down on child pornography. Unfortunately, some oppose this because it would give social media companies access to an unregulated database of suspicious I.P. addresses. Thankfully, many law enforcement departments have partnered with the coalition and used this software; as the president of the Child Rescue Coalition said: “Our system is not open-and-shut evidence of a case. It’s for probable cause.”
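Hash matching of this kind is conceptually simple. The sketch below is a hypothetical illustration in Python, not the Child Protection System’s actual implementation: files are fingerprinted with a cryptographic hash, and the fingerprints are checked against a database of hashes of known material.

```python
import hashlib


def sha256_of_file(path):
    """Return the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def find_known_material(paths, known_hashes):
    """Return the paths whose fingerprints appear in the known-hash database."""
    return [p for p in paths if sha256_of_file(p) in known_hashes]
```

Note that a cryptographic hash only catches exact copies of a file; real detection systems typically rely on perceptual hashes (such as Microsoft’s PhotoDNA) so that re-encoded or slightly altered files still match.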

The United States Department of Justice has created a “Citizen’s Guide to U.S. Federal Law on Child Pornography.” The first line on this page reads, “Images of child pornography are not protected under First Amendment rights, and are illegal contraband under federal law.” Federal jurisdiction commonly applies if the child pornography offense occurred in interstate or foreign commerce. In today’s digital era, federal law almost always applies when the Internet is used to commit child pornography offenses. The United States has implemented multiple laws that define child pornography and what constitutes a crime related to it.

Whose job is it to protect children from these predators? Should social media have to regulate this? Should parents be held responsible for contributing to the distribution of these media?


“Unfortunately, we’ve also seen a historic rise in the distribution of child pornography, in the number of images being shared online, and in the level of violence associated with child exploitation and sexual abuse crimes. Tragically, the only place we’ve seen a decrease is in the age of victims.

 This is – quite simply – unacceptable.”

–Attorney General Eric Holder Jr., speaking at the National Strategy Conference on Combating Child Exploitation in San Jose, California, May 19, 2011.

Is your data protected? By whom? What rights do you have over your personal information once it has entered the World Wide Web?

  • Who doesn’t protect your data?
  • History of the “data” or personal information legislation
  • A July 2021 update on the start of legislation regarding data protection on the internet
  • What you can do to protect your data for now

Ever since the highly publicized 2018 Facebook data breach, I have been curious about exactly what data can be stored, used, and “understood” by computer algorithms, and what the legal implications may be. At first, I was excited about this as a new tool. I tend to shop for things that are, at least, branded as sustainably sourced and environmentally friendly, and the idea that I would only be advertised these types of items, with no plastics that may off-gas, sounded great to me. It wasn’t until I heard some of my peers’ concerns that I seriously questioned the dangers of data collection and how this information could be used to harm.

Social media websites, commerce websites, and mobile apps have become integral parts of many of our everyday lives. We use them to connect with friends online and find like-minded people through virtual groups across the world. These sites are used to share private, work, and “public” information. The data collected from social media can be seen as a tool or as an invasion of privacy. User data collection could give us access to knowledge about our human nature; for example, this data can tell us about different demographics and how users use each platform. However, it also raises new questions about what should be private, and about who owns the data created by usage: the platform or the individual using it.

What are our governments doing to protect our data rights, that is, our rights over our personal information? Do individuals even have data rights over their personal information on the internet? If so, how will these rights be protected and regulated? And how will legislation attempt to regulate businesses? These are all questions that I have wondered about and hope to start to answer here. After watching Mark Zuckerberg explain to congressmen how companies make money on the internet while remaining free, I had little faith that our legal system would catch up to how companies and computer programmers are using these new technologies. Many large social media companies remain free by selling data and virtual advertising space, which raises its own legal issues. Would you rather pay for Facebook, Instagram, Twitter, Snapchat, etc., or allow them to sell your data? If we demand regulation and privacy for our data, we may need to make this choice.

Privacy on the Internet

Federally, in the United States, this area of law is unregulated territory, leaving it up to the tech and social media companies for now. However, some states are starting to create their own laws, as the tracker below shows.

US State Privacy Legislation Tracker

How has the government regulated these areas thus far? 

There are no general consumer privacy and security laws at the federal level. However, as you may remember, the US government imposed a whopping $5 billion penalty for Facebook’s data breach. The order also required “Facebook to restructure its approach to privacy…  and establishes strong new mechanisms to ensure that Facebook executives are accountable for the decisions they make about privacy, and that those decisions are subject to meaningful oversight” (FTC). This was done under the Federal Trade Commission Act.

This act, passed in 1914, created a government agency and prohibited companies from engaging in “unfair or deceptive acts or practices” (Section 5, FTC Act). It protected consumers from misleading or boldly false advertising by some of America’s largest consumer brands (Federal Trade Commission Overview).

What is interesting here is why Facebook had to pay a settlement under the Federal Trade Commission Act, which reaches only companies that falsely advertise, mislead, or misrepresent. Facebook told consumers that the site did not sell their data and that users could restrict the access Facebook had to their data by clicking certain boxes. The opposite was true. Facebook did not violate any internet privacy laws; there weren’t any. Instead, a piece of 20th-century legislation created in large part to protect consumers from companies selling fake merchandise was used to penalize a 21st-century data practice. If Facebook had said nothing about data privacy on its website, it wouldn’t have been liable for anything. Since this case, more legal regulations have been introduced.


US Privacy Act of 1974 

 

In order to understand where the legal field will go, it is important to understand the history of US privacy rights. This act restricted what personal information US government agencies could store in their (first) computer databases. It also gave individuals certain rights, such as the right to access any of the data held by government agencies and the right to correct errors. Finally, it restricted what information could be shared between federal and non-federal agencies, and how, allowing sharing only under specific circumstances.

HIPAA, GLBA, COPPA

These three acts further protect individuals’ personal information.

HIPAA, the Health Insurance Portability and Accountability Act, was put in place to regulate health insurance and protect people’s personal health information. The act laid down ground rules for confidentiality requirements (HIPAA for Professionals).

The Gramm-Leach-Bliley Act (GLBA), passed in 1999, protects nonpublic personal information, defined as “any information collected about an individual in connection with providing a financial product or service, unless that information is otherwise publicly available.”

The Children’s Online Privacy Protection Act (COPPA), enacted in 1998, regulates the personal information that is collected from minors. The law “imposes certain requirements on operators of websites or online services directed to (or have actual knowledge of) children under 13 years of age.”

 

Worldwide Internet Data Privacy 

Currently, the US does not have a federal-level consumer data privacy or security law. According to the United Nations Conference on Trade and Development, “107 countries have data privacy rules in place including 66 developing nations.”


The European Union passed the General Data Protection Regulation (GDPR) in 2018. The law went through a long legislative process: it was officially approved in 2016 and went into effect in May 2018. It places specific obligations on data processors and the cloud. The regulation also gives individuals the ability to sue data processors directly for damages, limits and minimizes the retention of data kept by default, and gives consumers the right to correct incorrect information. The GDPR also requires explicit consent when consumers give their data: processing personal data is generally prohibited unless it is expressly allowed by law or the data subject has consented to the processing.

The U.S.’s strictest state so far

So far, only three states, California, Colorado, and Virginia, have enacted comprehensive consumer data privacy laws, according to the National Conference of State Legislatures as of July 22, 2021. The closest US law to the EU’s GDPR is California’s Consumer Privacy Act (CCPA), currently the U.S.’s strictest regulation on internet data privacy. The act requires businesses to clearly state what types of personal data will be collected from consumers and how this information will be used, managed, shared, and sold by companies or entities doing business with, and compiling information about, California residents (CCPA and GDPR comparison chart). This “landmark law” secures new privacy rights for California consumers, including the rights to know what personal information a business collects, to delete that information, to opt out of its sale, and to non-discrimination for exercising these rights.

New York State Privacy Law Update June 2021 

A number of privacy bills were pending in the New York legislature, including the “It’s Your Data Act,” the “New York Privacy Act,” the “Digital Fairness Act,” and the “New York Data Accountability and Transparency Act.” Most of these bills never made it out of committee.


The “It’s Your Data Act” proposed to provide protections and transparency in the collection, use, retention, and sharing of personal information. 

 

From the New York State Senate Summary:

“The ‘NY Privacy Act’ would require companies to disclose their methods of identifying personal information, to place special safeguards around data sharing, and to allow consumers to obtain the names of all entities with whom their information is shared”, and would create a special account to fund a new Office of Privacy and Data Protection. It is currently on the floor calendar, and no action has yet been taken on it.

 

The definition of personal information here, “any information related to an identified or identifiable person,” includes a very extensive list of identifiers: biometric data, email addresses, network information, and more.


What data privacy rights have been identified thus far?

Provisions in Chart

CONSUMER RIGHTS

  • The right of access to personal information collected or shared – The right for a consumer to access from a business/data controller the information or categories of information collected about a consumer, the information or categories of information shared with third parties, or the specific third parties or categories of third parties to which the information was shared; or, some combination of similar information.
  • The right to rectification — The right for a consumer to request that incorrect or outdated personal information be corrected but not deleted.
  • The right to deletion — The right for a consumer to request deletion of personal information about the consumer under certain conditions.
  • The right to restriction of processing — The right for a consumer to restrict a business’s ability to process personal information about the consumer.
  • The right to data portability — The right for a consumer to request personal information about the consumer be disclosed in a common file format.
  • The right to opt out of the sale of personal information — The right for a consumer to opt out of the sale of personal information about the consumer to third parties.
  • The right against automated decision making — A prohibition against a business making decisions about a consumer based solely on an automated process without human input.
  • A consumer private right of action — The right for a consumer to seek civil damages from a business for violations of a statute.


BUSINESS OBLIGATIONS

While many rights and obligations are starting to be recognized, again, there is not yet federal legislation to protect them.

 


So, what can you do to protect yourself?

    1. Update and optimize your privacy settings.
  • Review which apps have access to your Facebook data and what they can do with that access.
  • Delete access for all apps you no longer use or need.
    2. Share with care. Be aware that when you post a picture or message, you may be inadvertently sharing personal details and sensitive data with strangers.
    3. Block “supercookie” trails. Supercookies are bits of tracking data that can be stored on your computer by entities like advertising networks. They are “a much more invasive type of behavior-tracking program than traditional cookies that is also harder to circumvent.” Supercookies are harder to detect and get rid of because they hide in various places and can’t be automatically deleted. A supercookie owner can capture a ton of your unique personal data, such as your identity, behavior, preferences, how long you’re online, when you’re most active, and more. Supercookies can communicate across different websites, stitching together your personal data into a highly detailed profile.
    4. Set up a private email identity.
    5. Update your software. Many software companies release updates that patch bugs and vulnerabilities in an app when they are discovered.
    6. Use app lockers. App lockers provide an extra level of security for apps.
    7. Encrypt your data. There are free apps available to encrypt or scramble data so that it cannot be read without a key.
    8. Create long and unique passwords for all accounts and use multi-factor authentication whenever possible. This additional layer of security makes it harder for hackers to get into your accounts (Data Privacy, Senate).
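As a small illustration of the last point, a “long and unique” password can be generated with Python’s standard `secrets` module, which is designed for cryptographically strong randomness. The function name and default length below are illustrative choices, not a standard:

```python
import secrets
import string


def generate_password(length=20):
    """Generate a long, random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Because each character is drawn independently from a large alphabet, two generated passwords will virtually never collide, which is what makes them unique across accounts.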


Can Social Media Be Regulated?

In 1996, Congress passed what is known as Section 230 of the Communications Decency Act (CDA), which provides immunity to website publishers for third-party content posted on their websites. The CDA holds that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The Act was created in a different time and era, one that could hardly envision how fast the internet would grow in the coming years. In 1996, social media consisted of a little-known website called Bolt, and the idea of a global World Wide Web was still very much in its infancy. The internet was still largely based on dial-up technology, and the government was looking to expand its reach. This Act laid the foundation for the explosion of social media, e-commerce, and a society that has grown tethered to the internet.

The advent of smartphones in the late 2000s, coupled with the CDA, set the stage for a society that is constantly tethered to the internet and has allowed companies like Facebook, Twitter, YouTube, and Amazon to carve out niches within our now globally integrated society. Facebook alone averaged over 1.9 billion daily users in the second quarter of 2021.

Recent studies conducted by the Pew Research Center show that “[m]ore than eight in ten Americans get news from digital services.”

Large majority of Americans get news on digital devices

While older members of society still rely on online news media, the younger generation, namely those 18-29 years of age, receive their news via social media.

Online, most turn to news websites except for the youngest, who are more likely to use social media

The role social media plays in the lives of the younger generation needs to be recognized. Social media has grown at a far greater rate than anyone could have imagined. Currently, social media operates under its own modus operandi, completely free of government interference, due to its classification as a private entity and its protection under Section 230.

Throughout the 20th century when Television News Media dominated the scenes, laws were put into effect to ensure that television and radio broadcasters would be monitored by both the courts and government regulatory commissions. For example, “[t]o maintain a license, stations are required to meet a number of criteria. The equal-time rule, for instance, states that registered candidates running for office must be given equal opportunities for airtime and advertisements at non-cable television and radio stations beginning forty-five days before a primary election and sixty days before a general election.”

These laws and regulations were put in place to ensure that the public interest in broadcasting was protected. To give substance to the public interest standard, Congress has from time to time enacted requirements for what constitutes the public interest in broadcasting. But Congress also gave the FCC broad discretion to formulate and revise the meaning of broadcasters’ public interest obligations as circumstances changed.

The Federal Communications Commission’s (FCC) authority is constrained by the First Amendment; the agency acts as an intermediary that can intervene to correct perceived inadequacies in overall industry performance, but it cannot trample on the broad editorial discretion of licensees. The Supreme Court has continuously upheld the public trustee model of broadcast regulation as constitutional. Criticisms of regulating social media center on the notion that platforms are purely private entities that do not fall under the purview of the government, and yet these same issues presented themselves in the precedent-setting case of Red Lion Broadcasting Co. v. Federal Communications Commission (1969). In that case, the Court held that the “rights of the listeners to information should prevail over those of the broadcasters.” The Court’s holding centered on the public’s right to information over the rights of a broadcast company to choose what it will share; this is exactly what is at issue today when companies such as Facebook, Twitter, and Snapchat censor political figures who post views they feel may incite anger or violence.

In essence, what these organizations are doing is keeping information and views from the attention of the present-day viewer. The vessel for the information has changed; it is no longer television or radio but primarily social media. Currently, television and broadcast media are restricted by Section 315(a) of the Communications Act and Section 73.1941 of the Commission’s rules, which “require that if a station allows a legally qualified candidate for any public office to use its facilities (i.e., make a positive identifiable appearance on the air for at least four seconds), it must give equal opportunities to all other candidates for that office to also use the station.” This restriction is nowhere to be found for social media organizations.

This is not meant to argue for one side or the other, but merely to point out that political discourse is being stifled by these social media entities, which have shrouded themselves in the veil of the private entity. What these companies fail to mention is just how political they truly are. For instance, Facebook proclaims itself an unbiased platform for all parties, yet it currently employs one of the largest lobbying groups in Washington, D.C. Four Facebook lobbyists have worked directly in the office of House Speaker Pelosi, and Pelosi herself has a very direct connection to Facebook: she and her husband own between $550,000 and over $1,000,000 in Facebook stock. None of this is illegal, but it raises the question of just how unbiased Facebook really is.

If the largest source of news for the coming generation is not television, radio, or news publications themselves, but rather social media platforms such as Facebook, then how much power should they be allowed to wield without some form of regulation? The question being presented here is not a new one, but rather the same question asked in 1969, simply phrased differently: how much information is a citizen entitled to, and at what point does access to that information outweigh the right of the organization to exercise its editorial discretion? I believe the answer to that question is the same now as it was in 1969, and that the government ought to take steps similar to those taken with radio and television: ensuring that, through social media, the public has access to a significant amount of information on public issues so that its members can make rational political decisions. At the end of the day, that is what is at stake: the public’s ability to make rational political decisions.

Large social media conglomerates such as Facebook and Twitter have long outgrown their place as private entities; they have grown into a public medium tethered to the realities of billions of people. Certain aspects of them need to be regulated, mainly those that interfere with the public interest, and there are ways to do this without infringing the First Amendment right of free speech for all Americans. Where social media blends being a private forum for people to express their ideas under firmly stated “terms and conditions” with being an entity that strays into the political field, whether by censoring heads of state or by hiring over $50,000,000 worth of lobbyists in Washington, D.C., regulations need to be put in place that ensure the public maintains the ability to make rational political decisions, decisions that are not influenced by any one organization. The time to address this issue is now, while there is still a middle ground in how people receive their news and formulate opinions.
