From Record Stores to FYPs: Social Media’s Impact on the Music Industry

Who remembers having to go out and buy a record, an 8-track, or a cassette tape? How about a CD, or asking your parents if you could buy the newest songs on iTunes? I sure do, but today many kids and individuals turn to TikTok or other social media platforms to hear the latest songs. But what happens to the music used in these viral dances or layered over a post? Is it free to use just because everything is now digitized, or are there still protections for artists and their music once it hits social media?

Social media, since its inception, has played a role in musicians finding their big break online. Starting with Myspace in the early 2000s, huge stars like Calvin Harris, Adele, and even Sean Kingston used Myspace to their advantage. They grew their fanbases, contacted record labels, and put their music out for the world to hear. One of the most well-known internet success stories for this generation is Justin Bieber and his discovery on YouTube. A cover of a Chris Brown song, posted when he was just 13 years old, caught the attention of a music executive, and the rest was history. Justin Bieber is one of the biggest household names of this generation, having been named the 8th Greatest Pop Star of the 21st Century by Billboard Canada in 2024. Justin, however, wasn’t the only success story. Ed Sheeran, 5 Seconds of Summer, Charlie Puth, Tate McRae, and so many other artists found their success by posting covers, originals, and other content on YouTube in the hopes of getting discovered like Justin Bieber had.

Alongside YouTube success came the next wave of artists discovered on the hit platform Vine. Vine, unlike YouTube, did not host full-length videos. In 2012, Vine took the world by storm with its six-second videos. These videos played on a loop, so if you blinked, don’t worry, it would play again. In 2013, many young aspiring stars again took to the platform in the hopes of posting that one perfect video, but now they had only six seconds to impress. Shawn Mendes joined the app nearly at its inception, posting cover clips of himself singing while playing the guitar.

“On Vine, Mendes posted a video of himself playing guitar while singing the hook to Bieber’s song “As Long As You Love Me” and received 10,000 likes overnight. He followed up with covers of Bruno Mars and other pop singers, and, by the spring, when Island and Massey came calling, he had already amassed over 2.5 million followers on the service.”

Mendes later recorded a hit song with Justin Bieber called “Monster,” in which the two showed off their different styles and told a story about the hardships that come with fame.

After Vine was shut down, artists turned back to other social media platforms to put out their music. And then the COVID-19 pandemic hit, and TikTok entered the scene. Like Vine, TikTok had short videos that played on a loop; however, this time they ran about 15 to 30 seconds when the app first started gaining traction in the US. Artists could post videos of viral dances, cover songs, or even daily “get ready with me” videos.

Again, TikTok produced up-and-coming stars we know today, such as Olivia Rodrigo, Lil Nas X, and Alex Warren, whose careers exploded once their songs became part of a viral trend or were picked up from the platform’s “Trending” sounds in its sound library.

This is great, right!? All of these people using what is right at their fingertips to put themselves out there and make their dreams come true. But what happens when these viral songs are used without the proper licensing, or when they infringe on copyright law? This issue has been on the rise as companies, schools, and creators increasingly use music in social media videos to promote themselves or chase a popular video. So, let’s talk about it.

First, what is copyright law?

“Copyright is a type of intellectual property that protects original works of authorship as soon as an author fixes the work in a tangible form of expression.”

This includes paintings, photographs, illustrations, musical compositions, sound recordings, computer programs, books, poems, blog posts, movies, architectural works and so much more!

So, what if you want to use a copyrighted work? Don’t panic! The Fair Use Doctrine explains that certain uses of these works are allowed.

“Fair use is a legal doctrine that promotes freedom of expression by permitting the unlicensed use of copyright-protected works in certain circumstances. Section 107 of the Copyright Act provides the statutory framework for determining whether something is a fair use and identifies certain types of uses—such as criticism, comment, news reporting, teaching, scholarship, and research—as examples of activities that may qualify as fair use.”

Section 107 calls for consideration of the following four factors in evaluating a question of fair use:

  • the purpose and character of the use, including whether the use is of a commercial nature or is for nonprofit educational purposes;
  • the nature of the copyrighted work;
  • the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
  • the effect of the use upon the potential market for or value of the copyrighted work.

However, even with these laws in place, there are still recent cases of music being used in commercials, TikTok videos, and other posts without proper licensing agreements in place. It is not only the big companies that are facing copyright infringement suits, but also the influencers posting the content on behalf of the brands.

In recent years, there have been several major cases. Here are a few.

Sony Music Entertainment v. Marriott. In this case, Sony alleged that Marriott’s social media pages featured hundreds of videos using its copyrighted music without authorization. Sony sought to hold Marriott liable for its own posts as well as posts made by influencers and Marriott-franchised hotels. Sony claimed that it was entitled to more than $139,000,000 in statutory damages, as well as an injunction. The case was eventually dismissed with prejudice.

Sony Music Entertainment v. Gymshark. Sony claimed unauthorized use of 297 works in online advertisements posted by Gymshark and influencers, including music by Harry Styles, Beyoncé, and Britney Spears in Gymshark’s Instagram and TikTok posts. This case was also dismissed with prejudice.

Music Publishers v. NBA.

“In July of 2024, Kobalt Music Publishing America, Inc. and other music companies filed suit against 14 NBA teams in the US District Court for the Southern District of New York, in the latest ongoing battle between music publishers and organizations that allegedly use copyrighted material without proper authorization. These [teams allegedly] engaged in unauthorized use of copyrighted music in social media postings on Instagram, TikTok, X, YouTube, and Facebook, and [the publishers] are seeking to protect their intellectual property rights and ensure that their works are not exploited without due compensation.”

Sony Music Entertainment v. USC. Sony had previously warned the university about the unauthorized music in its posts. These posts were gaining major traction, helping the school promote different games and events on campus.

“The lawsuit … cited 283 videos with songs from musicians including Michael Jackson, Britney Spears and AC/DC that USC’s sports teams supposedly used in TikTok and Instagram posts without licenses. Sony Music asked for statutory copyright damages of $150,000 per song, amounting to tens of millions of dollars in damages.” This case is still ongoing.
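The “tens of millions” figure follows directly from the statutory arithmetic. Here is a minimal sketch, assuming Sony sought the full willful-infringement cap for each of the 283 cited works:

```python
# Statutory damages arithmetic from the reported USC complaint.
# Assumes one infringed work per cited video and the full willful cap per work.
works = 283
cap_per_work = 150_000  # 17 U.S.C. § 504(c) maximum for willful infringement

print(f"${works * cap_per_work:,}")  # $42,450,000 -- tens of millions, as reported
```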

Warner Music Group v. DSW. This case again involves a company’s use of music in its ads and on social media, along with posts by its influencers, without the proper licensing in place. Warner said that the musical works allegedly infringed by DSW were “some of the most popular sound recordings and musical compositions in the world.”

Although influencer marketing has helped so many companies grow on social media through the years, without the proper licensing it leaves these companies and influencers vulnerable to potential copyright infringement claims. Notably, Universal Music Group, one of the world’s largest record labels, pulled all of its music from TikTok due to licensing issues with the social media platform. This impacted videos featuring songs by Billie Eilish, Drake, Taylor Swift, and other big-name artists. UMG and TikTok eventually struck a deal, but while they were working things out, those sounds went silent on TikTok for nearly three months. So, what can influencers and apps do to limit their liability and risk of infringement?

First, social media companies can update their terms of service, which TikTok has done, to help their users avoid suits. Influencers posting promotional content, such as an advertisement, usually need two different kinds of licenses: a synchronization license and a master use license.

A synchronization or “sync” license is “required to pair a musical composition (i.e. the song) with visual content. It must be obtained from the copyright holder, which is usually the music publisher… To make things more complicated, a commercial song can often be co-owned by multiple copyright holders, which is why brands often partner with specialist music clearance agencies to obtain the necessary rights.”

A master use license is “needed if the brand wishes to use a specific recording of a song. It must be obtained from the owner of the recording – usually, a record label.”

By obtaining the proper licensing prior to posting, influencers and brands can post freely without the risk of copyright infringement, of their post being taken down, or of a lawsuit being filed against them. Platforms like TikTok enter licensing agreements with record labels so that, once properly licensed, songs can be used through the platform’s sound library.

So while social media has been the place where so many incredible artists have found their fame, once those artists record their hit albums, platforms must properly license the music from the record labels. Otherwise, they risk being taken to court for copyright infringement, impacting not only the platform but also its users, the artists, and the labels.

Regulating the Scroll: How Lawmakers Are Redefining Social Media for Minors

In today’s digital world, the question is no longer if minors use social media but how they use it. 

Social media platforms don’t just host young users; they shape their experiences through algorithmic feeds and “addictive” design features that keep kids scrolling long after bedtime. As the mental health toll becomes increasingly clear, lawmakers are stepping in to limit how much control these platforms have over young minds.

What is an “addictive” feed and why target it? 

Algorithms don’t just show content; they promote it. By tracking what users click, watch, or like, these feeds are designed to keep attention flowing. For minors, that means endless scrolling and constant engagement, typically at the expense of sleep, focus, and self-esteem.

Under New York’s Stop Addictive Feeds Exploitation (SAFE) for Kids Act, lawmakers found that:

 “social media companies have created feeds designed to keep minors scrolling for dangerously long periods of time.”

The Act defines an “addictive feed” as one that recommends or prioritizes content based on data linked to the user or their device.
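To make that definition concrete, here is a minimal, hypothetical sketch in Python contrasting a purely chronological feed, which uses no data about the user, with a personalized, engagement-ranked feed of the kind the Act targets. The `Post` structure, the topic-affinity scores, and the ranking rule are invented for illustration; this is not any platform’s actual algorithm.

```python
# Hypothetical sketch: chronological vs. personalized feed ranking.
# All names and weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    timestamp: float  # seconds since epoch
    topic: str

def chronological_feed(posts):
    """Orders posts purely by recency; uses no data about the user."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def personalized_feed(posts, user_topic_affinity):
    """Ranks posts by the user's tracked engagement with each topic.
    Because it prioritizes content based on data linked to the user,
    this is the kind of feed the SAFE for Kids Act calls "addictive"."""
    return sorted(posts, key=lambda p: user_topic_affinity.get(p.topic, 0.0), reverse=True)

posts = [Post(1, 1000.0, "sports"), Post(2, 2000.0, "dance"), Post(3, 3000.0, "gaming")]
affinity = {"dance": 0.9, "gaming": 0.4, "sports": 0.1}  # inferred from clicks, likes, watch time

print([p.post_id for p in chronological_feed(posts)])           # [3, 2, 1]
print([p.post_id for p in personalized_feed(posts, affinity)])  # [2, 3, 1]
```

The statutory line tracks the second function: the ordering depends on data linked to the user or their device, not merely on when the content was posted.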

The harms aren’t hypothetical. Studies link heavy social media use among teens with higher rates of depression, anxiety, and sleep disruption. Platforms often push notifications late at night or during school hours, times when young users are most vulnerable.

Features like autoplay, the “For You” page, endless “you may also like” suggestions, and quick likes or comments can trap kids in an endless scroll. What begins as fun and harmless entertainment soon becomes a routine they struggle to escape.


Key Developments in Legislation 

It’s no surprise that minors’ exposure to social media algorithms sits at the center of today’s policy debates. Over the past two years, state and federal lawmakers have introduced laws seeking to rein in the “addictive” design features of online platforms. While many of these measures face ongoing rulemaking or constitutional challenges, together they signal a national shift toward stronger regulation of social media’s impact on youth.

Let’s take a closer look at some of the major legal developments shaping this issue.

New York’s SAFE for Kids Act

New York’s Stop Addictive Feeds Exploitation (SAFE) for Kids Act represents one of the nation’s most ambitious efforts to regulate algorithmic feeds. The law prohibits platforms from providing “addictive feeds” to users under 18 unless the platform obtains verifiable parental consent or reasonably determines that the user is not a minor. It also bans push notifications and advertisements tied to those feeds between 12 a.m. and 6 a.m. unless parents explicitly consent. The rulemaking process remains ongoing, and enforcement will likely begin once these standards are finalized.
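As a rough illustration of what compliance logic for the overnight rule might look like, here is a hypothetical sketch; the function name, flags, and simple hour check are assumptions, not anything the Act or its pending regulations actually prescribe.

```python
# Hypothetical sketch of the SAFE for Kids Act's overnight notification rule:
# no addictive-feed push notifications to minors between 12 a.m. and 6 a.m.
# absent parental consent. Names and logic are illustrative assumptions.
from datetime import datetime

CURFEW_START_HOUR = 0  # 12 a.m.
CURFEW_END_HOUR = 6    # 6 a.m.

def may_send_push(is_minor: bool, has_parental_consent: bool, now: datetime) -> bool:
    """Returns True if a feed-related push notification may be sent."""
    if not is_minor or has_parental_consent:
        return True
    in_curfew = CURFEW_START_HOUR <= now.hour < CURFEW_END_HOUR
    return not in_curfew

print(may_send_push(True, False, datetime(2025, 6, 1, 1, 30)))  # False: 1:30 a.m., no consent
print(may_send_push(True, False, datetime(2025, 6, 1, 7, 0)))   # True: outside the curfew window
```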

The Kids Off Social Media Act (KOSMA)

At the federal level, the Kids Off Social Media Act (KOSMA) seeks to create national baselines for youth protections online. Reintroduced to Congress, the bill would:

  • Ban social media accounts for children under 13.
  • Prohibit algorithmic recommendation systems for users under 17.
  • Restrict social media access in schools during instructional hours.

Supporters argue the bill is necessary to counteract the addictive nature of social media design. Critics, including digital rights advocates, question whether such sweeping restrictions could survive First Amendment scrutiny or prove enforceable at scale. 

KOSMA remains pending in Congress but continues to shape the national conversation about youth and online safety.

California’s SB 976 

California’s Protecting Our Kids from Social Media Addiction Act (SB 976) reflects a growing trend of regulating design features rather than content. The law requires platforms to:

  • Obtain parental consent before delivering addictive feeds to minors.
  • Mute notifications for minors between midnight and 6 a.m. and during school hours unless parents opt in.

The statute is currently under legal challenge for potential First Amendment violations; however, the Ninth Circuit allowed enforcement of key provisions to proceed, suggesting that narrowly tailored design regulations aimed at protecting minors may survive early constitutional scrutiny.

Other State Efforts

Other states are following suit. According to the National Conference of State Legislatures (NCSL), at least 13 states have passed or proposed laws requiring age verification, parental consent, or restrictions on algorithmic recommendations for minors. Mississippi’s HB 1126, for example, requires both age verification and parental consent, and the U.S. Supreme Court allowed the law to remain in effect while litigation continues. 

Final Thoughts

We are at a pivotal moment. The era when children’s digital consumption went largely unregulated is coming to an end. The question now isn’t if regulation is on the horizon; it’s how it will take shape, and whether it can strike the right balance between safety, free expression, and innovation.

As lawmakers, parents, and platforms navigate this evolving landscape, one challenge remains constant: ensuring that efforts to protect minors from harmful algorithmic design do not come at the expense of their ability to connect, learn, and express themselves online.

What do you think is the right balance between protecting minors from harmful algorithmic exposure and preserving their access to social media as a space for connection and expression?

AI in the Legal Field

What is AI? 


AI, or Artificial Intelligence, refers to a set of technologies that enables computers to simulate human intelligence and perform tasks that typically require human cognition. Examples of AI applications include ChatGPT, Harvey.AI, and Google Gemini. These systems are designed to think and learn like humans, continually improving as users interact with them. They are trained on large amounts of data through algorithms, which allows them to improve their performance over time without being explicitly programmed for every task. Unlike Google, which provides search results based on web queries, ChatGPT generates human-like answers to prompts through machine learning, the process by which computers learn from examples.
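To illustrate “learning from examples” at toy scale, here is a minimal sketch using scikit-learn. The model is given no hand-written rules, only labeled examples, and it generalizes to new inputs; the sample messages and labels are invented, and real systems like ChatGPT are vastly larger but rest on the same basic idea.

```python
# Toy example of machine learning: the model learns patterns from labeled
# examples rather than being explicitly programmed for the task.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "please review the attached contract",   # routine
    "schedule a call for next week",         # routine
    "court deadline tomorrow, respond now",  # urgent
    "emergency filing due tonight",          # urgent
]
labels = ["routine", "routine", "urgent", "urgent"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)  # learns word-label associations from the examples

print(model.predict(["filing deadline tomorrow"]))  # expected: ['urgent']
```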

Cost-Benefit Analysis of AI in the Legal Field 


The primary areas where AI is being applied in the legal field include: reviewing documents for discovery, generally referred to as technology-assisted review (TAR); legal research through automated searches of case law and statutes; contract and legal document analysis; proofreading; and document organization.

One of the main reasons AI is used in the legal field is that it saves time. By having AI handle routine tasks, such as proofreading, attorneys can focus on more complex work. This increased efficiency may also enable law firms to reduce their staff headcount and save money. For example, without AI, proofreading a document can take hours, but with AI it can be completed in less than a minute, with errors identified and corrected instantly. As they say, “time is money.” AI is also valuable because it can produce high-quality work. Since AI doesn’t get tired or become distracted, it can deliver consistent, polished results. Tasks like document review, proofreading, and legal research can be tedious, but AI handles the initial “heavy lifting,” reducing stress and frustration for attorneys. As one saying goes, “No one said attorneys had to do everything themselves!”
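As a concrete, and purely hypothetical, sketch of what AI-assisted proofreading can look like, the snippet below sends a flawed sentence to a chat model through the OpenAI Python SDK. The model name and prompt are illustrative assumptions rather than any firm’s actual workflow, and as discussed below, confidential client text should never be pasted into such a tool without careful vetting.

```python
# Hypothetical sketch of AI-assisted proofreading via the OpenAI Python SDK.
# Model choice and prompt are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

draft = "The plaintiff allege that the the contract was breached on Janaury 5."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any capable chat model would do
    messages=[
        {"role": "system",
         "content": "You are a careful legal proofreader. Fix typos and grammar only; do not change meaning."},
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)
# Note: never paste confidential client information into an external AI
# service, and always have a human verify the output before filing.
```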

While AI has the potential to save law firms money, I do not think the promised cost reduction always materializes in the way one might anticipate. It may not be worth it for a law firm to use AI because the initial investment in AI technology can be substantial, ranging from $5,000 for simple models to over $500,000 for complex ones. After law firms purchase an AI system, they then have to train their staff to use it effectively and upgrade the software regularly. “These costs can be substantial and may take time to recoup.” Law firms might consider doing a cost-benefit analysis before determining whether AI is the right decision for them.
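Such a cost-benefit analysis can start as simple arithmetic. Below is a minimal break-even sketch using the cost range cited above; the hours saved, upkeep cost, and hourly value are invented assumptions a firm would replace with its own figures.

```python
# Back-of-the-envelope payback calculation. All inputs except the tool-cost
# range cited above are illustrative assumptions.
tool_cost = 50_000         # one-time cost, within the cited $5,000-$500,000 range
annual_upkeep = 10_000     # assumed staff training and software upgrades per year
hours_saved_per_week = 20  # assumed, firm-wide
hourly_value = 150         # assumed value of a freed-up attorney hour

annual_benefit = hours_saved_per_week * 52 * hourly_value  # $156,000
payback_years = tool_cost / (annual_benefit - annual_upkeep)
print(f"Annual benefit: ${annual_benefit:,}; payback in about {payback_years:.1f} years")
# Under these assumptions the tool pays for itself within a year, but halve
# the hours saved or pick a $500,000 system and the picture changes sharply.
```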

Problems With AI in the Legal Field 

One issue with AI applications is that they can perform tasks, such as writing and problem-solving, in ways that closely mimic human work. This makes it difficult for others to determine whether the work was created by AI or by a human. For example, drafting documents now requires less human input because AI can generate these documents automatically. This raises concerns about trust and reliability, as I and many others may prefer to have a human complete the work rather than relying on AI, due to skepticism about AI’s accuracy and dependability.

A major concern with the shift toward AI use is the potential spread of misinformation. Lawyers who rely on AI to draft documents without thoroughly reviewing what is produced may unknowingly present “hallucinations,” which are made-up or inaccurate pieces of information. This can lead to serious legal errors. Another critical issue is the risk of confidential client information being compromised. When lawyers put sensitive client data into AI systems to generate legal documents, they are potentially handing that data over to large technology companies. These companies usually prioritize their commercial interests, and without proper regulation, they could misuse client data for profit, potentially compromising client confidentiality, enabling fraud, and threatening the integrity of the judicial system.

A Case Where Lawyers Misused ChatGPT in Court 

As a law student who hopes to become a lawyer one day, it is concerning to see lawyers facing consequences for using AI. However, it is also understandable that a lawyer who does not use AI carefully risks being sanctioned. Two of the first lawyers to use AI in court and encounter “hallucinations” were Steven Schwartz and Peter LoDuca. The lawyers were representing a client in a personal injury lawsuit against an airline company. Schwartz used ChatGPT to help prepare a filing, allegedly unaware that the AI had fabricated several case citations. Specifically, the AI cited at least six cases, including Varghese v. China Southern Airlines and Shaboon v. Egypt Air, that the court found did not exist. The court said these cases had “bogus judicial decisions with bogus quotes and bogus internal citations.” As a result, the attorneys were each fined $5,000. Judge P. Kevin Castel said he might not have punished the attorneys if they had come “clean” about using ChatGPT to find the purported cases the AI cited.

AI Limitations in Court


As of February 2024, about 2% of the more than 1,600 United States District and Magistrate judges had issued 23 standing orders addressing the use of AI. These standing orders mainly restrict or set guidelines for using AI due to concerns about the technology’s accuracy. Some legal scholars have raised concerns that these orders might discourage attorneys and self-represented litigants from using AI tools. I think that instead of completely banning the use of AI, one possible approach could be requiring attorneys to disclose to the court when they use AI in their work. For example, U.S. District Judge Leslie E. Kobayashi of Hawaii wrote in her order, “The court directs that any party, whether appearing pro se or through counsel, who utilizes any generative artificial intelligence (AI) tool in the preparation of any documents to be filed with the court, must disclose in the document that AI was used and the specific AI tool that was used. The unrepresented party or attorney must further certify in the document that the person has checked the accuracy of any portion of the document drafted by generative AI, including all citations and legal authority.”

Ethicality of AI 

Judicial officers include judges, magistrates, and candidates for judicial office. Under Rule 2.5 of the Model Code of Judicial Conduct (MCJC), judicial officers have a responsibility to maintain competence and stay up to date with technology. Similarly, Rule 1.1 of the Model Rules of Professional Conduct (MRPC) states that lawyers must provide competent representation to their clients, which includes technological competence.

The National Center for State Courts (NCSC) emphasizes that both judicial officers and lawyers must have a basic understanding of AI and be aware of the risks associated with using AI for research and document drafting. Furthermore, judicial officers must uphold their duty of confidentiality. This means they should be cautious when they or their staff are entering sensitive or confidential information into AI systems for legal research or document preparation, ensuring that the information is not retained or misused by the AI platform. I was surprised to find out that while the NCSC provides these guidelines, they are not legally binding, only strongly recommended.

Members of the legal field should also be aware that there may be additional state-specific rules and obligations depending on the state where they practice. For instance, in April 2024, the New York State Bar Association established a Task Force on AI and issued a Report and Recommendations. The New York guidance notes that “attorneys [have a duty] to understand the benefits, not just the risks, of AI in providing competent and ethical legal representation and allows the use of AI tools to be considered in the reasonableness of attorney fees.” In New Jersey, “although lawyers do not have to tell a client every time they use AI, they may have an obligation to disclose the use of AI if the client cannot make an informed decision without knowing.” I think lawyers and judicial officers should be aware of their state’s rules for AI and make sure they are not blindly using it. 

Disclosing the Use of AI 

Some clients have explicitly requested that their lawyers refrain from using AI tools in their legal representation. For the clients who do not express their wishes, however, lawyers wrestle with the question of whether they should inform their clients that they use AI in their case matters. While there is no clear answer, some lawyers have decided to discuss their intended use of AI with their clients and obtain consent before doing so, which seems like a good idea.


Rule 1.4(a)(2) of the American Bar Association (ABA) Model Rules of Professional Conduct addresses attorney-client communication. It provides that a lawyer must “reasonably consult with the client about the means by which the client’s objectives are to be accomplished.” This raises the question of whether the rule covers the use of AI. If it does, how much AI assistance should be disclosed to clients? For instance, should using ChatGPT to draft a brief be disclosed, while using law students for the same task requires no disclosure? These are some of the ethical questions currently being debated in the legal field.
