The Public Domain Benefits Everyone – But Sometimes Copyright Holders Won’t Let Go

Every January, we celebrate the addition of formerly copyrighted works to the public domain. You’ve likely heard that this year’s crop of public domain newcomers includes Steamboat Willie, the 1928 cartoon that marked Mickey Mouse’s debut. When something enters the public domain, you’re free to copy, share, and remix it without fear of a copyright lawsuit. But the former copyright holders aren’t always willing to let go of their “property” so easily. That’s where trademark law enters the scene.

Unlike copyright, trademark protection has no fixed expiration date. Instead, it works on a “use it or lose it” model. With some exceptions, the law will grant trademark protection for as long as you keep using that mark to identify your products. This actually makes sense when you understand the difference between copyright and trademark. The idea behind copyright protection is to give creators a financial incentive to make new works that will benefit the public; that incentive needn’t be eternal to be effective. Trademark law, on the other hand, is about consumer protection. The function of a trademark is essentially to tell you who a product came from, which helps you make informed decisions and incentivizes quality control. If everyone were allowed to use that same mark after some fixed period, it would stop serving that function.

So, what’s the problem? Since trademarks don’t expire, we see former copyright holders of public domain works turn to trademark law as a way to keep exerting control. In one case we wrote about, a company claiming to own a trademark in the name of a public domain TV show called “You Asked For It” sent takedown demands targeting everything from episodes of the show, to remix videos using show footage, to totally unrelated uses of that common phrase. Other infamous examples include disputes over alleged trademarks in elements from Peter Rabbit and Tarzan. Now, with Steamboat Willie in the public domain, Disney seems poised to do the same. It’s already alluded to this in public statements, and in 2022, it registered a trademark for Walt Disney Animation Studios that incorporates a snippet from the cartoon.

The news isn’t all bad: trademark protection is in some ways more limited than copyright—it only applies to uses that are likely to confuse consumers about the use’s connection to the mark owner. And importantly, the U.S. Supreme Court has made clear that trademark law cannot be used to control the distribution of creative works, lest it spawn “a species of mutant copyright law” that usurps the public’s right to copy and use works in the public domain. (Of course, that doesn’t mean companies won’t try it.) So go forth and make your Steamboat Willie art, but beware of trademark lawyers waiting in the wings.

Cara Gagliano

The PRESS Act Will Protect Journalists When They Need It Most

Our government shouldn’t be spying on journalists. Nor should law enforcement agencies be able to force journalists to choose between identifying their confidential sources and going to prison.

To fix this, we need to change the law. Now, we’ve got our best chance in years. The House of Representatives has passed the Protect Reporters from Exploitive State Spying (PRESS) Act, H.R. 4250, and it’s one of the strongest federal shield bills for journalists we’ve seen. 

Take Action

Tell Congress To Pass the PRESS Act Now

The PRESS Act would do two critical things: first, it would bar federal law enforcement from surveilling journalists by gathering their phone, messaging, or email records. Second, it would strictly limit when the government can force a journalist to disclose their sources.

Since its introduction, the bill has had strong bipartisan support. Such “shield” laws for reporters also have vast support across the U.S.: 49 states and the District of Columbia have some type of law that prevents journalists from being forced to hand over their files to assist in criminal prosecutions or even private lawsuits.

While journalists are well protected in many states, federal law is currently lacking in protections. That’s had serious consequences for journalists, and for all Americans’ right to freely access information. 

Multiple Presidential Administrations Have Abused Laws To Spy On Journalists

The Congressional report on this bill details abuses against journalists by each of the three most recent presidential administrations. Federal law enforcement officials have improperly acquired reporters’ phone records on numerous occasions since 2004, under both Democratic and Republican administrations.

On at least 12 occasions since 1990, law enforcement threatened journalists with jail or home confinement for refusing to give up their sources; some reporters served months in jail. 

Elected officials must do more about these abuses than preside over after-the-fact apologies. 

PRESS Act Protections

The PRESS Act bars the federal government from surveilling journalists through their phones, email providers, or other online services. These digital protections are critical because they reflect how journalists operate in the field today. The bill restricts subpoenas aimed not just at journalists themselves but also at their phone and email providers. Its exceptions are narrow and targeted.

The PRESS Act also has an appropriately broad definition of the practice of journalism, covering both professional and citizen journalists. It applies regardless of a journalist’s political leanings or medium of publication. 

Government surveillance of journalists over the years has chilled journalists’ ability to gather news. It has also likely discouraged sources from coming forward, because their anonymity isn’t guaranteed. We can’t know the important stories that weren’t published, or weren’t published in time, because journalists or their sources feared retaliation.

In addition to EFF, the PRESS Act is supported by a wide range of press and rights groups, including the ACLU, the Committee to Protect Journalists, the Freedom of the Press Foundation, the First Amendment Coalition, the News Media Alliance, the Reporters Committee for Freedom of the Press, and many others. 

Our democracy relies on the rights of both professional journalists and everyday citizens to gather and publish information. The PRESS Act is a long overdue protection. We have sent Congress a clear message to pass it; please join us by sending your own email to the Senate using our links below. 

Take Action

Tell Congress To Pass the PRESS Act Now

Joe Mullin

It's Copyright Week 2024: Join Us in the Fight for Better Copyright Law and Policy

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.

Copyright law affects so much of our daily lives, and new technologies have only made everyone more and more aware of it. For example, while 1998’s Digital Millennium Copyright Act helped spur the growth of platforms for creating and sharing art, music, and literature, it also helped make the phrase “blocked due to a claim by the copyright holder” so ubiquitous.

Copyright law helps shape the movies we watch, the books we read, and the music we listen to. But it also impacts everything from who can fix a tractor, to what information is available to us, to how we communicate online. Given that power, it’s crucial that copyright law and policy serve everyone.

Unfortunately, that’s not the way it tends to work. Instead, copyright law is often treated as the exclusive domain of major media and entertainment industries. Individual artists rarely find that copyright does what it is meant to do, i.e., “promote the progress of science and useful arts,” by giving them a way to live off the work they’ve done. The promise of the internet was to help eliminate barriers between creators and audiences, so that voices traditional gatekeepers ignored could still find success. Through copyright, those gatekeepers have found ways to once again control what we see.

Twelve years ago, a diverse coalition of internet users, non-profit groups, and internet companies defeated the Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA), bills that would have forced internet companies to blacklist and block websites accused of hosting copyright-infringing content. These bills would have made censorship very easy, all in the name of copyright protection.

We continue to fight for a version of copyright that truly serves the public interest. And so, every year, EFF and a number of diverse organizations participate in Copyright Week. Each year, we pick five copyright issues to highlight and promote a set of principles that should guide copyright law and policy. This year’s issues are:

  • Monday: Public Domain
    The public domain is our cultural commons and a crucial resource for innovation and access to knowledge. Copyright should strive to promote, and not diminish, a robust, accessible public domain.
  • Tuesday: Device and Digital Ownership 
    As the things we buy increasingly exist either in digital form or as devices with software, we also find ourselves subject to onerous licensing agreements and technological restrictions. If you buy something, you should be able to truly own it – meaning you can learn how it works, repair it, remove unwanted features, or tinker with it to make it work in a new way.
  • Wednesday: Copyright and AI
    The growing availability of AI, especially generative AI trained on datasets that include copyrightable material, has raised new debates about copyright law. It’s important to remember the limitations of copyright law in giving the kind of protections creators are looking for.
  • Thursday: Free Expression and Fair Use 
    Copyright policy should encourage creativity, not hamper it. Fair use makes it possible for us to comment, criticize, and rework our common culture.
  • Friday: Copyright Enforcement as a Tool of Censorship
    Freedom of expression is a fundamental human right essential to a functioning democracy. Copyright should encourage more speech, not act as a legal cudgel to silence it.

Every day this week, we’ll be sharing links to blog posts and actions on these topics at https://www.eff.org/copyrightweek and at #CopyrightWeek on X, formerly known as Twitter.

Katharine Trendacosta

Tools to Protect Your Privacy Online | EFFector 36.1

New year, but EFF is still here to keep you up to date with the latest digital rights happenings! Be sure to check out our latest newsletter, EFFector 36.1, which covers topics ranging from our thoughts on AI watermarking, to changes in the tech landscape we’d like to see in 2024, to updates to our Street Level Surveillance hub and Privacy Badger.

EFFector 36.1 is out now—you can read the full newsletter here, or subscribe to get the next issue in your inbox automatically! You can also listen to the audio version of the newsletter below:

Listen on YouTube

Since 1990, EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock-full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

The No AI Fraud Act Creates Way More Problems Than It Solves

Creators have reason to be wary of the generative AI future. For one thing, while GenAI can be a valuable tool for creativity, it may also be used to deceive the public and disrupt existing markets for creative labor. Performers, in particular, worry that AI-generated images and music will become deceptive substitutes for human models, actors, or musicians.

Existing laws offer multiple ways for performers to address this issue. In the U.S., a majority of states recognize a “right of publicity”: the right to control if and how your likeness is used for commercial purposes. A limited version of this right makes sense—you should be able to prevent a company from running an advertisement that falsely claims you endorse its products—but the right of publicity has expanded well beyond its original boundaries, to potentially cover just about any speech that “evokes” a person’s identity.

In addition, every state prohibits defamation, harmful false representations, and unfair competition, though the parameters may vary. These laws provide time-tested methods to mitigate economic and emotional harms from identity misuse while protecting online expression rights.

But some performers want more. They argue that your right to control use of your image shouldn’t vary depending on what state you live in. They’d also like to be able to go after the companies that offer generative AI tools and/or host AI-generated “deceptive” content. Ordinary liability rules, including copyright, can’t be used against a company that has simply provided a tool for others’ expression. After all, we don’t hold Adobe liable when someone uses Photoshop to suggest that a president can’t read, or even for more serious deceptions. And Section 230 immunizes intermediaries from liability for defamatory content posted by users and, in some parts of the country, publicity rights violations as well. Again, that’s a feature, not a bug; immunity means it’s easier to stick up for users’ speech, rather than taking down or preemptively blocking any user-generated content that might lead to litigation. It’s a crucial protection not just for big players like Facebook and YouTube, but also for small sites, news outlets, email hosts, libraries, and many others.

Balancing these competing interests won’t be easy. Sadly, so far Congress isn’t trying very hard. Instead, it’s proposing “fixes” that will only create new problems.

Last fall, several Senators circulated a “discussion draft” bill, the NO FAKES Act. Professor Jennifer Rothman has an excellent analysis of the bill, including its most dangerous aspect: creating a new, and transferable, federal publicity right that would extend for 70 years past the death of the person whose image is purportedly replicated. As Rothman notes, under the law:

record companies get (and can enforce) rights to performers’ digital replicas, not just the performers themselves. This opens the door for record labels to cheaply create AI-generated performances, including by dead celebrities, and exploit this lucrative option over more costly performances by living humans, as discussed above.

In other words, a bill that is supposed to protect performers would, in the long run, just make it easier for record labels (for example) to acquire voice rights that they can use to avoid paying human performers for decades to come.

NO FAKES hasn’t gotten much traction so far, in part because the Motion Picture Association hasn’t supported it. But now there’s a new proposal: the “No AI FRAUD Act.” Unfortunately, Congress is still getting it wrong.

First, the Act purports to target abuse of generative AI to misappropriate a person’s image or voice, but the right it creates applies to an incredibly broad range of digital content: any “likeness” and/or “voice replica” that is created or altered using digital technology, software, an algorithm, etc. There’s not much that wouldn’t fall into that category—from pictures of your kid, to recordings of political events, to docudramas, parodies, political cartoons, and more. If it involves recording or portraying a human, it’s probably covered. Even more absurdly, it characterizes any tool that has a primary purpose of producing digital depictions of particular people as a “personalized cloning service.” Our iPhones are many things, but even Tim Cook would likely be surprised to learn he’s selling a “cloning service.”

Second, it characterizes the new right as a form of federal intellectual property. This linguistic flourish has the practical effect of putting intermediaries that host AI-generated content squarely in the litigation crosshairs. Section 230 immunity does not apply to federal IP claims, so performers (and anyone else who falls under the statute) will have free rein to sue anyone that hosts or transmits AI-generated content.

That, in turn, is bad news for almost everyone—including performers. If this law were enacted, all kinds of platforms and services could very well fear reprisal simply for hosting images or depictions of people—or any of the rest of the broad types of “likenesses” this law covers. Keep in mind that many of these services won’t be in a good position to know whether AI was involved in generating a video clip, song, etc., nor will they have the resources to pay lawyers to fight back against improper claims. The best way for them to avoid that liability would be to aggressively filter user-generated content, or refuse to support it at all.

Third, while the term of the new right is limited to ten years after death (still quite a long time), it’s combined with very confusing language suggesting that the right could extend well beyond that date if the heirs so choose. Notably, the legislation doesn’t preempt existing state publicity rights laws, so the terms could vary even more wildly depending on where the individual (or their heirs) reside.

Lastly, while the defenders of the bill incorrectly claim it will protect free expression, the text of the bill suggests otherwise. True, the bill recognizes a “First Amendment defense.” But every law that affects speech is limited by the First Amendment—that’s how the Constitution works. And the bill actually tries to limit those important First Amendment protections by requiring courts to balance any First Amendment interests “against the intellectual property interest in the voice or likeness.” That balancing test must consider whether the use is commercial, whether it is necessary for a “primary expressive purpose,” and whether it harms the individual’s licensing market. This seems to be an effort to import a cramped version of copyright’s fair use doctrine as a substitute for the rigorous scrutiny and analysis the First Amendment (and even the Copyright Act) requires.

We could go on, and we will if Congress decides to take this bill seriously. But it shouldn’t. If Congress really wants to protect performers and ordinary people from deceptive or exploitative uses of their images and voice, it should take a precise, careful, and practical approach that avoids potential collateral damage to free expression, competition, and innovation. The No AI FRAUD Act comes nowhere near the mark.

Corynne McSherry

Companies Make it Too Easy for Thieves to Impersonate Police and Steal Our Data

For years, people have been impersonating police online in order to get companies to hand over incredibly sensitive personal information. Reporting by 404 Media recently revealed that Verizon handed over the address and phone logs of an individual to a stalker pretending to be a police officer, armed with a PDF of a fake warrant. Worse, the imposter wasn’t particularly convincing. His request was missing a form that his state requires for search warrants. He used the name of a police officer who did not exist in the department he claimed to be from. And he used a Proton Mail account, which anyone online can use, rather than an official government email address.

Likewise, bad actors have used breached law enforcement email accounts or domain names to send fake warrants, subpoenas, or “Emergency Data Requests” (which police can send without judicial oversight to get data quickly in supposedly life-or-death situations). Impersonating police to get sensitive information from companies isn’t just the realm of stalkers and domestic abusers; according to Motherboard, bounty hunters and debt collectors have also used the tactic.

We have two very big entwined problems. The first is the “collect it all” business model of too many companies, which creates vast reservoirs of personal information stored in corporate data servers, ripe for police to seize and thieves to steal. The second is that too many companies fail to prevent thieves from stealing data by pretending to be police.

Companies have to make it harder for fake “officers” to get access to our sensitive data. For starters, they must do better at scrutinizing warrants, subpoenas, and emergency data requests when they come in. These requirements should be spelled out clearly in a public-facing privacy policy, and all employees who deal with data requests from law enforcement should receive training in how to adhere to these requirements and spot fraudulent requests. Fake emergency data requests raise special concerns, because real ones depend on the discretion of both companies and police—two parties with less than stellar reputations for valuing privacy. 
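
That “spot fraudulent requests” advice can be made concrete. Below is a minimal, hypothetical Python sketch of the kind of first-pass automated triage a provider could run on an incoming request, keyed to the red flags from the Verizon case above (a free webmail sender, a missing state form, a nonexistent officer). Everything in it is an illustrative assumption rather than any company’s actual vetting pipeline, and no automated check replaces human review and a callback to the agency’s published phone number.

```python
# Hypothetical first-pass triage for purported law enforcement data requests.
# This is an illustrative sketch, not any provider's real vetting process.

FREE_WEBMAIL_DOMAINS = {"protonmail.com", "proton.me", "gmail.com", "outlook.com"}

def first_pass_red_flags(sender_email: str,
                         has_required_state_form: bool,
                         officer_in_agency_roster: bool) -> list[str]:
    """Return red flags that should route a request to senior human review."""
    flags = []
    domain = sender_email.rsplit("@", 1)[-1].lower()

    # The Verizon imposter used a Proton Mail address rather than an official
    # government address. (Real agencies may also use .us or municipal domains,
    # so in practice this would be an allowlist, not a simple suffix check.)
    if domain in FREE_WEBMAIL_DOMAINS or not domain.endswith(".gov"):
        flags.append(f"sender domain {domain!r} is not a recognized government domain")

    # His request was missing a form his state requires for search warrants.
    if not has_required_state_form:
        flags.append("required state warrant form is missing")

    # He named an officer who did not exist in the department he claimed.
    if not officer_in_agency_roster:
        flags.append("named officer not found in the agency's roster")

    return flags

if __name__ == "__main__":
    for flag in first_pass_red_flags("officer@protonmail.com",
                                     has_required_state_form=False,
                                     officer_in_agency_roster=False):
        print("RED FLAG:", flag)
```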

Matthew Guariglia

EFF’s 2024 In/Out List

Since EFF was formed in 1990, we’ve been working hard to protect digital rights for all. And as each year passes, we’ve come to understand the challenges and opportunities a little better, as well as what we’re not willing to accept. 

Accordingly, here’s what we’d like to see a lot more of, and a lot less of, in 2024.
IN

1. Affordable and future-proof internet access for all

EFF has long advocated for affordable, accessible, and future-proof internet access for all. We cannot accept a future where the quality of our internet access is determined by geographic, socioeconomic, or otherwise divided lines. As the online aspects of our work, health, education, entertainment, and social lives increase, EFF will continue to fight for a future where the speed of your internet connection doesn’t stand in the way of these crucial parts of life.

2. A privacy first agenda to prevent mass collection of our personal information

Many of the ills of today’s internet have a single thing in common: they are built on a system of corporate surveillance. Vast numbers of companies collect data about who we are, where we go, what we do, what we read, who we communicate with, and so on. They use our data in thousands of ways and often sell it to anyone who wants it—including law enforcement. So whatever online harms we want to alleviate, we can do it better, with a broader impact, if we do privacy first.

3. Decentralized social media platforms to ensure full user control over what we see online

While the internet began as a loose affiliation of universities and government bodies, the digital commons has been privatized and consolidated into a handful of walled gardens. But in the past few years, there’s been an accelerating swing back toward decentralization as users grow fed up with the concentration of power and the prevalence of privacy and free expression violations. Many people are fleeing to smaller, independently operated projects, and we will continue walking users through decentralized services in 2024.

4. End-to-end encrypted messaging services, turned on by default and available always

Private communication is a fundamental human right. In the online world, the best tool we have to defend this right is end-to-end encryption. But governments across the world are trying to erode it by scanning all content, all the time. As we’ve said many times, there is no middle ground on content scanning and no “safe backdoor” if the internet is to remain free and private. Mass scanning of people’s messages is wrong and at odds with human rights.

5. The right to free expression online with minimal barriers and without borders

New technologies and widespread internet access have radically enhanced our ability to express ourselves, criticize those in power, gather and report the news, and make, adapt, and share creative works. Vulnerable communities have also found space to safely meet, grow, and make themselves heard without being drowned out by the powerful. No government or corporation should have the power to decide who gets to speak and who doesn’t. 

OUT

1. Use of artificial intelligence and automated systems for policing and surveillance

Predictive policing algorithms perpetuate historic inequalities, hurt neighborhoods already subject to intense surveillance and policing, and quite simply don’t work. EFF has long called for a ban on predictive policing, and we’ll continue to monitor the rapid rise of law enforcement’s use of machine learning. This includes harvesting the data that other “autonomous” devices collect and automating important decision-making processes that guide policing and dictate people’s futures in the criminal justice system.

2. Ad surveillance based on the tracking of our online behaviors 

Our phones and other devices process vast amounts of highly sensitive personal information that corporations collect and sell for astonishing profits. This incentivizes online actors to collect as much of our behavioral information as possible. In some circumstances, every mouse click and screen swipe is tracked and then sold to ad tech companies and the data brokers that service them. This often impacts marginalized communities the most. Data surveillance is a civil rights problem, and legislation to protect data privacy can help protect civil rights. 

3. Speech and privacy restrictions under the guise of "protecting the children"

For years, government officials have raised concerns that online services don’t do enough to tackle illegal content, particularly child sexual abuse material. Their solution? Bills that ostensibly seek to make the internet safer, but instead achieve the exact opposite by requiring websites and apps to proactively prevent harmful content from appearing on messaging services. This leads to the universal scanning of all user content, all the time, and functions as a 21st-century form of prior restraint—violating the very essence of free speech.

4. Unchecked cross-border data sharing disguised as cybercrime protections 

Personal data must be safeguarded against exploitation by any government to prevent abuse of power and transnational repression. Yet, the broad scope of the proposed UN Cybercrime Treaty could be exploited for covert surveillance of human rights defenders, journalists, and security researchers. As the Treaty negotiations approach their conclusion, we are advocating against granting broad cross-border surveillance powers for investigating any alleged crime, ensuring it doesn't empower regimes to surveil individuals in countries where criticizing the government or other speech-related activities are wrongfully deemed criminal.

5. Internet access being used as a bargaining chip in conflicts and geopolitical battles

Given the proliferation of the internet and its use in pivotal social and political moments, governments are very aware of their power to cut off that access. The internet enables the flow of information to remain active and alert to new realities. In wartime, being able to communicate may ultimately mean the difference between life and death. Shutting down access aids state violence and suppresses free speech. Access to the internet shouldn’t be used as a bargaining chip in geopolitical battles.

Paige Collings