EFF to Court: The DMCA Didn't Create a New Right of Attribution, You Shouldn't Either

7 hours 54 minutes ago

Amid a wave of lawsuits targeting how AI companies use copyrighted works to train large language models that generate new works, a peculiar provision of copyright law is suddenly in the spotlight: Section 1202 of the Digital Millennium Copyright Act (DMCA). Section 1202 restricts intentionally removing or changing copyright management information (CMI), such as a signature on a painting or a credit attached to a photograph. Passed in 1998, the rule was supposed to help rightsholders identify potentially infringing uses of their works and encourage licensing.

OpenAI and Microsoft used code from GitHub as part of the training data for their LLMs, along with billions of other works. A group of anonymous GitHub contributors sued, arguing that those LLMs generated new snippets of code that were substantially similar to theirs—but with the CMI stripped. Notably, they did not claim that the new code was copyright infringement—they are relying solely on Section 1202 of the DMCA. Their problem? The generated code is different from their original work, and courts across the US have adopted an “identicality rule,” on the theory that Section 1202 is supposed to apply only when CMI is removed from existing works, not when it’s simply missing from a new one.

It may sound like an obscure legal question, but the outcome of this battle—currently before the Ninth Circuit Court of Appeals—could have far-reaching implications beyond generative AI technologies. If the rightsholders are correct, Section 1202 effectively creates a freestanding right of attribution, imposing potential liability even for non-infringing uses, such as fair uses, if those new uses simply omit the CMI. While many fair users might ultimately escape liability under other limitations built into Section 1202, the looming threat of litigation, backed by the risk of high and unpredictable statutory penalties, would be enough to pressure many defendants to settle. Indeed, an entire legal industry of “copyright trolls” has emerged to exploit this dynamic, with no corresponding benefit to creativity or innovation.

Fortunately, as we explain in a brief filed today, the text of Section 1202 doesn’t support such an expansive interpretation. The provision repeatedly refers to “works” and “copies of works”—not “substantially similar” excerpts or new adaptations—and its focus on “removal or alteration” clearly contemplates actions taken with respect to existing works, not new ones. Congress could have chosen otherwise and written the law differently. Wisely, it did not, thereby ensuring that rightsholders couldn’t leverage the omission of CMI to punish or unfairly threaten otherwise lawful re-uses of a work.

Given the proliferation of copyrighted works in virtually every facet of daily life, the last thing any court should do is give rightsholders a new, freestanding weapon against fair uses. As the Supreme Court once observed, copyright is a “tax on readers for the purpose of giving a bounty to writers.” That tax—including the expense of litigation—can be an important way to encourage new creativity, but it should not be levied unless the Copyright Act clearly requires it.

Corynne McSherry

California A.B. 412 Stalls Out—A Win for Innovation and Fair Use

9 hours 41 minutes ago

A.B. 412, the flawed California bill that threatened small developers in the name of AI “transparency,” has been delayed and turned into a two-year bill. That means it won’t move forward in 2025—a significant victory for innovation, freedom to code, and the open web.

EFF opposed this bill from the start. A.B. 412 tried to regulate generative AI, not by looking at the public interest, but by mandating training data “reading lists” designed to pave the way for new copyright lawsuits, many of which are filed by large content companies. 

Transparency in AI development is a laudable goal. But A.B. 412 failed to offer a fair or effective path to get there. Instead, it gave companies large and small the impossible task of differentiating between what content was copyrighted and what wasn’t—with severe penalties for anyone who couldn’t meet that requirement. That would have entrenched the largest AI companies while freezing out smaller and non-commercial developers who might want to tweak or fine-tune AI systems for the public good. 

The most interesting work in AI won’t necessarily come from the biggest companies. It will come from small teams fine-tuning models for accessibility and privacy, and building tools that identify AI harms. And some of the most valuable work will be done using source code under permissive licenses. 

A.B. 412 ignored those facts, and would have punished some of the most worthwhile projects. 

The Bill Blew Off Fair Use Rights

The question of whether—and how much—AI training qualifies as fair use is being actively litigated right now in federal courts. And so far, courts have found much of this work to be fair use. In a recent landmark AI case, Bartz v. Anthropic, for example, a federal judge found that AI training work is “transformative—spectacularly so.” He compared it to how search engines copy images and text in order to provide useful search results to users.

Copyright is federally governed. When states try to rewrite the rules, they create confusion—and more litigation that doesn’t help anyone.

If lawmakers want to revisit AI transparency, they need to do so without giving rights-holders a tool to weaponize copyright claims. That means rejecting A.B. 412’s approach—and crafting laws that protect speech, competition, and the public’s interest in a robust, open, and fair AI ecosystem. 

Joe Mullin

Amazon Ring Cashes in on Techno-Authoritarianism and Mass Surveillance

13 hours 53 minutes ago

Ring founder Jamie Siminoff is back at the helm of the surveillance doorbell company, and with him comes the surveillance-first, privacy-last approach that made Ring one of the most maligned tech devices. Not only is the company reintroducing new versions of old features that would allow police to request footage directly from Ring users, it is also introducing a new feature that would allow police to request live-stream access to people’s home security devices. 

This is a bad, bad step for Ring and the broader public. 

Ring is rolling back many of the reforms it has made in the last few years by easing police access to footage from millions of homes in the United States. This is a grave threat to civil liberties. After all, police have used Ring footage to spy on protesters, and have obtained footage without a warrant or the user’s consent. It is easy to imagine that law enforcement officials will use their renewed access to Ring information to find people who have had abortions or to track down people for immigration enforcement.

Siminoff has announced in a memo seen by Business Insider that the company will now be reimagined from the ground up to be “AI first”—whatever that means for a home security camera that lets you see who is ringing your doorbell. We fear that this may signal the introduction of video analytics or face recognition to an already problematic surveillance device. 

It was also reported that employees at Ring will have to show proof that they use AI in order to get promoted. 

Not content with these bad new features, Ring is also planning to roll back some of the necessary reforms it has made: it is partnering with Axon to build a new tool that would let police request Ring footage directly from users, and would also let users consent to police livestreaming directly from their devices. 

After years of serving as the eyes and ears of police, the company was compelled by public pressure to make a number of necessary changes. It introduced end-to-end encryption, ended its formal partnerships with police, which were an ethical minefield, and retired the tool that let police request footage directly from customers. Now Ring is pivoting back to being a tool of mass surveillance. 

Why now? It is hard to believe the company is betraying the trust of its millions of customers in the name of “safety” when violent crime in the United States is at near-historic lows. It’s probably not about its customers—the FTC had to compel Ring to take its users’ privacy seriously. 

No, this is most likely about Ring cashing in on the rising tide of techno-authoritarianism, that is, authoritarianism aided by surveillance tech. Too many tech companies want to profit from our shrinking liberties. Google likewise recently ended an old ethical commitment that prohibited it from profiting off of surveillance and warfare. Companies are locking down billion-dollar contracts by selling their products to the defense sector or police.

Shame on Ring.

Matthew Guariglia

We Support Wikimedia Foundation’s Challenge to UK’s Online Safety Act

1 day 6 hours ago

The Electronic Frontier Foundation and ARTICLE 19 strongly support the Wikimedia Foundation’s legal challenge to the categorization regulations of the United Kingdom’s Online Safety Act.

The Foundation – the non-profit that operates Wikipedia and other Wikimedia projects – announced its legal challenge earlier this year, arguing that the regulations endanger Wikipedia and the global community of volunteer contributors who create the information on the site. The High Court of Justice in London will hear the challenge on July 22 and 23.

EFF and ARTICLE 19 agree with the Foundation’s argument that, if enforced, the Category 1 duties – the OSA’s most stringent obligations – would undermine the privacy and safety of Wikipedia’s volunteer contributors, expose the site to manipulation, and divert essential resources from protecting people and improving the site. For example, because the law requires Category 1 services to allow users to block all unverified users from editing any content they post, the law effectively requires the Foundation to verify the identity of many Wikipedia contributors. However, that compelled verification undermines the privacy that keeps the site’s volunteers safe.

Wikipedia is the world’s most trusted and widely used encyclopedia, with users across the world accessing its wealth of information and participating in free information exchange through the site. The OSA must not be allowed to diminish it and jeopardize the volunteers on which it depends.

Beyond the issues raised in Wikimedia’s lawsuit, EFF and ARTICLE 19 emphasize that the Online Safety Act poses a serious threat to freedom of expression and privacy online, both in the U.K. and globally. Several key provisions of the law become operational July 25, and some companies already are rolling out age-verification mechanisms which undermine free expression and privacy rights of both adults and minors.

David Greene

Radio Hobbyists, Rejoice! Good News for LoRa & Mesh

2 days 10 hours ago

A set of radio devices and technologies is opening the door to new and revolutionary forms of communication. These have the potential to break down our over-reliance on traditional network hierarchies, and present collaborative alternatives where resistance to censorship, control, and surveillance is baked into the network topology itself. Here, we look at a few of these technologies and what they might mean for the future of networked communications.

The idea of what is broadly referred to as mesh networking isn’t new: the resilience and scalability of mesh technology have seen it adopted in router and IoT protocols for decades. What’s new are cheap devices that can be used without a radio license to communicate over (relatively) large distances, or LOng RAnge, thus the moniker LoRa.

Although it uses different operating frequencies in different countries, LoRa works in essentially the same way everywhere. It uses Chirp Spread Spectrum to broadcast digital communications across a physical landscape, with a range of several kilometers in the right environmental conditions. When other capable devices pick up a signal, they can pass it along to other nodes until the message reaches its destination—all without relying on a single centralized host. 

These communications have very low bandwidth—often less than a few kilobytes per second at a distance—and use very little power. You won’t be browsing the web or streaming video over LoRa, but it is useful for sending messages in a wide range of situations where traditional infrastructure is lacking or intermittent, and where communicating with others over dispersed or changing physical terrain is essential. For instance, a growing body of research shows how Search and Rescue (SAR) teams can greatly benefit from LoRa, specifically when coupled with GPS sensors, and especially when complemented by line-of-sight LoRa repeaters.
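For a rough sense of scale: assuming a long-range LoRa data rate of about 1 kbps (one kilobit per second), a 200-byte text message is 1,600 bits and needs on the order of 1.6 seconds of airtime. These are illustrative numbers only, since actual rates vary with spreading factor and channel settings, but they show why LoRa is a good fit for short messages rather than web pages or video.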

Meshtastic

The most popular of these indie LoRa communication systems is Meshtastic by far. For hobbyists just getting started in the world of LoRa mesh communications, it is the easiest way to get up, running, and texting with others in your area that also happen to have a Meshtastic-enabled device. It also facilitates direct communication with other nodes using end-to-end encryption. And by default, a Meshtastic device will repeat messages to others if originating from 3 or fewer nodes (or “hops”) away. This means messages tend to propagate farther with the power of the mesh collaborating to make delivery possible. As a single-application use of LoRa, it is an exciting experiment to take part in.
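To make the hop-limit idea concrete, here is a minimal sketch in Python of hop-limited flood forwarding, the general technique that Meshtastic-style meshes use to propagate messages. It is an illustration only, not Meshtastic’s firmware or API; the packet fields and names are hypothetical.

    # Illustrative sketch of hop-limited flood forwarding in a mesh.
    # NOT the Meshtastic implementation; names and structure are hypothetical.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Packet:
        packet_id: str      # unique ID so nodes can ignore duplicates
        payload: bytes      # the (normally encrypted) message body
        hop_limit: int = 3  # decremented at each rebroadcast

    def deliver_locally(pkt: Packet) -> None:
        # Hand the payload to the local application (here we just print it).
        print(f"delivered {pkt.packet_id} ({len(pkt.payload)} bytes)")

    class Node:
        def __init__(self, name: str) -> None:
            self.name = name
            self.seen: set = set()  # packet IDs this node has already handled

        def receive(self, pkt: Packet) -> Optional[Packet]:
            """Process an incoming packet; return a copy to rebroadcast, or None."""
            if pkt.packet_id in self.seen:
                return None                 # duplicate: already handled
            self.seen.add(pkt.packet_id)
            deliver_locally(pkt)            # every node can deliver and relay
            if pkt.hop_limit <= 0:
                return None                 # hop budget exhausted: stop relaying
            # Rebroadcast with one fewer hop remaining so the flood terminates.
            return Packet(pkt.packet_id, pkt.payload, pkt.hop_limit - 1)

    # Example: with a hop limit of 3, a message can be relayed by up to three
    # intermediate nodes before it stops propagating.
    a, b = Node("A"), Node("B")
    relayed = a.receive(Packet("msg-001", b"hello mesh", hop_limit=3))
    if relayed is not None:
        b.receive(relayed)

A real mesh also has to handle acknowledgements, per-channel encryption, and radio airtime limits, but the duplicate-suppression and hop-counting logic above is the core of why a small mesh can carry a message well beyond any single radio’s range.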

Reticulum

While Reticulum is often put into the same category as Meshtastic, and although both enable communication over LoRa, the comparison breaks down quickly after that. Reticulum is not a single application, but an entire network stack that can be arbitrarily configured to connect through existing TCP/IP, the anonymizing I2P network, directly through a local WiFi connection, or through LoRa radios. Arbitrary applications can be built on top of it, such as messaging (using its LXMF message transfer protocol), voice calls, file transfer, and lightweight, text-only browsing. And those are only a few of the applications that have already been developed; the possibilities are endless.

Although there are a number of community hubs run by Reticulum enthusiasts that you can join, you don’t have to join any of them: you can build your own Reticulum network with your own and your friends’ devices and transports, locally over LoRa or remotely over traditional infrastructure, and bridge them as you please. Nodes themselves are universally addressed and sovereign, meaning they are free to connect anywhere without losing the universally unique address that defines them. All communications between nodes are encrypted end-to-end, using a strong choice of cryptographic primitives. And after more than a decade of active development, Reticulum recently reached the noteworthy milestone of a 1.0 release. It’s a very exciting ecosystem to be a part of, and we can’t wait to see the community develop it even further. A number of clients are available to start exploring.
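For a flavor of what building on Reticulum looks like, here is a minimal sketch modeled on the announce examples in the RNS (Reticulum Network Stack) Python documentation. The application name and aspect below are hypothetical, and the exact API should be checked against the current Reticulum manual before relying on it.

    # Minimal sketch: create and announce a Reticulum destination.
    # Adapted from the pattern in Reticulum's own minimal examples;
    # "sketch_app" and "chat" are hypothetical names.

    import RNS

    # Start Reticulum using the local config (~/.reticulum by default),
    # which defines which transports to use: TCP/IP, I2P, LoRa radios, etc.
    reticulum = RNS.Reticulum()

    # Create a cryptographic identity for this node.
    identity = RNS.Identity()

    # Create an inbound, single (end-to-end encrypted) destination for a
    # hypothetical application "sketch_app" with the aspect "chat".
    destination = RNS.Destination(
        identity,
        RNS.Destination.IN,
        RNS.Destination.SINGLE,
        "sketch_app",
        "chat",
    )

    # Announce the destination so other nodes can learn a path to it,
    # whatever mix of transports sits between them and us.
    destination.announce()

The notable design choice is that the application code never cares whether the path to a peer runs over LoRa, WiFi, TCP/IP, or some bridge of all three; that is decided by each node’s transport configuration, not by the application.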

Resilient Infrastructure

On a more somber note, let’s face it: we live in an uncertain world. With the frequency of environmental disasters, political polarization, and infrastructure attacks increasing, the stability of networks we have traditionally relied upon is far from assured.

Yet even with the world as it is, developers are creating new communications networks that have the potential to help in the unexpected situations we might find ourselves in. Not only are these technologies built to be useful and resilient, they also empower individuals by circumventing censorship and platform control, giving people a way to support each other by sharing resources.

In that way, these technologies can be seen as inheritors of the hopefulness and experimentation—and yes, fun!—that was so present in the early internet. They offer a promising path forward for building our way out of tech dystopia.

Bill Budington

EFF and 80 Organizations Call on EU Policymakers to Preserve Net Neutrality in the Digital Networks Act

2 days 10 hours ago

As the European Commission prepares an upcoming proposal for a Digital Networks Act (DNA), a growing network of groups is raising serious concerns about the resurgence of “fair share” proposals from major telecom operators. The original idea was to impose network usage fees on certain companies, requiring them to pay ISPs for delivering traffic. We have said it before and we’ll say it again: there is nothing fair about this “fair share” proposal, which could undermine net neutrality and hurt consumers by changing how content is delivered online. Now the EU Commission is toying with an alternative idea: a dispute resolution mechanism meant to foster commercial agreements between tech firms and telecom operators.

EFF recently joined a broad group of more than 80 signatories, from civil society organizations to audio-visual companies, in a joint statement aimed at preserving net neutrality in the DNA.

In the letter, we argue that the push to introduce a mandatory dispute resolution mechanism into EU law would pave the way for content and application providers (CAPs) to pay network fees for delivering traffic. These ideas, recycled from 2022, are being marketed as necessary for funding infrastructure, but the real cost would fall on the open internet, competition, and users themselves.

This isn't just about arcane telecom policy—it’s a battle over the future of the internet in Europe. If the DNA includes mechanisms that force payments from CAPs, we risk higher subscription costs, fewer services, and less innovation, particularly for European startups, creatives, and SMEs. Worse still, there’s no evidence of market failure to justify such regulatory intervention. Regulators like BEREC have consistently found that the interconnection market is functioning smoothly. What’s being proposed is nothing short of a power grab by legacy telecom operators looking to resurrect outdated, monopolistic business models. Europe has long championed an open, accessible internet—now’s the time to defend it.

Jillian C. York

🤕 A Surveillance Startup in Damage Control | EFFector 37.8

2 days 11 hours ago

We're a little over halfway through the year! Which... could be good or bad depending on your outlook... but nevermind that—EFF is here to keep you updated on the latest digital rights news, and we've got you covered with an all-new EFFector!

With issue 37.8, we're covering a recent EFF investigation into AI-generated police reports, a secret deal to sell flight passenger data to the feds (thanks data brokers), and why mass surveillance cannot be fixed with a software patch. 

Don't forget to check out our audio companion to EFFector as well! We're interviewing staff about some of the important work that they're doing. This time, EFF's Associate Director of Activism Sarah Hamid explains the harms caused by ALPRs and what you can do to fight back. Listen now on YouTube or the Internet Archive.

Listen to EFFector 37.8 - A Surveillance Startup in Damage Control

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Podcast Episode: Finding the Joy in Digital Security

2 days 21 hours ago

Many people approach digital security training with furrowed brows, as an obstacle to overcome. But what if learning to keep your tech safe and secure was consistently playful and fun? People react better to learning, and retain more knowledge, when they're having a good time. It doesn’t mean the topic isn’t serious – it’s just about intentionally approaching a serious topic with joy.

(You can also find this episode on the Internet Archive and on YouTube.) 

That’s how Helen Andromedon approaches her work as a digital security trainer in East Africa. She teaches human rights defenders how to protect themselves online, creating open and welcoming spaces for activists, journalists, and others at risk to ask hard questions and learn how to protect themselves against online threats. She joins EFF’s Cindy Cohn and Jason Kelley to discuss making digital security less complicated, more relevant, and more joyful to real users, and encouraging all women and girls to take online safety into their own hands so that they can feel fully present and invested in the digital world. 

In this episode you’ll learn about:

  • How the Trump Administration’s shuttering of the United States Agency for International Development (USAID) has led to funding cuts for digital security programs in Africa and around the world, and why she’s still optimistic about the work
  • The importance of helping women feel safe and confident about using online platforms to create positive change in their communities and countries
  • Cultivating a mentorship model in digital security training and other training environments
  • Why diverse input creates training models that are accessible to a wider audience
  • How one size never fits all in digital security solutions, and how Dungeons & Dragons offers lessons to help people retain what they learn 

Helen Andromedon – a moniker she uses to protect her own security – is a digital security trainer in East Africa who helps human rights defenders learn how to protect themselves and their data online and on their devices. She played a key role in developing the Safe Sisters project, which is a digital security training program for women. She’s also a UX researcher and educator who has worked as a consultant for many organizations across Africa, including the Association for Progressive Communications and the African Women’s Development Fund.

Resources: 

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

HELEN ANDROMEDON: I'll say it bluntly. Learning should be fun. Even if I'm learning about your tool, maybe you design a tutorial that is fun for me to read through, to look at. It seems like that helps with knowledge retention.
I've seen people responding to activities and trainings that are playful. And yet we are working on a serious issue. You know, we are developing an advocacy campaign, it's a serious issue, but we are also having fun.

CINDY COHN: That's Helen Andromedon talking about the importance of joy and play in all things, but especially when it comes to digital security training. I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY: And I'm Jason Kelley, EFF's activism director. This is our podcast, How to Fix the Internet.

CINDY COHN: This show is all about envisioning a better digital world for everyone. Here at EFF, we often specialize in thinking about worst case scenarios and of course, jumping in to help when bad things happen. But the conversations we have here are an opportunity to envision the better world we can build if we start to get things right online.

JASON KELLEY: Our guest today is someone who takes a very active role in helping people take control of their digital lives and experiences.

CINDY COHN: Helen Andromedon - that's a pseudonym by the way, and a great one at that – is a digital security trainer in East Africa. She trains human rights defenders in how to protect themselves digitally. She's also a UX researcher and educator, and she's worked as a consultant for many organizations across Africa, including the Association for Progressive Communications and the African Women's Development Fund.
She also played a key role in developing the Safe Sisters project, which is a digital security training, especially designed for women. Welcome Helen. Thank you so much for joining us.

HELEN ANDROMEDON: Thanks for having me. I've been a huge fan of the tools that came out of EFF and working with Ford Foundation. So yeah, it's such a blast to be here.

CINDY COHN: Wonderful. So we're in a time when a lot of people around the world are thinking more seriously than ever about how to protect their privacy and security, and that's, you know, from companies, but increasingly from governments and many, many other potential bad actors.
You know, there's no one size fits all training, as we know. And the process of determining what you need to protect and from whom you need to protect it is different for everybody. But we're particularly excited to talk to you, Helen, because you know that's what you've been doing for a very long time. And we want to hear how you think about, you know, how to make the resources available to people and make sure that the trainings really fit them. So can you start by explaining what the Safe Sisters project is?

HELEN ANDROMEDON: It's a program that came out of a collaboration amongst friends, but friends who were also working in different organizations and also were doing trainings. In the past, what we would have would be, we would send out an application: Hey, there's a training going on. But there was a different number of women that would actually apply to this fellowship.
It would always be very unequal. So what we decided to do is really kind of like experimenting is say, what if we do a training but only invite, women and people who are activists, people who are journalists, people who are really high risk, and give them a space to ask those hard questions because there are so many different things that come out of suffering online harassment and going through that in your life, you, when you need to share it, sometimes you do need a space where you don't feel judged, where you can kind of feel free to engage in really, really traumatic topics. So this fellowship was created, it had this unique percentage of people that would apply and we started in East Africa.
I think now, because of what has happened in the last, I guess, three months, it has halted our ability to run the program in as many regions that need it. Um, but Safe Sister, I think what I see, it is a tech community of people who are able to train others or help others solve a problem.
So what problems do, I mean, so for example, I, I think I left my, my phone in the taxi. So what do I do? Um, how do I find my phone? What happens to all my data? Or maybe it could be a case of online harassment where there's some sort of revenge from the other side, from the perpetrator, trying to make the life of the victim really, really difficult at the moment.
So we needed people to be able to have solutions available to talk about and not just say, okay, you are a victim of harassment. What should I do? There's nothing to do, just go offline. No, we need to respond, but many of us don't have the background in ICT, uh, for example, in my region. I think that it is possible now to get a, a good background in IT or ICT related courses, um, up to, um, you know, up to PhD level even.
But sometimes I've, in working with Safe Sister, I've noticed that even such people might not be aware of the dangers that they are facing. Even when they know OPSEC and they're very good at it. They might not necessarily understand the risks. So we decided to keep working on the content each year, every time we can run the program, work on the content: what are the issues, currently, that people are facing? How can we address them through an educational fellowship, which is very, very heavy on mentorship. So mentorship is also a thing that we put a lot of stress on because again, we know that people don't necessarily have the time to take a course or maybe learn about encryption, but they are interested in it. So we want to be able to serve all the different communities and the different threat models that we are seeing.

CINDY COHN: I think that's really great and I, I wanna, um, drill in a couple of things. So first thing you, uh, ICT, Information and Communications Technologies. Um, but what I, uh, what I think is really interesting about your approach is the way the fellowship works. You know, you're kind of each one teach one, right?
You're bringing in different people from communities. And if you know, most of us, I think as a, as a model, you know, finding a trusted person who can give you good information is a lot easier than going online and finding information all by yourself. So by kind of seeding these different communities with people who've had your advanced training, you're really kind of able to grow who gets the information. Is that part of the strategy to try to have that?

HELEN ANDROMEDON: It's kind of like two ways. So there is the way where we, we want people to have the information, but also we want people to have the correct information.
Because there is so much available, you can just type in, you know, into your URL and say, is this VPN trusted? And maybe you'll, you'll find a result that isn't necessarily the best one.
We want people to be able to find the resources that are guaranteed by, you know, EFF or by an organization that really cares about digital rights.

CINDY COHN: I mean, that is one of the problems of the current internet. When I started out in the nineties, there just wasn't information. And now really the role of organizations like yours is sifting through the misinformation, the disinformation, just the bad information to really lift up, things that are more trustworthy. It sounds like that's a lot of what you're doing.

HELEN ANDROMEDON: Yeah, absolutely. How I think it's going, I think you, I mean, you mentioned that it's kind of this cascading wave of, you know, knowledge, you know, trickling down into the communities. I do hope that's where it's heading.
I do see people reaching out to me who have been at Safe Sisters, um, asking me, yo Helen, which training should I do? You know, I need content for this. And you can see that they're actively engaging still, even though they went through the fellowship like say four years ago. So that I think is like evidence that maybe it's kind of sustainable, yeah.

CINDY COHN: Yeah. I think so. I wanted to drill down on one other thing you said, which is of course, you mentioned the, what I think of as the funding cuts, right, the Trump administration cutting off money for a lot of the programs like Safe Sisters, around the world. And I know there are other countries in Europe that are also cutting support for these kinds of programs.
Is that what you mean in terms of what's happened in the last few months?

HELEN ANDROMEDON: Yeah. Um, it's really turned around what our expectations for the next couple of years say, yeah, it's really done so, but also there's an opportunity for growth to recreate how, you know, what kind of proposals to develop. It's, yeah, it's always, you know, these things. Sometimes it's always just a way to change.

CINDY COHN: I wanna ask one more question. I really will let Jason ask some at some point, but, um, so what does the world look like if we get it right? Like if your work is successful, and more broadly, the internet is really supporting these kind of communities right now, what does it look like for the kind of women and human rights activists who you work with?

HELEN ANDROMEDON: I think that most of them would feel more confident to use those platforms for their work. So that gives it an extra boost because then they can be creative about their actions. Maybe it's something, maybe they want, you know, uh, they are, they are demonstrating against, uh, an illegal and inhumane act that has passed through parliament.
So online platforms. If they could, if it could be our right and if we could feel like the way we feel, you know, in the real world. So there's a virtual and a real world, you're walking on the road and you know you can touch things.
If we felt ownership of our online spaces so that you feel confident to create something that maybe can change. So in, in that ideal world, it would be that the women can use online spaces to really, really boost change in their communities and have others do so as well because you can teach others and you inspire others to do so. So it's, like, pops up everywhere and really makes things go and change.
I think also for my context, because I've worked with people in very repressive regimes where it is, the internet can be taken away from you. So it's things like the shutdowns, it's just ripped away from you. Uh, you can no longer search, oh, I have this, you know, funny thing on my dog. What should I do? Can I search for the information? Oh, you don't have the internet. What? It's taken away from you. So if we could have a way where the infrastructure of the internet was no longer something that was, like, in the hands of just a few people, then I think – So there's a way to do that, which I've recently learned from speaking to people who work on these things. It's maybe a way of connecting to the internet to go on the main highway, which doesn't require the government, um, the roadblocks and maybe it could be a kind of technology that we could use that could make that possible. So there is a way, and in that ideal world, it would be that, so that you can always find out, uh, what that color is and find out very important things for your life. Because the internet is for that, it's for information.
Online harassment, that one. I, I, yeah, I really would love to see the end of that. Um, just because, so also acknowledging that it's also something that has shown us. As human beings also something that we do, which is not be very kind to others. So it's a difficult thing. What I would like to see is that this future, we have researched it, we have very good data, we know how to avoid it completely. And then we also draw the parameters, so that everybody, when something happens to you, doesn't make you feel good, which is like somebody harassing you that also you are heard, because in some contexts, uh, even when you go to report to the police and you say, look, this happened to me. Sometimes they don't take it seriously, but because of what happens to you after and the trauma, yes, it is important. It is important and we need to recognize that. So it would be a world where you can see it, you can stop it.

CINDY COHN: I hear you and what I hear is that, that the internet should be a place where it's, you know, always available, and not subject to the whims of the government or the companies. There's technologies that can help do that, but we need to make them better and more widely available. That speaking out online is something you can do. And organizing online is something you can do. Um, but also that you have real accountability for harassment that might come as a response. And that could be, you know, technically protecting people, but also I think that sounds more like a policy and legal thing where you actually have resources to fight back if somebody, you know, misuses technology to try to harass you.

HELEN ANDROMEDON: Yeah, absolutely. Because right now the cases get to a point where it seems like depending on the whim of the person in charge, maybe if they go to, to report it, the case can just be dropped or it's not taken seriously. And then people do harm to themselves also, which is on, like, the extreme end and which is something that's really not, uh, nice to happen and should, it shouldn't happen.

CINDY COHN: It shouldn't happen, and I think it is something that disproportionately affects women who are online or marginalized people. Your vision of an internet where people can freely gather together and organize and speak is actually available to a lot of people around the world, but, but some people really don't experience that without tremendous blowback.
And that's, um, you know, that's some of the space that we really need to clear out so that it's a safe space to organize and make your voice heard for everybody, not just, you know, a few people who are already in power or have the, you know, the technical ability to protect themselves.

JASON KELLEY: We really want to, I think, help talk to the people who listen to this podcast and really understand and are building a better future and a better internet. You know, what kind of things you've seen when you train people. What are you thinking about when you're building these resources and these curriculums? What things come up like over and over that maybe people who aren't as familiar with the problems you've seen or the issues you've experienced.

HELEN ANDROMEDON: Yeah, I mean the, Hmm, I, maybe they could be a couple of, of reasons that I think, um. What would be my view is, the thing that comes up in trainings is of course, you know, hesitation. There's this new thing and I'm supposed to download it. What is it going to do to my laptop?
My God, I share this laptop. What is it going to do? Now they tell me, do this, do this in 30 minutes, and then we have to break for lunch. So that's not enough time to actually learn, because then you have to practice, or you could practice, you could throw in a practice session, but then you leave this person and that person, as is normal, forgets.
Very normal. It happens. So the issue sometimes is that kind of hesitation to play with the tech toys. And I think that it's good, because we are cautious and we want to protect this device that was really expensive to get. Maybe it's borrowed, maybe it's secondhand.
I won't get, you know, like so many things that come up in our day to day because of, of the cost of things.

JASON KELLEY: You mentioned like what do you do when you leave your phone in a taxi? And I'll say that, you know, a few days ago I couldn't find my phone after I went somewhere and I completely freaked out. I know what I'm doing usually, but I was like, okay, how do I turn this thing off?
And I'm wondering like that taxi scenario, is that, is that a common one? Are there, you know, others that people experience there? I, I know you mentioned, you know, internet shutoffs, which happen far too frequently, but a lot of people probably aren't familiar with them. Is that a common scenario? You have to figure out what to do about, like, what are the things that pop up occasionally that, people listening to this might not be as aware of.

HELEN ANDROMEDON: So losing a device or a device malfunctioning is like the top one and internet shutdown is down here because they are not, they're periodic. Usually it's when there's an election cycle, that's when it happens. After that, you know, you sometimes, you have almost a hundred percent back to access. So I think I would put losing a device, destroying a device.
Okay, now what do I do now for the case of the taxi? The phone in the taxi. First of all, the taxi is probably crowded. So you think that phone will most likely not be returned.
So maybe there's intimate photos. You know, there's a lot, there's a lot that, you know, can be. So then if this person doesn't have a great password, which is usually the case because there is not so much emphasis when you buy a device. There isn't so much emphasis on, Hey, take time to make a strong password now. Now it's better. Now obviously there are better products available that teach you about device security as you are setting up the phone. But usually you buy it, you switch it on, so you don't really have the knowledge. This is a better password than that. Or maybe don't forget to put a password, for example.
So that person responding to that case would be now asking if they had maybe the find my device app, if we could use that, if that could work, like as you were saying, there's a possibility that it might, uh, bing in another place and be noticed and for sure taken away. So there's, it has to be kind of a backwards, a learning journey to say, let's start from ground zero.

JASON KELLEY: Let's take a quick moment to say thank you to our sponsor. How to Fix The Internet is supported by the Alfred p Sloan Foundation's program in public understanding of science and technology enriching people's lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also wanna thank EFF members and donors. You are the reason we exist.
You can become a member for just $25 and for a little more, you can get some great, very stylish gear. The more members we have, the more power we have in state houses, courthouses and on the streets.
EFF has been fighting for digital rights for decades, and that fight is bigger than ever. So please, if you like what we do, go to eff.org/pod to donate.
We also wanted to share that our friend Cory Doctorow has a new podcast. Listen to this.  [Who Broke the Internet trailer]
And now back to our conversation with Helen Andromedon.

CINDY COHN: So how do you find the people who come and do the trainings? How do you identify people who would be good fellows or who need to come in to do the training? Because I think that's its own problem, especially, you know, the Safe Sisters is very spread out among multiple countries.

HELEN ANDROMEDON: Right now it has been a combination of partners saying, Hey, we have an idea, and then seeing where the issues are.
As you know, a fellowship needs resources. So if there is an interest because of the methodology, at least, um, let's say it's a partner in Madagascar who is working on digital rights. They would like to make sure that their community, maybe staff and maybe people that they've given sub-grants to. So that entire community, they want to make sure that it is safe, they can communicate safely. Nothing, you know, is leaked out, they can work well. And they're looking for, how do we do this? We need trainers, we need content. we need somebody who understands also learning separate from the resources. So I think that the Safe Sister Fellowship also is something that, because it's like you can pick it up here and you can design it in, in whatever context you have.
I think that has made it like be stronger. You take it, you make it your own. So it has happened like that. So a partner has an interest. We have the methodology, we have the trainers, and then we have the tools as well. And then that's how it happens.

CINDY COHN: What I'm hearing here is that, you know, there's already a pretty strong network of partners across Africa and the communities you serve. There's groups and, you know, we know this from EFF, 'cause we hear from them as well, that there are, there are actually a pretty well developed set of groups that are doing digital activism and human rights defenders using technology already across, uh, Africa and the rest of the communities. And that you have this network and you are the go-to people, uh, when people in the network realize they need a higher level of security thinking and training than they had. Does that sound right?

HELEN ANDROMEDON: Sound right? Yeah. A higher level of being aware. And usually it comes down to how do we keep this information safe? Because we are having incidents. Yeah.

CINDY COHN: Do you have an incident that you could, you explain?

HELEN ANDROMEDON: Oh, um, queer communities, say, an incident of, executive director being kidnapped. And it was, we think, that it's probably got to do with how influential they were and what kind of message they were sending. So it, it's apparent. And then so shortly after that incident, there's a break-in into the, the office space. Now that one is actually quite common, um, especially in the civic space. So that one then, uh, if they, they were storing maybe case files, um, everything was in a hard copy. All the information was there, receipts, checks, um, payment details. That is very, very tragic in that case.
So in that, what we did, because this incident had happened in multiple places, we decided to run a program for all the staff that was, um, involved in their day to day. So we could do it like that and make sure that as a response to what happened, everybody gets some education. We have some quizzes, we have some tests, we have some community. We keep engaged, and maybe that would help. And yeah, they'll be more prepared in case it happens again.

CINDY COHN: Oh yeah. And this is such an old, old issue. You know, when we were doing the encryption fight in the nineties, we had stories of people in El Salvador and Guatemala where the office gets raided and the information gets in the hands of the government, whoever the opposition is, and then other people start disappearing and getting targeted too, because their identities are revealed in the information that gets seized. And that sounds like the very same pattern that you're still seeing.

HELEN ANDROMEDON: Yeah there's a lot to consider for that case. Uh, cloud saving, um, we have to see if there's somebody that can, there's somebody who can host their server. It's very, yeah, it's, it's interesting for that case.

CINDY COHN: Yeah. I think it's an ongoing issue and there are better tools than we had in the nineties, but people need to know about them and, and actually using them is not, it's not easy. It's, you, you have to actually think about it.

HELEN ANDROMEDON: Yeah, I, I don't know. I've seen a model that works, so if it's a tool, it's great. It's working well. I've seen it, uh, with I think the Tor Project, because the Tor Project has user communities. What it appears to be doing is engaging people with training, so doing safety trainings, and then they get value from, from using your tool, because they get to have all this information, not only about your tool, but about safety. So that's a good model to build user communities and then get your tool used. I think this is also a problem.

CINDY COHN: Yeah. I mean, this is a, another traditional problem is that the trainers will come in and they'll do a training, but then nobody really is trained well enough to continue to use the tool.
And I see you, you know, building networks and building community and also having, you know, enough time for people to get familiar with and use these tools so that they won't just drop it after the training's over. It sounds like you're really thinking hard about that.

HELEN ANDROMEDON: Yeah. Um, yeah, I think that we have many opportunities and because the learning is so difficult to cultivate and we don't have the resources to make it long term. Um, so yes, you do risk having all the information forgotten. Yes.

JASON KELLEY: I wanna just quickly emphasize that some of the scenarios, Cindy, you've talked about, and Helen you just mentioned, I think a lot of: potential break-ins, harassment, kidnapping, and it's, it's really, it's awful, but I think this is one of the things that makes this kind of training so necessary. I know that this seems obvious to many people listening and, and to the folks here, but I think it really just needs to be emphasized that these are serious issues. And that's why you can't make a one-size-fits-all training, because these are real problems that, you know, someone might not have to deal with in one country and they might have a regular problem with in another. Is there a kind of difference that you can just clarify about how you would train, for example, groups of women that are experiencing one thing when they, you know, need digital security advice or help versus let's say human rights defenders? Is the training completely different when you do that, or is it just really kind of emphasizing the same things about like protecting your privacy, protecting your data, using certain tools, things like that?

HELEN ANDROMEDON: Yeah. Jason, let me, let me first respond to your first comment about the tools. So one size fits all, obviously is wrong. Maybe get more people of diversity working on that tool and they'll give you their opinion because the development is a process. You don't just develop a tool - you have time to change, modify, test. Do I use that? Like if you had somebody like that in the room, they would tell you if you had two, that would be great because now you have two different points of evidence. And keep mixing. And then, um, I know it's like it's expensive. Like you have to do it one way and then get feedback, then do it another way. But I, I think just do more of that. Um, yeah. Um, how do I train? So the training isn't that different. There are some core concepts that we keep and then, so if it, if I had like five days, I would do like one or two days. The more technical, uh, concepts of digital safety, which everybody has to do, which is, look, this is my device, this is how it works, this is how I keep it safe. This is my account, this is how it works. This is how I keep it safe.
And then when you have more time, you can dive into the personas, let's say it's a journalist, so is there a resource for, and this is how then you pull a resource and then you show it is there a resource which identify specific tools developed for journalists? Oh, maybe there is, there is something that is like a panic button that one they need. So you then you start to put all these things together and in the remaining time you can kind of like hone into those differences.
Now for women, um, it would be … So if it's HRDs and it's mixed, I still would cover cyber harassment because it affects everyone. For women it would, would be slightly different because maybe we could go into self-defense, we could go into how to deal, we could really hone in on the finer points of responding to online harassment because for their case, it's more likely, because you did a threat model, it's more likely because of their agenda and because of the work that they do. So I think that would be how I would approach the two.

JASON KELLEY: And one, one quick thing that I just, I want to mention that you brought up earlier is, um, shared devices. There's a lot of, uh, solutionism in government, and especially right now with this sort of, assumption that if you just assume everyone has one device, if you just say everyone has their phone, everyone has their computer, you can, let's say, age verify people. You can say, well, kids who use this phone can't go to this website, and adults who use this other phone can go to this website. And this is a regular issue we've seen where there's not an awareness that people are buying secondhand devices a lot, people are sharing devices a lot.

HELEN ANDROMEDON: Yeah, absolutely. Shared devices is the assumption always. And then we do get a few people who have their own devices. So Jason, I just wanted to add one more factor that could be bad. Yeah. For the shared devices, because of the context, and the regions that I'm in, you have also the additional culture and religious norms, which sometimes makes it like you don't have liberty over your devices. So anybody at any one time, if they're your spouse or your parent, they can just take it from you, and demand that you let them in. So it's not necessarily that you could all have your own device, but the access to that device, it can be shared.

CINDY COHN: So as you look at the world of, kind of, tools that are available, where are the gaps? Where would you like to see better tools or different tools or tools at all, um, to help protect and empower the communities you work with?

HELEN ANDROMEDON: We need a solution for the internet shutdowns because, because sometimes it could have an, it could have health repercussions, you could have a need, a serious need, and you don't have access to the internet. So I don't know. We need to figure that one out. Um, the technology is there, as you mentioned earlier, before, but you know, it needs to be, like, more developed and tested. It would be nice to have technology that responds or gives victim advice. Now I've seen interventions. By case. Case by case. So many people are doing them now. Um, you, you know, you, you're right. They verify, then they help you with whatever. But that's a slow process.
Um, you're processing the information. It's very traumatic. So you need good advice. You need to stay calm, think through your options, and then make a plan, and then do the plan. So that's the kind of advice. Now I think there are apps because maybe I'm not using them or I don't, maybe that means they're not well known as of now.
Yeah. But that's technology I would like to see. Um, then also every, every, everything that is available. The good stuff. It's really good. It's really well written. It's getting better – more visuals, more videos, more human, um, more human like interaction, not that text. And mind you, I'm a huge fan of text, um, and like the GitHub text.
That's awesome. Um, but sometimes for just getting into the topic you need a different kind of, uh, ticket. So I don't know if we can invest in that, but the content is really good.
Practice would be nice. So we need practice. How do we get practice? That's a question I would leave to you. How do you practice a tool on your own? It's good for you, how do you practice it on your own? So it's things like that helping the, the person onboard, doing resources to help that transition. You want people to use it at scale.

JASON KELLEY: I wonder if you can talk a bit about that moment when you're training someone and you realize that they really get it. Maybe it's because it's fun, or maybe it's because they just sort of finally understand like, oh, that's how this works. Is that something, you know, I assume it's something you see a lot because you're clearly, you know, an experienced and successful teacher, but it's, it's just such a lovely moment when you're trying to teach someone something.

HELEN ANDROMEDON: Yeah, I mean, I can't speak for everybody, but I'll speak to myself. So there are some things that surprise me sitting in a class, in a workshop room, or reading a tutorial or watching how the internet works and reading about the cables, but also reading about electromagnetism. All those things were so different from, what were we talking about? Which is like how internet and civil society, all that stuff. But that thing, the science of it, the way it is, that should, for me, I think that it's enough because it's really great.
But then, um. So say we are, we are doing a session on how the internet works in relation to internet shutdowns. Is it enough to just talk about it? Are we jumping from problem to solution, or can we give some time? So that the person doesn't forget, can we give some time to explain the concept? Almost like moving their face away from the issue for a little bit and like, it's like a deception.
So let's talk about electromagnetism that you won't forget. Maybe you put two and two together about the fiber optic cables. Maybe you answer the correction, the, the right, uh, answer to a question in, at a talk. So it's, it's trying to make connections because we don't have that background. We don't have a tech background.
I just discovered Dungeons and Dragons at my age. So we don't have that tech liking tech, playing with it. We don't really have that, at least in my context. So get us there. Be sneaky, but get us there.

JASON KELLEY: You have to be a really good dungeon master. That's what I'm hearing. That's very good.

HELEN ANDROMEDON: yes.

CINDY COHN: I think that's wonderful and, and I agree with you about, like, bringing the joy, making it fun, and making it interesting on multiple levels, right?
You know, learning about the science as well as, you know, just how to do things that just can add a layer of connection for people that helps keep them engaged and keeps them in it. And also when stuff goes wrong, if you actually understand how it works under the hood, I think you're in a better position to decide what to do next too.
So you've gotta, you know, it not only makes it fun and interesting, it actually gives people a deeper level of understanding that can help 'em down the road.

HELEN ANDROMEDON: Yeah, I agree. Absolutely.

JASON KELLEY: Yeah, Helen, thanks so much for joining us – this has been really helpful and really fun.
Well, that was really fun and really useful for people I think, who are thinking about digital security and people who don't spend much time thinking about digital security, but maybe should start, um, something that she mentioned that, that, that you talked about, the Train the Trainer model, reminded me that we should mention our Surveillance Self-Defense guides that, um, are available at ssd.eff.org.
That we talked about a little bit. They're a great resource as well as the Security Education Companion website, which is securityeducationcompanion.org.
Both of these are great things that came up and that people might want to check out.

CINDY COHN: Yeah, it's wonderful to hear someone like Helen, who's really out there in the field working with people, say that these guides help her. Uh, we try to be kind of the brain trust for people all over the world who are doing these trainings, but also make it easy if. If you're someone who's interested in learning how to do trainings, we have materials that'll help you get started. Um, and as, as we all know, we're in a time when more people are coming to us and other organizations seeking security help than ever before.

JASON KELLEY: Yeah, and unfortunately there's less resources now, so I think we, you know, in terms of funding, right, there's less resources in terms of funding. So it's important that people have access to these kinds of guides, and that was something that we talked about that kind of surprised me. Helen was really, I think, optimistic about the funding cuts, not obviously about them themselves, but about what the opportunities for growth could be because of them.

CINDY COHN: Yeah, I think this really is what resilience sounds like, right? You know, you get handed a situation in which you lose, you know, a lot of the funding support that you're gonna do, and she's used to pivoting and she pivots towards, you know, okay, these are the opportunities for us to grow, for us to, to build new baselines for the work that we do. And I really believe she's gonna do that. The attitude just shines through in the way that she approaches adversity.

JASON KELLEY: Yeah. Yeah. And I really loved, while we're thinking about the, the parts that we're gonna take away from this, I really loved the way she brought up the need for people to feel ownership of the online world. Now, she was talking about infrastructure specifically in that moment, but this is something that's come up quite a bit in our conversations with people.

CINDY COHN: Yeah, her framing of how important the internet is to people all around the world, you know, the work that our friends at Access now and others do with the Keep It On Coalition to try to make sure that the internet doesn't go down. She really gave a feeling for like just how vital and important the internet is, for people all over the world.

JASON KELLEY: Yeah. And even though, you know, some of these conversations were a little bleak in the sense of, you know, protecting yourself from potentially bad things, I was really struck by how she sort of makes it fun in the training and sort of thinking about, you know, how to get people to memorize things. She mentioned magnetism and fiber optics, and just like the science behind it. And it really made me, uh, think more carefully about how I'm gonna talk about certain aspects of security and, and privacy, because she really gets, I think, after years of training what sticks in people's mind.

CINDY COHN: I think that's just so important. I think that people like Helen are this really important kind of connective tissue between the people who are deep in the technology and the people who need it. And you know that this is its own skill and she just, she embodies it. And of course, the joy she brings really makes it alive.

JASON KELLEY: And that's our episode for today. Thanks so much for joining us. If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listen or feedback. And while you're there, you can become a member and donate, maybe even pick up some of the merch and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of Beat Mower with Reed Mathis, and How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology. We'll see you next time. I'm Jason Kelley.

CINDY COHN: And I'm Cindy Cohn.

MUSIC CREDITS: This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by its creators: Drops of H2O (The Filtered Water Treatment) by J.Lang. Sound design, additional music and theme remixes by Gaetan Harris.

 

Josh Richman

Despite Supreme Court Setback, EFF Fights On Against Online Age Mandates

3 days 10 hours ago

The Supreme Court’s recent decision in Free Speech Coalition v. Paxton did not end the legal debate over age-verification mandates for websites. Instead, it’s a limited decision: the court’s legal reasoning only applies to age restrictions on sexual materials that minors do not have a legal right to access. Although the ruling reverses decades of First Amendment protections for adults to access lawful speech online, the decision does not allow states or the federal government to impose broader age-verification mandates on social media, general audience websites, or app stores.

At EFF, we continue to fight age-verification mandates in the many other contexts in which we see them throughout the country and the world. These “age gates” remain a threat to the free speech and privacy rights of both adults and minors.

Importantly, the Supreme Court’s decision does not approve of age gates when they are imposed on speech that is legal for minors and adults.

The court’s legal reasoning in Free Speech Coalition v. Paxton depends in all relevant parts on the Texas law only blocking minors’ access to speech to which they had no First Amendment right to access in the first place—what has been known since 1968 as “harmful to minors” sexual material. Although laws that limit access to certain subject matters are typically required to survive “strict scrutiny,” the Texas law was subject instead to the less demanding “intermediate scrutiny” only because the law was denying minors access to this speech that was unprotected for them. The Court acknowledged that having to prove age would create an obstacle for adults to access speech that is protected for them. But this obstacle was merely “incidental” to the lawful restriction on minors’ access. And “incidental” restrictions on protected speech need only survive intermediate scrutiny.

To be clear, we do not agree with this result, and vigorously fought against it. The Court wrongly downplayed the very real and significant burdens that age verification places on adults. And we disagree with numerous other doctrinal aspects of the Court’s decision. The court had previously recognized that age-verification schemes significantly burden adults’ First Amendment rights and had protected adults’ constitutional rights. So Paxton is a significant loss of internet users’ free speech rights and a marked retreat from the court’s protections for online speech.

The decision does not allow states or the federal government to impose broader age-verification mandates

But the decision is limited to the specific context in which the law seeks to restrict access to sexual materials. The Texas law avoided strict scrutiny only because it directly targeted speech that is unprotected as to minors. You can see this throughout the opinion:

  • The foundation of the Court’s decision was the history, tradition, and precedent that allows states to “prevent children from accessing speech that is obscene to children, rather than a more generalized concern for child welfare.”
  • The Court’s entire ruling rested on its finding that “no person – adult or child – has a First Amendment right to access speech that is obscene to minors without first submitting proof of age.”
  • The Court explained that “because the First Amendment permits States to prohibit minors from accessing speech that is obscene to them, it likewise permits States to employ the ordinary and appropriate means of enforcing such a prohibition.” The permissibility of the age verification requirement was thus dependent on the unprotected nature of the speech.
  • The only reason the law could be justified without reference to protected speech, a requirement for a content-neutral law subject to only intermediate scrutiny, is that it did not “regulate the content of protected speech” either “‘on its face’ or in its justification.” As the Court explained, “where the speech in question is unprotected, States may impose ‘restrictions’ based on ‘content’ without triggering strict scrutiny.”
  • Intermediate scrutiny was applied only because “[a]ny burden experienced by adults is therefore only incidental to the statute's regulation of activity that is not protected by the First Amendment.”
  • But strict scrutiny remains “the standard for reviewing the direct targeting of fully protected speech.”

Only one sentence in Free Speech Coalition v. Paxton addressing the restriction of First Amendment rights is not cabined by the language of unprotected harmful to minors speech. The Court wrote: “And, the statute does not ban adults from accessing this material; it simply requires them to verify their age before accessing it on a covered website.” But that sentence was entirely surrounded by and necessarily referred to the limited situation of a law burdening only access to harmful to minors sexual speech.

We and the others fighting online age restrictions still have our work cut out for us. The momentum to widely adopt and normalize online age restrictions is strong. But Free Speech Coalition v. Paxton did not approve of age gates when they are imposed on speech that adults and minors have a legal right to access. And EFF will continue to fight for all internet users’ rights to speak and receive information online.

David Greene

EFF Tells Virginia Court That Constitutional Privacy Protections Forbid Cops from Finding out Everyone Who Searched for a Keyword

1 week 1 day ago

This post was co-authored by EFF legal intern Noam Shemtov.

We are in a constant dialogue with Internet search engines, ranging from the mundane to the confessional. We ask search engines everything: What movies are playing (and which are worth seeing)? Where’s the nearest clinic (and how do I get there)? Who’s running in the sheriff’s race (and what are their views)? These online queries can give insight into our private details and innermost thoughts, but police increasingly access them without adhering to longstanding limits on government investigative power.

A Virginia appeals court is poised to review such a request in a case called Commonwealth v. Clements. In Clements, police sought evidence under a “reverse-keyword warrant,” a novel court order that compels search engines like Google to hand over information about every person who has looked up a word or phrase online. While the trial judge correctly recognized the privacy interest in our Internet queries, he overlooked the other wide-ranging harms that keyword warrants enable and upheld the search.

But as EFF and the ACLU explained in our amicus brief on appeal, reverse keyword warrants simply cannot be conducted in a lawful way. They invert privacy protections, threaten free speech and inquiry, and fundamentally conflict with the principles underlying the Fourth Amendment and its analog in the Virginia Constitution. The court of appeals now has a chance to say so and protect the rights of Internet users well beyond state lines.

To comply with a keyword warrant, a search engine has to trawl through its entire database of user queries to pinpoint the accounts or devices that made a responsive search. For a dominant service like Google, that means billions of records. Such a wide dragnet will predictably pull in people with no plausible connection to a crime under investigation if their searches happened to include keywords police are interested in.

Critically, investigators seldom have a suspect in mind when they seek a reverse-keyword warrant. That isn’t surprising. True to their name, these searches “work in reverse” from the traditional investigative process. What makes them so useful is precisely their ability to identify Internet users on the sole basis of what they searched online. But what makes a search technique convenient to the government does not always make it constitutional. Quite the opposite: inherently suspicionless dragnets are anathema to the Constitution.

The Fourth Amendment forbids “exploratory rummaging”—in fact, it was drafted in direct response to British colonial soldiers’ practice of indiscriminately searching people’s homes and papers for evidence of their opposition to the Crown. To secure a lawful warrant, police must have a specific basis to believe evidence will be found in a given location. They must also describe that location in some detail and say what evidence they expect to find there. It’s hard to think of a less specific description than “all the Internet searches in the world” or a weaker hunch than “whoever committed the crime probably looked up search term x.” Because those airy assertions are all law enforcement can marshal in support of keyword warrants, they are “tantamount to high-tech versions of the reviled ‘general warrants’ that first gave rise to the . . . Fourth Amendment” and Virginia’s even stronger search-and-seizure provision.

What’s more, since keyword warrants compel search engine companies to hand over records about anyone anywhere who looked up a particular search term within a given timeframe, they effectively make a suspect out of every person whose online activity falls within the warrant’s sweep. As one court has said about related geofences, this approach “invert[s] probable cause” and “cannot stand.”

Keyword warrants’ fatal flaws are even more drastic considering that privacy rights apply with special force to searches of items—like diaries, booklists, and Internet search queries—that reflect a person’s free thought and expression. As both law and lived experience affirm, the Internet is “the most important place[] . . . for the exchange of views.” Using it—and using keyword searches to navigate the practical infinity of its contents—is “indispensable to participation in modern society.” We shouldn’t have to engage in that core endeavor with the fear that our searches will incriminate us, subject to police officers’ discretion about what keywords are worthy of suspicion. That outcome would predictably chill people from accessing information about sensitive and important topics like reproductive health, public safety, or events in the news that could be relevant to a criminal investigation.

The Virginia Court of Appeals now has the opportunity in Clements to protect privacy and speech rights by affirming that keyword warrants can’t be reconciled with constitutional protections guaranteed at the federal or state level. We hope it does so.

Andrew Crocker

No Face, No Case: California’s S.B. 627 Demands Cops Show Their Faces

1 week 1 day ago

Across the country, people are collecting and sharing footage of masked law enforcement officers from both federal and local agencies deputized to do so-called immigration enforcement: arresting civilians, in some cases violently and/or warrantlessly. That footage is part of a long tradition of recording law enforcement during their operations to ensure some level of accountability if people observe misconduct and/or unconstitutional practices. However, as essential as recording police can be in proving allegations of misconduct, the footage is rendered far less useful when officers conceal their badges and/or faces. Further, lawyers, journalists, and activists cannot then identify officers in public records requests for body-worn camera footage to view the interaction from the officers’ point of view. 

In response to these growing concerns, California has introduced S.B. 627 to prohibit law enforcement from covering their faces during these kinds of public encounters. This builds on legislation (in California and some other states and municipalities) that requires police, for example, “to wear a badge, nameplate, or other device which bears clearly on its face the identification number or name of the officer.” Similarly, police reform legislation passed in 2018 requires greater transparency by opening individual personnel files of law enforcement to public scrutiny when there are use of force cases or allegations of violent misconduct.

But in the case of ICE detentions in 2025, federal and federally deputized officers are not only covering up their badges—they're covering their faces as well. This bill would offer an important tool to prevent this practice, and to ensure that civilians who record the police can actually determine the identity of the officers they’re recording, in case further investigation is warranted. The legislation explicitly includes “any officer or anyone acting on behalf of a local, state, or federal law enforcement agency.” 

This is a necessary move. The right to record police, and to hold government actors accountable for their actions, requires that we know who the government actors are in the first place. The new legislation seeks to cover federal officers in addition to state and local officials, protecting Californians from otherwise unaccountable law enforcement activity. 

As EFF has stood up for the right to record police, we also stand up for the right to be able to identify officers in those recordings. We have submitted a letter to the state legislature to that effect. California should pass S.B. 627, and more states should follow suit to ensure that the right to record remains intact. 

José Martinez

Axon’s Draft One is Designed to Defy Transparency

1 week 1 day ago

Axon Enterprise’s Draft One — a generative artificial intelligence product that writes police reports based on audio from officers’ body-worn cameras — seems deliberately designed to avoid audits that could provide any accountability to the public, an EFF investigation has found.

Our review of public records from police agencies already using the technology — including police reports, emails, procurement documents, department policies, software settings, and more — as well as Axon’s own user manuals and marketing materials revealed that it’s often impossible to tell which parts of a police report were generated by AI and which parts were written by an officer.

You can read our full report, which details what we found in those documents, how we filed those public records requests, and how you can file your own, here.

Everyone should have access to answers, evidence, and data regarding the effectiveness and dangers of this technology. Axon and its customers claim this technology will revolutionize policing, but it remains to be seen how it will change the criminal justice system, and who this technology benefits most.

For months, EFF and other organizations have warned about the threats this technology poses to accountability and transparency in an already flawed criminal justice system.  Now we've concluded the situation is even worse than we thought: There is no meaningful way to audit Draft One usage, whether you're a police chief or an independent researcher, because Axon designed it that way. 

Draft One uses a ChatGPT variant to process body-worn camera audio of public encounters and create police reports based only on the captured verbal dialogue; it does not process the video. The Draft One-generated text is sprinkled with bracketed placeholders where officers are encouraged to add additional observations or information—or which they can quickly delete. Officers are supposed to edit Draft One's report and correct anything the Gen AI misunderstood due to a lack of context, troubled translations, or just plain-old mistakes. When they're done, officers are prompted to sign an acknowledgement that the report was generated using Draft One and that they have reviewed it and made the edits necessary to ensure it is consistent with their recollection. Then they can copy and paste the text into their report. When they close the window, the draft disappears.

Any new, untested, and problematic technology needs a robust process to evaluate its use by officers. In this case, one would expect police agencies to retain data that ensures officers are actually editing the AI-generated reports as required, or that officers can accurately answer if a judge demands to know whether, or which part of, reports used by the prosecution were written by AI. 

"We love having new toys until the public gets wind of them."

One would expect audit systems to be readily available to police supervisors, researchers, and the public, so that anyone can make their own independent conclusions. And one would expect that Draft One would make it easy to discern its AI product from human product – after all, even your basic, free word processing software can track changes and save a document history.

But Draft One defies all these expectations, offering meager oversight features that deliberately conceal how it is used. 

So when a police report includes biased language, inaccuracies, misinterpretations, or even outright lies, the record won't indicate whether the officer or the AI is to blame. That makes it extremely difficult, if not impossible, to assess how the system affects justice outcomes, because there is little non-anecdotal data from which to determine whether the technology is junk. 

The disregard for transparency is perhaps best encapsulated by a short email that an administrator in the Frederick Police Department in Colorado, one of Axon's first Draft One customers, sent to a company representative after receiving a public records request related to AI-generated reports. 

"We love having new toys until the public gets wind of them," the administrator wrote.

No Record of Who Wrote What

The first question anyone should have about a police report written using Draft One is which parts were written by AI and which were added by the officer. Once you know this, you can start to answer more questions, like: 

  • Are officers meaningfully editing and adding to the AI draft? Or are they reflexively rubber-stamping the drafts to move on as quickly as possible? 
  • How often are officers finding and correcting errors made by the AI, and are there patterns to these errors? 
  • If there is inappropriate language or a fabrication in the final report, was it introduced by the AI or the officer? 
  • Is the AI overstepping in its interpretation of the audio? If a report says, "the subject made a threatening gesture," was that added by the officer, or did the AI make a factual assumption based on the audio? If a suspect uses metaphorical slang, does the AI document it literally? If a subject says "yeah" throughout a conversation as a verbal acknowledgement that they're listening to what the officer says, is that interpreted as an agreement or a confession?

"So we don’t store the original draft and that’s by design..."

Ironically, Draft One does not save the first draft it generates. Nor does the system store any subsequent versions. Instead, the officer copies and pastes the text into the police report, and the previous draft, originally created by Draft One, disappears as soon as the window closes. There is no log or record indicating which portions of a report were written by the computer and which portions were written by the officer, except for the officer's own recollection. If an officer generates a Draft One report multiple times, there's no way to tell whether the AI interprets the audio differently each time.

Axon is open about not maintaining these records, at least when it markets directly to law enforcement.

In this video of a roundtable discussion about the Draft One product, Axon’s senior principal product manager for generative AI is asked (at the 49:47 mark) whether or not it’s possible to see after-the-fact which parts of the report were suggested by the AI and which were edited by the officer. His response (bold and definition of RMS added): 

“So we don’t store the original draft and that’s by design and that’s really because the last thing we want to do is create more disclosure headaches for our customers and our attorney’s offices—so basically the officer generates that draft, they make their edits, if they submit it into our Axon records system then that’s the only place we store it, if they copy and paste it into their third-party RMS [records management system] system as soon as they’re done with that and close their browser tab, it’s gone. It’s actually never stored in the cloud at all so you don’t have to worry about extra copies floating around.”

To reiterate: Axon deliberately does not store the original draft written by the Gen AI, because "the last thing" they want is for cops to have to provide that data to anyone (say, a judge, defense attorney or civil liberties non-profit). 

Following up on the same question, Axon's Director of Strategic Relationships at Axon Justice suggests this is fine, since a police officer using a word processor wouldn't be required to save every draft of a police report as they're re-writing it. This is, of course, misdirection and not remotely comparable. An officer with a word processor is one thought process and a record created by one party; Draft One is two processes from two parties–Axon and the officer. Ultimately, it could and should be considered two records: the version sent to the officer from Axon and the version edited by the officer.

Word processors are settled technology that long ago stopped producing unexpected consequences for police report-writing, but Draft One is still unproven. After all, every AI-evangelist, including Axon, claims this technology is a game-changer. So, why wouldn't an agency want to maintain a record that can establish the technology’s accuracy?

It also appears that Draft One isn't simply hewing to long-established norms of police report-writing; it may fundamentally change them. In one email, the Campbell Police Department's Police Records Supervisor tells staff, “You may notice a significant difference with the narrative format…if the DA’s office has comments regarding our report narratives, please let me know.” It's more than a little shocking that a police department would implement such a change without fully soliciting and addressing the input of prosecutors. In this case, the Santa Clara County District Attorney had already suggested police include a disclosure when Axon Draft One is used in each report, but Axon's engineers had yet to finalize the feature at the time it was rolled out. 

One of the main concerns, of course, is that this system effectively creates a smokescreen over truth-telling in police reports. If an officer lies or uses inappropriate language in a police report, who is to say whether the officer wrote it or the AI? An officer can be punished severely for official dishonesty, but the consequences may be more lenient for a cop who blames it on the AI. Axon engineers have already discovered a bug that allowed officers, on at least three occasions, to circumvent the "guardrails" that supposedly deter officers from submitting AI-generated reports without reading them first, as Axon disclosed to the Frederick Police Department.

To serve and protect the public interest, the AI output must be continually and aggressively evaluated whenever and wherever it's used. But Axon has intentionally made this difficult. 

What the Audit Trail Actually Looks Like 

You may have seen news stories or other public statements asserting that Draft One does, indeed, have auditing features. So, we dug through the user manuals to figure out what exactly that means.

The first thing to note is that, based on our review of the documentation, there appears to be  no feature in Axon software that allows departments to export a list of all police officers who have used Draft One. Nor is it possible to export a list of all reports created by Draft One, unless the department has customized its process (we'll get to that in a minute). 

This is disappointing because, without this information, it's near impossible to do even the most basic statistical analysis: how many officers are using the technology and how often. 

Based on the documentation, you can only export two types of very basic logs, with the process differing depending on whether an agency uses Evidence or Records/Standards products. These are:

  1. A log of basic actions taken on a particular report. If the officer requested a Draft One report or signed the Draft One liability disclosure related to the police report, it will show here. But nothing more than that.
  2.  A log of an individual officer/user's basic activity in the Axon Evidence/Records system. This audit log shows things such as when an officer logs into the system, uploads videos, or accesses a piece of evidence. The only Draft One-related activities this tracks are whether the officer ran a Draft One request, signed the Draft One liability disclosure, or changed the Draft One settings. 

This means that, to do a comprehensive review, an evaluator may need to go through the record management system and look up each officer individually to identify whether that officer used Draft One and when. That could mean combing through dozens, hundreds, or in some cases, thousands of individual user logs. 

An example of Draft One usage in an audit log.
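Automating that comb-through is possible once you have the exports in hand. Below is a minimal sketch in Python, assuming each officer's audit log has been exported as a .csv file into one folder; the column names ("user", "timestamp", "action") and the exact wording of the logged actions are placeholders of ours, not Axon's documented schema, so adjust them to whatever the real files contain.

# Minimal sketch: scan a folder of exported per-officer audit logs (.csv)
# for Draft One-related entries. Column names and action wording are
# assumptions; check them against the files your agency actually provides.
import csv
from pathlib import Path
from collections import Counter

LOG_DIR = Path("audit_logs")          # hypothetical: one exported .csv per officer
draft_one_counts = Counter()          # officer -> number of Draft One entries

for log_file in sorted(LOG_DIR.glob("*.csv")):
    with log_file.open(newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            action = (row.get("action") or "").lower()
            # The three Draft One actions the logs are said to capture:
            # requesting a draft, signing the disclosure, changing settings.
            if "draft one" in action:
                officer = row.get("user") or log_file.stem
                draft_one_counts[officer] += 1
                print(f"{row.get('timestamp', '?')}\t{officer}\t{row.get('action')}")

print("\nDraft One entries per officer:")
for officer, count in draft_one_counts.most_common():
    print(f"{officer}: {count}")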

An auditor could also go report-by-report to see which ones involved Draft One, but the sheer number of reports generated by an agency means this method would require a massive amount of time.

But can agencies even create a list of police reports that were co-written with AI? It depends on whether the agency has included a disclosure in the body of the text, such as "I acknowledge this report was generated from a digital recording using Draft One by Axon." If so, then an administrator can use "Draft One" as a keyword search to find relevant reports.
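The same keyword idea can be applied outside Axon's interface. As a rough illustration, here is a minimal Python sketch, assuming you have a batch of report narratives already exported as plain-text files in a local folder; the folder name and search phrase are placeholders you would swap for your own export and your agency's actual disclosure wording.

# Minimal sketch: flag exported report narratives that contain a Draft One
# disclosure. Assumes the reports were already converted to plain-text files;
# the folder name and the phrase below are placeholders.
from pathlib import Path

REPORT_DIR = Path("report_narratives")   # hypothetical export location
PHRASE = "draft one"                      # matches the disclosure language quoted above

all_reports = sorted(REPORT_DIR.glob("*.txt"))
flagged = [p.name for p in all_reports
           if PHRASE in p.read_text(encoding="utf-8", errors="ignore").lower()]

print(f"{len(flagged)} of {len(all_reports)} reports mention Draft One")
for name in flagged:
    print(name)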

Agencies that do not require that language told us they could not identify which reports were written with Draft One. For example, one of those agencies and one of Axon's most promoted clients, the Lafayette Police Department in Indiana, told us: 

"Regarding the attached request, we do not have the ability to create a list of reports created through Draft One. They are not searchable. This request is now closed."

Meanwhile, in response to a similar public records request, the Palm Beach County Sheriff's Office, which does require a disclosure at the bottom of each report noting that it was written with AI, was able to isolate more than 3,000 Draft One reports generated between December 2024 and March 2025.

They told us: "We are able to do a keyword and a timeframe search. I used the words draft one and the system generated all the draft one reports for that timeframe."

We have requested further clarification from Axon, but they have yet to respond. 

However, as we learned from email exchanges between the Frederick Police Department in Colorado and Axon, Axon is tracking police use of the technology at a level that isn't available to the police department itself. 

In response to a request from Politico's Alfred Ng in August 2024 for Draft One-generated police reports, the police department was struggling to isolate those reports. 

An Axon representative responded: "Unfortunately, there’s no filter for DraftOne reports so you’d have to pull a User’s audit trail and look for Draft One entries. To set expectations, it’s not going to be graceful, but this wasn’t a scenario we anticipated needing to make easy."

But then, Axon followed up: "We track which reports use Draft One internally so I exported the data." Then, a few days later, Axon provided Frederick with some custom JSON code to extract the data in the future. 


What is Being Done About Draft One

The California Assembly is currently considering S.B. 524, a bill that addresses transparency measures for AI-written police reports. The legislation would require disclosure whenever police use artificial intelligence to partially or fully write official reports, as well as “require the first draft created to be retained for as long as the final report is retained.” Because Draft One is designed not to retain the first or any previous drafts of a report, it cannot comply with this common-sense, first-step bill, and any law enforcement usage would be unlawful.

Axon markets Draft One as a solution to a problem police have been complaining about for at least a century: that they do too much paperwork. Or, at least, they spend too much time doing paperwork. The current research on whether Draft One remedies this issue shows mixed results: some agencies claim it produces no real time savings, while other agencies extol its virtues (although their data also shows that results vary even within a department).

In the justice system, police must prioritize accuracy over speed. Public safety and a trustworthy legal system demand quality over corner-cutting. Time saved should not be the only metric, or even the most important one. It's like evaluating a drive-through restaurant based only on how fast the food comes out, while deliberately concealing the ingredients and nutritional information and failing to inspect whether the kitchen is up to health and safety standards. 

Given how untested this technology is and how eager the company is to sell Draft One, many local lawmakers and prosecutors have taken it upon themselves to try to regulate the product’s use. Utah is currently considering a bill that would mandate disclosure for any police reports generated by AI, thus addressing one of the current major transparency issues: it’s nearly impossible to tell which finished reports started as an AI draft.

In King County, Washington, which includes Seattle, the district attorney’s office has been clear in their instructions: police should not use AI to write police reports. Their memo says:

We do not fear advances in technology – but we do have legitimate concerns about some of the products on the market now... AI continues to develop and we are hopeful that we will reach a point in the near future where these reports can be relied on. For now, our office has made the decision not to accept any police narratives that were produced with the assistance of AI.

We urge other prosecutors to follow suit and demand that police in their jurisdiction not unleash this new, unaccountable, and intentionally opaque AI product. 

Conclusion

Police should not be using AI to write police reports. There are just too many unanswered questions about how AI would translate the audio of situations and whether police will actually edit those drafts, while simultaneously, there is no way for the public to reliably discern what was written by a person and what was written by a computer. This is before we even get to the question of how these reports might compound and exacerbate existing problems or create new ones in an already unfair and untransparent criminal justice system. 

EFF will continue to research and advocate against the use of this technology, but for now, the lesson is clear: Anyone with control or influence over police departments, be they lawmakers or people in the criminal justice system, has a duty to be informed about the potential harms and challenges posed by AI-written police reports.

Matthew Guariglia

EFF's Guide to Getting Records About Axon's Draft One AI-Generated Police Reports

1 week 1 day ago

The moment Axon Enterprise announced a new product, Draft One, that would allow law enforcement officers to use artificial intelligence to automatically generate incident report narratives based on body-worn camera audio, everyone in the police accountability community immediately started asking the same questions:

What do AI-generated police reports look like? What kind of paper trail does this system leave? How do we get a hold of documentation using public records laws? 

Unfortunately, obtaining these records isn't easy. In many cases, it's straight-up impossible. 

Read our full report on how Axon's Draft One defies transparency expectations by design here.

In some jurisdictions, the documents are walled off behind government-created barriers. For example, California fully exempts police narrative reports from public disclosure, while other states charge fees to access individual reports that become astronomical if you want to analyze the output in bulk. Then there are technical barriers: Axon's product itself does not allow agencies to isolate reports that contain an AI-generated narrative, although an agency can voluntarily institute measures to make them searchable by a keyword.  

This spring, EFF tested out different public records request templates and sent them to dozens of law enforcement agencies we believed were using Draft One.

We asked each agency for the Draft One-generated police reports themselves, knowing that in most cases this would be a long shot. We also dug into Axon's user manuals to figure out what kind of logs are generated and how to carefully phrase our public records request to get them. We asked for the current system settings for Draft One, since there are a lot of levers police administrators can pull that drastically change how and when officers can use the software. We also requested the standard records that we usually ask for when researching new technologies: procurement documents, agreements, training manuals, policies, and emails with vendors. 

As with all mass public records campaigns, the results were… mixed. Some agencies were refreshingly open with their records. Others assessed fees well outside the usual range for a non-profit organization.

What we learned about the process is worth sharing. Axon has thousands of clients nationwide that use its Tasers, body-worn cameras and bundles of surveillance equipment, and the company is using those existing relationships to heavily promote Draft One.  We expect many more cities to deploy the technology over the next few years. Watchdogging police use of AI will require a nationwide effort by journalists, advocacy organizations and community volunteers.

Below we’re sharing some sample language you can use in your own public records requests about Draft One — but be warned. It’s likely that the more you include, the longer it might take and the higher the fees will get. The template language and our suggestions for filing public records requests are not legal advice. If you have specific questions about a public records request you filed, consult a lawyer.

1. Police Reports

Language to try in your public records request:

  • All police report narratives, supplemental report narratives, warrant affidavits, statements, and other narratives generated using Axon Draft One to document law enforcement-related incidents for the period between [DATE IN THE LAST FEW WEEKS] and the date this request is received. If your agency requires a Draft One disclosure in the text of the report narrative, you can use "Draft One" as a keyword search term.

Or

  • The [NUMBER] most recent police report narratives that were generated using Axon Draft One between [DATE IN THE LAST FEW WEEKS] and the date this request is received.

If you are curious about a particular officer's Draft One usage, you can also ask for their reports specifically. However, it may be helpful to obtain their usage log first (see section 2).

  • All police report narratives, supplemental report narratives, warrant affidavits, statements, and other narratives generated by [OFFICER NAME] using Axon Draft One to document law enforcement-related incidents for the period between [DATE IN THE LAST FEW WEEKS] and the date this request is received.

We suggest using weeks, not months, because the sheer number of reports can get costly very quickly.

As an add-on to Axon's evidence and records management platforms, Draft One uses ChatGPT to convert audio taken from Axon body-worn cameras into the so-called first draft of the narrative portion of a police report. 

When Politico surveyed seven agencies in September 2024, reporter Alfred Ng found that police administrators did not have the technical ability to identify which reports contained AI-generated language. As Ng reported, “There is no way for us to search for these on our end,” a Lafayette, IN police captain told him. Six months later, EFF received the same no-can-do response from the Lafayette Police Department.

Although Lafayette Police could not create a list on their own, it turns out that Axon's engineers can generate these reports for police if asked. When the Frederick Police Department in Colorado received a similar request from Ng, the agency contacted Axon for help. The company does internally track reports written with Draft One and was able to provide a spreadsheet of Draft One reports (.csv) and even provided Frederick Police with computer code to allow the agency to create similar lists in the future. Axon told them they would look at making this a feature in the future, but that appears not to have happened yet. 
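If you do obtain a spreadsheet like the one Axon exported for Frederick, summarizing it takes only a few lines. The sketch below is illustrative only: we have not seen Axon's actual export schema, so the file name and column names ("officer", "created_at") are assumptions to swap for whatever the real .csv contains.

# Minimal sketch: summarize a hypothetical spreadsheet of Draft One reports
# (reports per month, heaviest users). File and column names are assumptions.
import csv
from collections import Counter

per_month = Counter()
per_officer = Counter()

with open("draft_one_reports.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        per_officer[row.get("officer", "unknown")] += 1
        per_month[(row.get("created_at") or "")[:7]] += 1   # YYYY-MM prefix of an ISO date

print("Reports per month:")
for month, count in sorted(per_month.items()):
    print(f"  {month or 'unknown'}: {count}")
print("Heaviest users:", per_officer.most_common(5))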

But we also struck gold with two agencies: the Palm Beach County Sheriff's Office (PBCSO) in Florida and the Lake Havasu City Police Department in Arizona. In both cases, the agencies require officers to include a disclosure that they used Draft One at the end of the police narrative. Here's a slide from the Palm Beach County Sheriff's Draft One training:

And here's the boilerplate disclosure: 

I acknowledge this report was generated from a digital recording using Draft One by Axon. I further acknowledge that I have reviewed the report, made any necessary edits, and believe it to be an accurate representation of my recollection of the reported events. I am willing to testify to the accuracy of this report.

As small a gesture as it may seem, that disclosure makes all the difference when it comes to responding to a public records request. Lafayette Police could not isolate the reports because its policy does not require the disclosure. A Frederick Police Department sergeant noted in an email to Axon that they could isolate reports when the auto-disclosure was turned on, but not after they decided to turn it off. This year, Utah legislators introduced a bill to require this kind of disclosure on AI-generated reports.

As the PBCSO records manager told us: "We are able to do a keyword and a timeframe search. I used the words ‘Draft One’ and the system generated all the Draft One reports for that timeframe." In fact, in Palm Beach County and Lake Havasu, records administrators dug up huge numbers of records. But, once we saw the estimated price tag, we ultimately narrowed our request to just 10 reports.

Here is an example of a report from PBCSO, which only allows Draft One to be used in incidents that don't involve a criminal charge. As a result, many of the reports were related to mental health or domestic dispute responses.  

A machine readable text version of this report is available here. Full version here.

And here is an example from the Lake Havasu City Police Department, whose clerk was kind enough to provide us with a diverse sample of requests.

A machine readable text version of this report is available here. Full version here.

EFF redacted some of these records to protect the identity of members of the public who were captured on body-worn cameras. Black-bar redactions were made by the agencies, while bars with X's were made by us. You can view all the examples we received below: 

We also received police reports (perhaps unintentionally) from two other agencies that were contained as email attachments in response to another part of our request (see section 7).

2. Audit Logs

Language to try in your public records request:

Note: You can save time by determining in advance whether the agency uses Axon Evidence or Axon Records and Standards, then choose the applicable option below. If you don't know, you can always request both.

Audit logs from Axon Evidence

  • Audit logs for the period December 1, 2024 through the date this request is received, for the 10 most recently active users.
    According to Axon's online user manual, through Axon Evidence agencies are able to view audit logs of individual officers to ascertain whether they have requested the use of Draft One, signed a Draft One liability disclosure or changed Draft One settings (https://my.axon.com/s/article/View-the-audit-trail-in-Axon-Evidence-Draft-One?language=en_US). In order to obtain these audit logs, you may follow the instructions on this Axon page: https://my.axon.com/s/article/Viewing-a-user-audit-trail?language=en_US.
    In order to produce a list of the 10 most recently active users, you may click the arrow next to "Last Active" and then select the 10 most recent. The [...] menu item allows you to export the audit log. We would prefer these audits as .csv files if possible.
    Alternatively, if you know the names of specific officers, you can name them rather than selecting the most recent.

Or

Audit logs from Axon Records and Axon Standards

  • According to Axon's online user manual, through Axon Records and Standards, agencies are able to view audit logs of individual officers to ascertain whether they have requested a Draft One draft or signed a Draft One liability disclosure. https://my.axon.com/s/article/View-the-audit-log-in-Axon-Records-and-Standards-Draft-One?language=en_US
    To obtain these logs using the Axon Records Audit Tool, follow these instructions: https://my.axon.com/s/article/Audit-Log-Tool-Axon-Records?language=en_US
    a. Audit logs for the period December 1, 2024 through the date this request is received for the first user who comes up when you enter the letter "M" into the audit tool. If no user comes up with M, please try "Mi."
    b. Audit logs for the period December 1, 2024 through the date this request is received for the first user who comes up when you enter the letter "J" into the audit tool. If no user comes up with J, please try "Jo."
    c. Audit logs for the period December 1, 2024 through the date this request is received for the first user who comes up when you enter the letter "S" into the audit tool. If no user comes up with S, please try "Sa."

You could also tell the agency you are only interested in Draft One related items, which may save the agency time in reviewing and redacting the documents.

Generally, many of the basic actions a police officer takes using Axon technology — whether it's signing in, changing a password, accessing evidence or uploading BWC footage — are logged in the system.

This also includes some actions when an officer uses Draft One. However, the system only logs three types of activities: requesting that Draft One generate a report, signing a Draft One liability disclosure, or changing Draft One's settings. And these logs are one of the only ways to identify which reports were written with AI and how widely the technology is used.

Unfortunately, Axon appears to have designed its system so that administrators cannot create a list of all Draft One activities taken by the entire police force. Instead, all they can do is view an individual officer's audit log to see when they used Draft One or look at the log for a particular piece of evidence to see if Draft One was used. These can be exported as a spreadsheet or a PDF. (When the Frederick Police Department asked Axon how to create a list of Draft One reports, the Axon rep told them that feature wasn't available and they would have to follow the above method. "To set expectations, it’s not going to be graceful, but this wasn’t a scenario we anticipated needing to make easy," Axon wrote in August 2024, then suggested it might come up with a long-term solution. We emailed Axon back in March to see if this was still the case, but they did not provide a response.) 

Here's an excerpt from a PDF version from the Bishop Police Department in California:

Here are some additional audit log examples: 

If you know the name of an individual officer, you can try to request their audit logs to see if they used Draft One. Since we didn't have a particular officer in mind, we had to get creative. 

An agency may manage their documents with one of a few different Axon offerings: Axon Evidence, Axon Records, or Axon Standards. The process for requesting records is slightly different depending on which one is used. We dug through the user manuals and came up with a few ways to export a random(ish) example. We also linked the manuals and gave clear instructions for the records officers.

With Axon Evidence, an administrator can simply sort the system to show the 10 most recent users and then export their usage logs. With Axon Records/Standards, the administrator has to start typing in a name, which then auto-populates with suggestions. So, we asked for the audit logs of the first user who comes up when the records officer types each of the letters M, J, and S into the search (since those letters are common at the beginning of names).

Unfortunately, this method is a little bit of a gamble. Many officers still aren't using Draft One, so you may end up with hundreds of pages of logs that don't mention Draft One at all (as was the case with the records we received from Monroe County, NY).

3. Settings

Language to try in your public records request: 

  • A copy of all settings and configurations made by this agency in its use of the Axon Draft One platform, including all opt-in features that the department has elected to use and the incident types for which the software can be used. A screen capture of these settings will suffice.

We knew the Draft One system offers department managers the option to customize how it can be used, including the categories of crime for which reports can be generated and whether or not there is a disclaimer automatically added to the bottom of the report disclosing the use of AI in its generation. So we asked for a copy of these settings and configurations. In some cases, agencies claimed this was exempted from their public records laws, while other agencies did provide the information. Here is an example from the Campbell Police Department in California: 

(It's worth noting that while Campbell does require each police report to contain a disclosure that Draft One was used, the California Public Records Act exempts police reports from being released.)

Examples of settings: 

4. Procurement-related Documents and Agreements

Language to try in your public records request:

  • All contracts, memorandums of understanding, and any other written agreements between this agency and Axon related to the use of Draft One, Narrative Assistant, or any other AI-assisted report generation tool provided by Axon. Responsive records include all associated amendments, exhibits, and supplemental and supporting documentation, as well as all relevant terms of use, licensing agreements, and any other guiding materials. If access to Draft One or similar tools is being provided via an existing contract or through an informal agreement, please provide the relevant contract or the relevant communication or agreement that facilitated the access. This includes all agreements, both formal and informal, including all trial access, even if that access does not or did not involve financial obligations.

It can be helpful to know how much Draft One costs, how many user licenses the agency paid for, and what the terms of the agreement are. That information is often contained in records related to the contracting process. Agencies will often provide these records with minimal pushback or redactions. Many of these records may already be online, so a requester can save time and effort by looking around first. These are often found in city council agenda packets. Also, law enforcement agencies often will bump these requests to the city or county clerk instead. 

Here's an excerpt from the Monroe County Sheriff's Office in New York:

These kinds of procurement records describe the nature and cost of the relationship between the police department and the company. They can be very helpful for understanding how much a continuing service subscription will cost and what else was bundled in as part of the purchase. Draft One, so far, is often accessed as an additional feature along with other Axon products. 

We received too many documents to list them all, but here is a representative example of some of the other documents you might receive, courtesy of the Dacono Police Department in Colorado.

5. Training, Manuals and Policies

Language to try in your public records request:

All training materials relevant to Draft One or Axon Narrative Assistant, including but not limited to:

  • All training material provided by Axon to this agency regarding its use of Draft One;
  • All internal training materials regarding the use of Draft One;
  • All user manuals, other guidance materials, help documents, or related materials;
  • Guides, safety tests, and other supplementary material that mention Draft One provided by Axon between January 1, 2024 and the date this request is received;
  • Any and all policies and general orders related to the use of Draft One, the Narrative Assistant, or any other AI-assisted report generation offerings provided by Axon (An example of one such policy can be found here: https://cdn.muckrock.com/foia_files/2024/11/26/608_Computer_Software_and_Transcription-Assisted_Report_Generation.pdf).

In addition to seeing when Draft One was used and how it was acquired, it can be helpful to know what rules officers must follow, what directions they're given for using it, and what features are available to users. That's where manuals, policies and training materials come in handy. 

User manuals are typically going to come from Axon itself. In general, if you can get your hands on one, this will help you to better understand the mechanisms of the system, and it will help you align the way you craft your request with the way the system actually works. Luckily, Axon has published many of the materials online and we've already obtained the user manual from multiple agencies. However, Axon does update the manual from time to time, so it can be helpful to know which version the agency is working from.

Here's one from December 2024:

Policies are internal police department guidance for using Draft One. Not all agencies have developed a policy, but the ones they do have may reveal useful information, such as other records you might be able to request. Here are some examples: 

Training and user manuals also might reveal crucial information about how the technology is used. In some cases these documents are provided by Axon to the customer. These records may illuminate the specific direction that departments are emphasizing about using the product.

Here are a few examples of training presentations:

6. Evaluations

Language to try in your public records request:

  • All final reports, evaluations, or other documentation concluding or summarizing a trial period, evaluation period, or pilot project

Many departments are getting access to Draft One as part of a trial or pilot program. The outcome of those experiments with the product can be eye-opening or eyebrow-raising. There might also be additional data or a formal report that reviews what the department was hoping to get from the experience, how they structured any evaluation of its time-saving value for the department, and other details about how officers did or did not use Draft One. 

Here are some examples we received: 

7. Communications

Language to try in your public records request:

• All communications sent or received by any representative of this agency with individuals representing Axon referencing any of the following terms, including emails and attachments:

  • Draft One
  • Narrative Assistant
  • AI-generated report

• All communications between any representative of this agency and each of the following email addresses, including attachments:

  • [INSERT EMAIL ADDRESSES]

Note: We are not including the specific email addresses here that we used, since they are subject to change when employees are hired, promoted, or find new gigs. However, you can find the emails we used in our requests on MuckRock.

The communications we wanted were primarily the emails between Axon and the law enforcement agency. As you can imagine, these emails could reveal the back-and-forth between the company and its potential customers, and these conversations could include the marketing pitch made to the department, the questions and problems police may have had with it, and more. 

In some cases, these emails reveal cozy relationships between salespeople and law enforcement officials. Take, for example, this email exchange between the Dickinson Police Department and an Axon rep:

Or this email between a Frederick Police Department sergeant and an Axon representative, in which the sergeant describes himself as "doing sales" for Axon by providing demos to other agencies.

A machine readable text version of this email is available here.

Emails like these also show which other agencies are considering using Draft One in the future. For example, this email we received from the Campbell Police Department shows that the San Francisco Police Department was testing Draft One as early as October 2024 (the usage was confirmed in June 2025 by the San Francisco Standard).

 

A machine readable text version of this email is available here.

Your mileage will certainly vary for these email requests, in part because agencies' ability to search their communications varies. Some agencies can search by a keyword like "Draft One" or "Axon," while other agencies can only search by a specific email address.

Communications can be one of the more expensive parts of the request. We've found that adding a date range and key terms or email addresses has helped limit these costs and made our requests a bit clearer for the agency. Axon sends a lot of automated emails to its subscribers, so the agency may quote a large fee for hundreds or thousands of emails that aren't particularly interesting. Many agencies respond positively if a requester reaches out to say they're open to narrowing or focusing their request. 

Asking for Body-Worn Camera Footage 

One of the big questions is how Draft One-generated reports compare to the BWC audio the narratives are based on. Are the reports accurate? Are they twisting people's words? Does Draft One hallucinate?

Finding these answers requires both obtaining the police report and the footage of the incident that was fed into the system. The laws and process for obtaining BWC footage vary dramatically state to state, and even department to department. Depending on where you live, it can also get expensive very quickly, since some states allow agencies to charge you not only for the footage but the time it takes to redact the footage. So before requesting footage, read up on your state’s public access laws or consult a lawyer.

However, once you have a copy of a Draft One report, you should have enough information to file a follow-up request for the BWC footage. 

So far, EFF has not requested BWC footage. In addition to the aforementioned financial and legal hurdles, the footage can implicate both individual privacy and transparency regarding police activity. As an organization that advocates for both, we want to make sure we get this balance right. After all, BWCs are a surveillance technology that collects intelligence on suspects, victims, witnesses, and random passersby. When the Palm Beach County Sheriff's Office gave us an AI-generated account of a teenager being hospitalized for suicidal ideations, we of course felt that the minor's privacy outweighed our interest in evaluating the AI. But do we feel the same way about a Draft One-generated narrative about a spring break brawl in Lake Havasu?

Ultimately, we may try to obtain a limited amount of BWC footage, but we also recognize that we shouldn't make the public wait while we work it out for ourselves. Accountability requires different methods, different expertise, and different interests, and with this guide we hope to not only shine light on Draft One, but to provide the schematics for others–including academics, journalists, and local advocates–to build their own spotlights to expose police use of this problematic technology.

Where to Find More Docs 

Despite the variation in how agencies responded, we did have some requests that proved fruitful. You can find these requests and the documents we got via the linked police department names below.

Please note that we filed two different types of requests, so not all the elements above may be represented in each link.

Via Document Cloud (PDFs)

Via MuckRock (Assorted filetypes)

Special credit goes to EFF Research Assistant Jesse Cabrera for public records request coordination. 

Dave Maass

It's EFF's 35th Anniversary (And We're Just Getting Started)

1 week 1 day ago

Today we celebrate 35 years of EFF bearing the torch for digital rights against the darkness of the world, and I couldn’t be prouder. EFF was founded at a time when governments were hostile toward technology and clueless about how it would shape your life. While threats from state and commercial forces grew alongside the internet, so too did EFF’s expertise. Our mission has become even larger than pushing back on government ignorance and increasingly dangerous corporate power. In this moment, we're doing our part to preserve the necessities of democracy: privacy, free expression, and due process. It's about guarding the security of our society, along with our loved ones and the vulnerable communities around us.

With the support of EFF’s members, we use law, technology, and activism to create the conditions for human rights and civil liberties to flourish, and for repression to fail.

EFF believes in commonsense freedom and fairness. We’re working toward an environment where your technology works the way you want it to; you can move through the world without the threat of surveillance; and you can have private conversations with the people you care about and support the causes you believe in. We’ve won many fights for encryption, free expression, innovation, and your personal data throughout the years. The opposition is tough, but—with a powerful vision for a better future and you on our side—EFF is formidable.

Throughout EFF’s year-long 35th Anniversary celebration, our dedicated activists, investigators, technologists, and attorneys will share the lessons from EFF’s long and rich history so that we can help overcome the obstacles ahead. Thanks to you, EFF is here to stay.

Together for the Digital Future

As a member-supported nonprofit, everything EFF does depends on you. Donate to help fuel the fight for privacy, free expression, and a future where we protect digital freedom for everyone.

JOIN EFF

Powerful forces may try to chip away at your rights—but when we stand together, we win.


Watch Today: EFFecting Change Live

Just hours from now, join me for the 35th Anniversary edition of our EFFecting Change livestream. I’m leading this Q&A with EFF Director for International Freedom of Expression Jillian York, EFF Legislative Director Lee Tien, and Professor and EFF Board Member Yoshi Kohno. Together, we’ve seen it all and today we hope you'll join us for what’s next.

WATCH LIVE

11:00 AM Pacific (check local time)

EFF supporters around the world sustain our mission to defend technology creators and users. Thank you for being a part of this community and helping it thrive.

Cindy Cohn

Data Brokers are Selling Your Flight Information to CBP and ICE

1 week 2 days ago

For many years, data brokers have existed in the shadows, exploiting gaps in privacy laws to harvest our information—all for their own profit. They sell our precise movements without our knowledge or meaningful consent to a variety of private and state actors, including law enforcement agencies. And they show no sign of stopping.

This incentivizes other bad actors. If companies collect any kind of personal data and want to make a quick buck, there’s a data broker willing to buy it and sell it to the highest bidder–often law enforcement and intelligence agencies.

One recent investigation by 404 Media revealed that the Airlines Reporting Corporation (ARC), a data broker owned and operated by at least eight major U.S. airlines, including United Airlines and American Airlines, collected travelers’ domestic flight records and secretly sold access to U.S. Customs and Border Protection (CBP). Despite selling passengers’ names, full flight itineraries, and financial details, the data broker prevented U.S. border forces from revealing it as the origin of the information. So, not only is the government doing an end run around the Fourth Amendment to get information where they would otherwise need a warrant—they’ve also been trying to hide how they know these things about us. 

ARC’s Travel Intelligence Program (TIP) aggregates passenger data and contains more than one billion records spanning 39 months of past and future travel by both U.S. and non-U.S. citizens. CBP, which sits within the U.S. Department of Homeland Security (DHS), claims it needs this data to support local and state police keeping track of people of interest. But at a time of growing concerns about increased immigration enforcement at U.S. ports of entry, including unjustified searches, law enforcement officials will use this additional surveillance tool to expand the web of suspicion to even larger numbers of innocent travelers. 

More than 200 airlines settle tickets through ARC, with information on more than 54% of flights taken globally. ARC’s board of directors includes representatives from U.S. airlines like JetBlue and Delta, as well as international airlines like Lufthansa, Air France, and Air Canada. 

In selling law enforcement agencies bulk access to such sensitive information, these airlines—through their data broker—are putting their own profits over travelers' privacy. U.S. Immigration and Customs Enforcement (ICE) recently detailed its own purchase of personal data from ARC. In the current climate, this can have a detrimental impact on people’s lives. 

Movement unrestricted by governments is a hallmark of a free society. In our current moment, when the federal government is threatening legal consequences based on people’s national, religious, and political affiliations, having air travel in and out of the United States tracked by any ARC customer is a recipe for state retribution. 

Sadly, data brokers are doing even broader harm to our privacy. Sensitive location data is harvested from smartphones and sold to cops, internet backbone data is sold to federal counterintelligence agencies, and utility databases containing phone, water, and electricity records are shared with ICE officers. 

At a time when immigration authorities are eroding fundamental freedoms through increased—and arbitrary—actions at the U.S. border, this news further exacerbates concerns that creeping authoritarianism can be fueled by the extraction of our most personal data—all without our knowledge or consent.

The new revelations about ARC's data sales to CBP and ICE are a fresh reminder of the need for "privacy first" legislation that imposes consent and minimization limits on corporate processing of our data. We also need to pass the "Fourth Amendment Is Not For Sale Act" to stop police from bypassing judicial review of their data seizures by purchasing data from brokers. And let's enforce data broker registration laws.

Paige Collings

Electronic Frontier Foundation to Present Annual EFF Awards to Just Futures Law, Erie Meyer, and Software Freedom Law Center, India

1 week 2 days ago
2025 Awards Will Be Presented in a Live Ceremony Wednesday, Sept. 10 in San Francisco

SAN FRANCISCO—The Electronic Frontier Foundation (EFF) is honored to announce that Just Futures Law, Erie Meyer, and Software Freedom Law Center, India will receive the 2025 EFF Awards for their vital work in ensuring that technology supports privacy, freedom, justice, and innovation for all people.  

The EFF Awards recognize specific and substantial technical, social, economic, or cultural contributions in diverse fields including journalism, art, digital access, legislation, tech development, and law.  

 The EFF Awards ceremony will start at 6 p.m. PT on Wednesday, Sept. 10, 2025 at the San Francisco Design Center Galleria, 101 Henry Adams St. in San Francisco. Guests can register at http://www.eff.org/effawards. The ceremony will be recorded and shared online on Sept. 12. 

For the past 30 years, the EFF Awards—previously known as the Pioneer Awards—have recognized and honored key leaders in the fight for freedom and innovation online. Started when the internet was new, the Awards now reflect the fact that the online world has become both a necessity in modern life and a continually evolving set of tools for communication, organizing, creativity, and increasing human potential. 

“Whether fighting the technological abuses that abet criminalization, detention, and deportation of immigrants and people of color, or working and speaking out fearlessly to protect Americans’ data privacy, or standing up for digital rights in the world’s most populous country, all of our 2025 Awards winners contribute to creating a brighter tech future for humankind,”  EFF Executive Director Cindy Cohn said. “We hope that this recognition will bring even more support for each of these vital efforts.” 

Just Futures Law: Leading Immigration and Surveillance Litigation 

Just Futures Law is a women-of-color-led law project that recognizes how surveillance disproportionately impacts immigrants and people of color in the United States. It uses litigation to fight back as part of defending and building the power of immigrant rights and criminal justice activists, organizers, and community groups to prevent criminalization, detention, and deportation of immigrants and people of color. Just Futures was founded in 2019 using a movement lawyering and racial justice framework, and seeks to transform how litigation and legal support serve communities and build movement power.

In the past year, Just Futures sued the Department of Homeland Security and its subagencies seeking a court order to compel the agencies to release records on their use of AI and other algorithms, and sued the Trump Administration for prematurely halting Haiti's Temporary Protected Status, a humanitarian program that allows hundreds of thousands of Haitians to temporarily remain and work in the United States due to Haiti's current conditions of extraordinary crises. It has represented activists in their fight against tech giants like Clearview AI, it has worked with Mijente to launch the TakeBackTech fellowship to train new advocates on grassroots-directed research, and it has worked with Grassroots Leadership to fight for the release of detained individuals under Operation Lone Star.

Erie Meyer: Protecting Americans' Privacy 

Erie Meyer is a Senior Fellow at the Vanderbilt Policy Accelerator where she focuses on the intersection of technology, artificial intelligence, and regulation, and a Senior Fellow at the Georgetown Law Institute for Technology Law & Policy. She is former Chief Technologist at both the Consumer Financial Protection Bureau (CFPB) and the Federal Trade Commission. Earlier, she was senior advisor to the U.S. Chief Technology Officer at the White House, where she co-founded the United States Digital Service, a team of technologists and designers working to improve digital services for the public. Meyer also worked as senior director at Code for America, a nonprofit that promotes civic hacking to modernize government services, and in the Ohio Attorney General's office at the height of the financial crisis.

Since January 20, Meyer has helped organize former government technologists to stand up for the privacy and integrity of governmental systems that hold Americans’ data. In addition to organizing others, she filed a declaration in federal court in February warning that 12 years of critical records could be irretrievably lost in the CFPB’s purge by the Trump Administration’s Department of Government Efficiency. In April, she filed a declaration in another case warning about using private-sector AI on government information. That same month, she testified to the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation that DOGE is centralizing access to some of the most sensitive data the government holds—Social Security records, disability claims, even data tied to national security—without a clear plan or proper oversight, warning that “DOGE is burning the house down and calling it a renovation.” 

Software Freedom Law Center, India: Defending Digital Freedoms 

Software Freedom Law Center, India is a donor-supported legal services organization based in India that brings together lawyers, policy analysts, students, and technologists to protect freedom in the digital world. It promotes innovation and open access to knowledge by helping developers make great free and open-source software, protects privacy and civil liberties for Indians by educating and providing free legal advice, and helps policymakers make informed and just decisions about use of technology.

Founded in 2010 by technology lawyer and online civil liberties activist Mishi Choudhary, SFLC.IN tracks and participates in litigation, AI regulations, and free speech issues that are defining Indian technology. It also tracks internet shutdowns and censorship incidents across India, provides digital security training, and has launched the Digital Defenders Network, a pan-Indian network of lawyers committed to protecting digital rights. It has conducted landmark litigation cases, petitioned the government of India on freedom of expression and internet issues, and campaigned for WhatsApp and Facebook to fix a feature of their platform that has been used to harass women in India. 

To register for this event:  http://www.eff.org/effawards 

For past honorees: https://www.eff.org/awards/past-winners 

Josh Richman

EFF to US Court of Appeals: Protect Taxpayer Privacy

1 week 3 days ago

EFF has filed an amicus brief in Trabajadores v. Bessent, a case concerning the Internal Revenue Service (IRS) sharing protected personal tax information with the Department of Homeland Security for the purposes of immigration enforcement. Our expertise in privacy and data sharing makes us the ideal organization to step in and inform the judge: government actions like this have real-world consequences. The IRS's sharing, and especially bulk sharing, of data is improper and makes taxpayers vulnerable to inevitable mistakes. As a practical matter, sharing data that the IRS had previously claimed was protected undermines the trust that important civil institutions require in order to be effective.

You can read the entire brief here.

The brief makes two particular arguments. The first is that, if the Tax Reform Act, the statute under which the IRS found the authority to share the data, is considered ambiguous, then it should be interpreted in light of its legislative intent and historical background, which disfavor disclosure. The brief reads:

Given the historical context, and decades of subsequent agency promises to protect taxpayer confidentiality and taxpayer reliance on those promises, the Administration’s abrupt decision to re-interpret §6103 to allow sharing with ICE whenever a potential “criminal proceeding” can be posited, is a textbook example of an arbitrary and capricious action even if the statute can be read to be ambiguous.

The other argument we make to the court is that data scientists agree: when you try to corroborate information between two databases in which information is only partially identifiable, mistakes happen. We argue:

Those errors result from such mundane issues as outdated information, data entry errors, and taxpayers or tax preparer submission of incorrect names or addresses. If public reports are correct, and officials intend to share information regarding 700,000 or even 7 million taxpayers, the errors will multiply, leading to the mistaken targeting, detention, deportation, and potentially even physical harm to regular taxpayers.

Information silos in the government exist for a reason. This one was designed to protect individual privacy and to prevent the executive abuse that can come with unfettered access to properly collected information. The concern motivating Congress to pass the Tax Reform Act was the same as that behind the Privacy Act of 1974 and the 1978 Right to Financial Privacy Act. These laws were part of a wave of reforms Congress considered necessary to address the misuse of tax data to spy on and harass political opponents, dissidents, civil rights activists, and anti-war protestors in the 1960s and early 1970s. Congress saw the need to ensure that data collected for one purpose is used only for that purpose, with very narrow exceptions, or else it is prone to abuse. Yet the IRS is currently sharing information to allow ICE to enforce immigration law.

Taxation in the United States operates through a very simple agreement: the government requires taxes from people working inside the United States in order to function. To get people to pay their taxes, including undocumented immigrants living and working in the United States, the IRS has previously promised that the data it collects will not be used against a person for punitive reasons. This encourages people to pay taxes and alleviates concerns that might otherwise lead people to avoid interacting with the government. But the IRS's reversal has greatly harmed that trust, and it could have far-reaching negative ramifications, including decreasing future tax revenue.

Consolidating government information so that the agencies responsible for healthcare, taxes, or financial support are linked to agencies that police, surveil, and fine people is a recipe for disaster. For that reason, EFF is proud to submit this amicus brief in Trabajadores v. Bessent in support of taxpayer privacy. 

Related Cases: American Federation of Government Employees v. U.S. Office of Personnel Management
Matthew Guariglia

How to Build on Washington’s “My Health, My Data” Act

1 week 5 days ago

In 2023, the State of Washington enacted one of the strongest consumer data privacy laws in recent years: the “my health my data” act (HB 1155). EFF commends the civil rights, data privacy, and reproductive justice advocates who worked to pass this law.

This post suggests ways for legislators and advocates in other states to build on the Washington law and draft one with even stronger protections. It will separately address the law's scope (such as who is protected); its safeguards (such as consent and minimization); and its enforcement (such as a private right of action). While the law only applies to one category of personal data – our health information – its structure could be used to protect all manner of data.

Scope of Protection

Authors of every consumer data privacy law must make three decisions about scope: What kind of data is protected? Whose data is protected? And who is regulated?

The Washington law protects “consumer health data,” defined as information linkable to a consumer that identifies their “physical or mental health status.” This includes all manner of conditions and treatments, such as gender-affirming and reproductive care. While EFF’s ultimate goal is protection of all types of personal information, bills that protect at least some types can be a great start.

The Washington law protects “consumers,” defined as all natural persons who reside in the state or had their health data collected there. It is best, as here, to protect all people. If a data privacy law protects just some people, that can incentivize a regulated entity to collect even more data, in order to distinguish protected from unprotected people. Notably, Washington’s definition of “consumers” applies only in “an individual or household context,” but not “an employment context”; thus, Washingtonians will need a different health privacy law to protect them from their snooping bosses.

The Washington law defines a “regulated entity” as “any legal entity” that both: “conducts business” in the state or targets residents for products or services; and “determines the purpose and means” of processing consumer health data. This appears to include many non-profit groups, which is good, because such groups can harmfully process a lot of personal data.

The law excludes government from regulation, which is not unusual for data privacy bills focused on non-governmental actors. State and local government will likely need to be regulated by another data privacy law.

Unfortunately, the Washington law also excludes “contracted service providers when processing data on behalf of government.” A data broker or other surveillance-oriented business should not be free from regulation just because it is working for the police.

Consent or Minimization to Collect or Share Health Data

The most important part of Washington’s law requires either consent or minimization for a regulated entity to collect or share a consumer’s health data.

The law has a strong definition of “consent.” It must be “a clear affirmative act that signifies a consumer’s freely given, specific, informed, opt-in, voluntary, and unambiguous agreement.” Consent cannot be obtained with “broad terms of use” or “deceptive design.”

Absent consent, a regulated entity cannot collect or share a consumer’s health data except as necessary to provide a good or service that the consumer requested. Such rules are often called “data minimization.” Their virtue is that a consumer does not need to do anything to enjoy their statutory privacy rights; the burden is on the regulated entity to process less data.

As to data “sale,” the Washington law requires enhanced consent (which the law calls “valid authorization”). Sale is the most dangerous form of sharing, because it incentivizes businesses to collect the most possible data in hopes of later selling it. For this reason, some laws flatly ban sale of sensitive data, like the Illinois biometric information privacy act (BIPA).

For context, there are four ways for a bill or law to configure consent and/or minimization. Some require just consent, like BIPA’s provisions on data collection. Others require just minimization, like the federal “my body my data” bill. Still others require both, like the Massachusetts location data privacy bill. And some require either one or the other. In various times and places, EFF has supported all four configurations. “Either/or” is weakest, because it allows regulated entities to choose whether to minimize or to seek consent – a choice they will make based on their profit and not our privacy.

Two Protections of Location Data Privacy

Data brokers harvest our location information and sell it to anyone who will pay, including advertisers, police, and other adversaries. Legislators are stepping forward to address this threat.

The Washington law does so in two ways. First, the “consumer health data” protected by the consent-or-minimization rule is defined to include “precise location information that could reasonably indicate a consumer’s attempt to acquire or receive health services or supplies.” In turn, “precise location” is defined as within 1,750’ of a person.

Second, the Washington law bans a “geofence” around an “in-person health care service,” if “used” for one of three forbidden purposes (to track consumers, to collect their data, or to send them messages or ads). A “geofence” is defined as technology that uses GPS or the like “to establish a virtual boundary” of 2,000’ around the perimeter of a physical location.
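
To make these distance thresholds concrete, here is a minimal sketch of how a 2,000-foot geofence boundary check might be computed. This is purely illustrative and assumes nothing about how any vendor or agency actually implements geofencing: the haversine distance formula, the helper names, and the Seattle coordinates are all hypothetical choices for the example.

    import math

    EARTH_RADIUS_M = 6371000.0   # mean Earth radius in meters
    GEOFENCE_FEET = 2000.0       # Washington's geofence boundary
    FEET_TO_METERS = 0.3048

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance in meters between two latitude/longitude points.
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
        return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

    def inside_geofence(point_lat, point_lon, center_lat, center_lon):
        # True if a location falls within the 2,000-foot virtual boundary.
        return haversine_m(point_lat, point_lon, center_lat, center_lon) <= GEOFENCE_FEET * FEET_TO_METERS

    # Hypothetical coordinates a few hundred meters apart in Seattle.
    print(inside_geofence(47.6130, -122.3300, 47.6097, -122.3331))  # True: inside the ~610-meter radius

The statute's ban, of course, turns not on drawing such a boundary but on what it is used for: tracking consumers, collecting their data, or sending them messages or ads.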

This is a good start. It is also much better than weaker rules that only apply to the immediate vicinity of sensitive locations. Such rules allow adversaries to use location data to track us as we move towards sensitive locations, observe us enter the small no-data bubble around those locations, and infer what we may have done there. On the other hand, Washington’s rules apply to sizeable areas. Also, its consent-or-minimization rule applies to all locations that could indicate pursuit of health care (not just health facilities). And its geofence rule forbids use of location data to track people.

Still, the better approach, as in several recent bills, is to simply protect all location data. Protecting just one kind of sensitive location, like houses of worship, will leave out others, like courthouses. More fundamentally, all locations are sensitive, given the risk that others will use our location data to determine where – and with whom – we live, work, and socialize.

More Data Privacy Protections

Other safeguards in the Washington law deserve attention from legislators in other states:

  • Regulated entities must publish a privacy policy that discloses, for example, the categories of data collected and shared, and the purposes of collection. Regulated entities must not collect, use, or share additional categories of data, or process them for additional purposes, without consent.
  • Regulated entities must provide consumers the rights to access and delete their data.
  • Regulated entities must restrict data access to just those employees who need it, and maintain industry-standard data security.

Enforcement

A law is only as strong as its teeth. The best way to ensure enforcement is to empower people to sue regulated entities that violate their privacy; this is often called a “private right of action.”

The Washington law provides that its violation is “an unfair or deceptive act” under the state’s separate consumer protection act. That law, in turn, bans unfair or deceptive acts in the conduct of trade or commerce. Upon a violation of the ban, that law provides a civil action to “any person who is injured in [their] business or property,” with the remedies of injunction, actual damages, treble damages up to $25,000, and legal fees and costs. It remains to be seen how Washington’s courts will apply this old civil action to the new “my health my data” act.

Washington legislators are demonstrating that privacy is important to public policy, but it would be cleaner to name the injury explicitly: invasion of the fundamental human right to data privacy. Sadly, there is a nationwide debate about whether injury to data privacy, by itself, should be enough to go to court, without also proving a more tangible injury like identity theft. The best legislative models ensure full access to the courts in two ways. First, they provide: "A violation of this law regarding an individual's data constitutes an injury to that individual, and any individual alleging a violation of this law may bring a civil action." Second, they provide a baseline amount of damages (often called "liquidated" or "statutory" damages), because it is often difficult to prove actual damages arising from a data privacy injury.

Finally, data privacy laws must protect people from “pay for privacy” schemes, where a business charges a higher price or delivers an inferior product if a consumer exercises their statutory data privacy rights. Such schemes will lead to a society of privacy “haves” and “have nots.”

The Washington law has two helpful provisions. First, a regulated entity “may not unlawfully discriminate against a consumer for exercising any rights included in this chapter.” Second, there can be no data sale without a “statement” from the regulated entity to the consumer that “the provision of goods or services may not be conditioned on the consumer signing the valid authorization.”

Some privacy bills contain more-specific language, for example along these lines: “a regulated entity cannot take an adverse action against a consumer (such as refusal to provide a good or service, charging a higher price, or providing a lower quality) because the consumer exercised their data privacy rights, unless the data at issue is essential to the good or service they requested and then only to the extent the data is essential.”

What About Congress?

We still desperately need comprehensive federal consumer data privacy law built on “privacy first” principles. In the meantime, states are taking the lead. The very worst thing Congress could do now is preempt states from protecting their residents’ data privacy. Advocates and legislators from across the country, seeking to take up this mantle, would benefit from looking at – and building on – Washington’s “my health my data” law.

Adam Schwartz

🤫 Meta's Secret Spying Scheme | EFFector 37.7

2 weeks 2 days ago

Keeping up on the latest digital rights news has never been easier. With a new look, EFF's EFFector newsletter covers the latest details on our work defending your rights to privacy and free expression online.

EFFector 37.7 covers some of the very sneaky tactics that Meta has been using to track you online, and how you can mitigate some of this tracking. In this issue, we're also explaining the legal processes police use to obtain your private online data, and providing an update on the NO FAKES Act—a U.S. Senate bill that takes a flawed approach to concerns about AI-generated "replicas." 

And, in case you missed it in the previous newsletter, we're debuting a new audio companion to EFFector as well! This time, Lena Cohen breaks down the ways that Meta tracks you online and what you—and lawmakers—can do to prevent that tracking. You can listen now on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 37.7 - META'S SECRET SPYING SCHEME

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Podcast Episode: Cryptography Makes a Post-Quantum Leap

2 weeks 2 days ago

The cryptography that protects our privacy and security online relies on the fact that even the strongest computers will take essentially forever to do certain tasks, like factoring very large numbers and finding discrete logarithms, which are important for RSA encryption, Diffie-Hellman key exchanges, and elliptic curve encryption. But what happens when those problems – and the cryptography they underpin – are no longer infeasible for computers to solve? Will our online defenses collapse?


(You can also find this episode on the Internet Archive and on YouTube.) 

Not if Deirdre Connolly can help it. As a cutting-edge thinker in post-quantum cryptography, Connolly is making sure that the next giant leap forward in computing – quantum machines that use principles of subatomic mechanics to ignore some constraints of classical mathematics and solve complex problems much faster – doesn't reduce our digital walls to rubble. Connolly joins EFF's Cindy Cohn and Jason Kelley to discuss not only how post-quantum cryptography can shore up those existing walls, but also how it can help us find entirely new methods of protecting our information.

In this episode you’ll learn about: 

  • Why we're not yet sure exactly what quantum computing can do, and why that's exactly the reason we need to think about post-quantum cryptography now
  • What a “Harvest Now, Decrypt Later” attack is, and what’s happening today to defend against it
  • How cryptographic collaboration, competition, and community are key to exploring a variety of paths to post-quantum resilience
  • Why preparing for post-quantum cryptography is and isn’t like fixing the Y2K bug
  • How the best impact that end users can hope for from post-quantum cryptography is no visible impact at all
  • Don’t worry—you won’t have to know, or learn, any math for this episode!  

Deirdre Connolly is a research and applied cryptographer at Sandbox AQ with particular expertise in post-quantum encryption. She also co-hosts the "Security Cryptography Whatever" podcast about modern computer security and cryptography, with a focus on engineering and real-world experiences. Earlier, she was an engineer at the Zcash Foundation – a nonprofit that builds financial privacy infrastructure for the public good – as well as at Brightcove, Akamai, and HubSpot.

Resources: 

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

DEIRDRE CONNOLLY: I only got into cryptography, and especially post-quantum quickly after that, further into my professional life. I was a software engineer for a while, and the Snowden leaks happened, and phone records get leaked. All of Verizon's phone records get leaked. And then PRISM and more leaks and more leaks. And as an engineer first, I felt like everything that I was building and we were building and telling people to use was vulnerable.
I wanted to learn more about how to do things securely. I went further and further and further down the rabbit hole of cryptography. And then, I think I saw a talk which was basically like, oh, elliptic curves are vulnerable to a quantum attack. And I was like, well, I, I really like these things. They're very elegant mathematical objects, it's very beautiful. I was sad that they were fundamentally broken, and, I think it was, Dan Bernstein who was like, well, there's this new thing that uses elliptic curves, but is supposed to be post quantum secure.
But the math is very difficult and no one understands it. I was like, well, I want to understand it if it preserves my beautiful elliptic curves. That's how I just went, just running, screaming downhill into post quantum cryptography.

CINDY COHN: That's Deirdre Connolly talking about how her love of beautiful math and her anger at the Snowden revelations about how the government was undermining security, led her to the world of post-quantum cryptography.
I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY: And I'm Jason Kelley, EFF's activism director. You're listening to How to Fix the Internet.

CINDY COHN: On this show we talk to tech leaders, policy-makers, thinkers, artists and engineers about what the future could look like if we get things right online.

JASON KELLEY: Our guest today is at the forefront of the future of digital security. And just a heads up that this is one of the more technical episodes that we've recorded -- you'll hear quite a bit of cryptography jargon, so we've written up some of the terms that come up in the show notes, so take a look there if you hear a term you don't recognize.

CINDY COHN: Deirdre Connolly is a research engineer and applied cryptographer at Sandbox AQ, with a particular expertise in post-quantum encryption. She also co-hosts the Security, Cryptography, Whatever podcast, so she's something of a cryptography influencer too. When we asked our tech team here at EFF who we should be speaking with on this episode about quantum cryptography and quantum computers more generally, everyone agreed that Deirdre was the person. So we're very glad to have you here. Welcome, Deirdre.

DEIRDRE CONNOLLY: Thank you very much for having me. Hi.

CINDY COHN: Now we obviously work with a lot of technologists here and, and certainly personally cryptography is near and dear to my heart, but we are not technologists, neither Jason nor I. So can you just give us a baseline of what post-quantum cryptography is and why people are talking about it?

DEIRDRE CONNOLLY: Sure. So a lot of the cryptography that we have deployed in the real world relies on a lot of math and security assumptions on that math based on things like abstract groups, Diffie-Hellman, elliptic curves, finite fields, and factoring large numbers in, uh, systems like RSA.
All of these, constructions and problems, mathematical problems, have served us very well in the last 40-ish years of cryptography. They've let us build very useful, efficient, small cryptography that we've deployed in the real world. It turns out that they are all also vulnerable in the same way to advanced cryptographic attacks that are only possible and only efficient when run on a quantum computer, and this is a class of computation, a whole new class of computation versus digital computers, which is the main computing paradigm that we've been used to for the last 75 years plus.
Quantum computers allow these new classes of attacks, especially variants of Shor's algorithm – named for Dr. Peter Shor – that basically, when run on a sufficiently large, cryptographically relevant quantum computer, make all of the asymmetric cryptography based on these problems that we've deployed very, very vulnerable.
So post-quantum cryptography is trying to take that class of attack into consideration and building cryptography to both replace what we've already deployed and make it resilient to this kind of attack, and trying to see what else we can do with these fundamentally different mathematical and cryptographic assumptions when building cryptography.
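
For readers who want a concrete sense of the asymmetry Connolly is describing, here is a toy sketch: multiplying two primes together is effectively free, while recovering them from the product by brute-force trial division quickly becomes slow as the numbers grow. The numbers below are tiny and hypothetical, nothing like real key sizes, and this is not how RSA is actually implemented; Shor's algorithm running on a large quantum computer removes exactly this kind of asymmetry.

    import math

    def trial_division_factor(n):
        # Recover a prime factor of n by brute force - this scales terribly as n grows.
        for candidate in range(2, math.isqrt(n) + 1):
            if n % candidate == 0:
                return candidate, n // candidate
        return None

    # Hypothetical toy primes; real RSA moduli are thousands of bits long.
    p, q = 1_000_003, 1_000_033
    n = p * q                        # multiplying takes a few nanoseconds
    print(trial_division_factor(n))  # recovering (1000003, 1000033) takes about a million divisions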

CINDY COHN: So we've kind of, we've secured our stuff behind a whole lot of walls, and we're slowly building a bulldozer. This is a particular piece of the world where the speed at which computers can do things has been part of our protection, and so we have to rethink that.

DEIRDRE CONNOLLY: Yeah, quantum computing is a fundamentally new paradigm of how we process data that promises to have very interesting, uh, applications beyond what we can envision right now. Like things like protein folding, chemical analysis, nuclear simulation, and cryptanalysis, or very strong attacks against cryptography.
But it is a field where it's such a fundamentally new computational paradigm that we don't even know what its applications fully would be yet, because like we didn't fully know what we were doing with digital computers in the forties and fifties. Like they were big calculators at one time.

JASON KELLEY: When it was suggested that we talk to you about this. I admit that I have not heard much about this field, and I realized quickly when looking into it that there's sort of a ton of hype around quantum computing and post-quantum cryptography and that kind of hype can make it hard to know whether or not something is like actually going to be a big thing or, whether this is something that's becoming like an investment cycle, like a lot of things do. And one of the things that quickly came up as an actual, like real danger is what's called sort of “save now decrypt later.”

DEIRDRE CONNOLLY: Oh yeah.

JASON KELLEY: Right? We have all these messages, for example, that have been encrypted with current encryption methods. And if someone holds onto those, they can decrypt them using quantum computers in the future. How serious is that danger?

DEIRDRE CONNOLLY: It's definitely a concern, and it's the number one driver, I would say, of post-quantum crypto adoption in broad industry right now: mitigating the threat of a Store Now/Decrypt Later attack, also known as Harvest Now/Decrypt Later, a bunch of names that all mean the same thing.
And fundamentally, it's, uh, especially if you're doing any kind of key agreement over a public channel, and doing key agreement over a public channel is part of the whole purpose of like, you want to be able to talk to someone who you've never really, touched base with before, and you all kind of know, some public parameters that even your adversary knows and based on just the fact that you can send messages to each other and some public parameters, and some secret values that only you know, and only the other party knows you can establish a shared secret, and then you can start encrypting traffic between you to communicate. And this is what you do in your web browser when you have an HTTPS connection, that's over TLS.
This is what you do with Signal or WhatsApp or any, or, you know, Facebook Messenger with the encrypted communications. They're using Diffie-Hellman as part of the protocol to set up a shared secret, and then you use that to encrypt the message bodies that you're sending back and forth between you.
But if you can just store all those communications over that public channel, and the adversary knows the public parameters 'cause they're freely published, that's part of Kerckhoffs's principle about good cryptography - the only thing that the adversary shouldn't know about your crypto system is the secret key values that you're actually using. It should be secure against an adversary that knows everything that you know, except the secret key material.
And you can just record all those public messages and all the public key exchange messages, and you just store them in a big database somewhere. And then when you have your large cryptographically relevant quantum computer, you can rifle through your files and say, hmm, let's point it at this.
And that's the threat that's live now to the stuff that we have already deployed and the stuff that we're continuing to do communications on now that is protected by elliptic curve Diffie Hellman, or Finite Field Diffie Hellman, or RSA. They can just record that and just theoretically point an attack at it at a later date when that attack comes online.
So like in TLS, there's a lot of browsers and servers and infrastructure providers that have updated to post-quantum resilient solutions for TLS. So they're using a combination of the classic elliptic curve Diffie-Hellman and a post-quantum KEM, uh, called ML-KEM, that was standardized by the United States based on a public design that's been, you know, a multinational collaboration to help do this design.
I think that's been deployed in Chrome, and I think it's deployed by Cloudflare, and it's getting deployed – I think it's now become the default option in the latest version of OpenSSL and a lot of other open source projects. So that's TLS. Similar approaches are being adopted in OpenSSH, the most popular SSH implementation in the world. Signal, the service, has updated their key exchange to also include a post-quantum KEM in their updated key establishment. So when you start a new conversation with someone, or reset a conversation with someone, on the latest version of Signal, that is now protected against that sort of attack.
That is definitely happening and it's happening the most rapidly because of that Store now/Decrypt later attack, which is considered live. Everything that we're doing now can just be recorded and then later when the attack comes online, they can attack us retroactively. So that's definitely a big driver of things changing in the wild right now.
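
As a rough illustration of the hybrid approach described above, the sketch below combines a classical X25519 Diffie-Hellman shared secret with a post-quantum KEM shared secret through a key-derivation function, so the session key stays safe as long as either component holds up. It assumes the pyca/cryptography library for the X25519 and HKDF pieces; because ML-KEM bindings vary from library to library, the post-quantum secret here is a random placeholder standing in for a real decapsulated value. It is a sketch of the general shape, not any particular protocol's combiner.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Classical piece: an ephemeral X25519 Diffie-Hellman exchange
    # (both sides shown in one script for illustration).
    client_priv = X25519PrivateKey.generate()
    server_priv = X25519PrivateKey.generate()
    ecdh_secret = client_priv.exchange(server_priv.public_key())

    # Post-quantum piece: placeholder for an ML-KEM shared secret.
    # A real client would encapsulate to the server's ML-KEM public key.
    pq_secret = os.urandom(32)

    # Combine both secrets; an attacker must break both to recover the session key.
    session_key = HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=b"illustrative hybrid key exchange",
    ).derive(ecdh_secret + pq_secret)

    print(session_key.hex())

Deployed hybrids in TLS and Signal pin down the exact encoding, ordering, and transcript binding of these inputs; the point here is only that traffic recorded today and protected this way is not exposed by breaking the elliptic-curve half alone.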

JASON KELLEY: Okay. I'm going to throw out two parallels for my very limited knowledge to make sure I understand. This reminds me a little bit of sort of the work that had to be done before Y2K in, in the sense of like, now people think nothing went wrong and nothing was ever gonna go wrong, but all of us working anywhere near the field know actually it took a ton of work to make sure that nothing blew up or stopped working.
And the other is that in, I think it was 1998, EFF was involved in something we called Deep Crack, where we made, that's a, I'm realizing now that's a terrible name. But anyway, the DES cracker, um, we basically wanted to show that DES was capable of being cracked, right? And that this was a - correct me if I'm wrong - it was some sort of cryptographic standard that the government was using and people wanted to show that it wasn't sufficient.

DEIRDRE CONNOLLY: Yes - I think it was the original Data Encryption Standard. And then after its vulnerability was shown, they, they tripled it up to, to make it useful. And that's why Triple DES is still used in a lot of places and is actually considered okay. And then later came the Advanced Encryption Standard, AES, which we prefer today.

JASON KELLEY: Okay, so we've learned the lesson, or we are learning the lesson, it sounds like.

DEIRDRE CONNOLLY: Uh huh.

CINDY COHN: Yeah, I think that that's, that's right. I mean, EFF built the DES cracker because in the nineties the government was insisting that something that everybody knew was really, really insecure and was going to only get worse as computers got stronger and, and strong computers got in more people's hands, um, to basically show that the emperor had no clothes, um, that this wasn't very good.
And I think with the NIST standards and what's happening with post-quantum is really, you know, the hopeful version is we learned that lesson and we're not seeing government trying to pretend like there isn't a risk in order to preserve old standards, but instead leading the way with new ones. Is that fair?

DEIRDRE CONNOLLY: That is very fair. NIST ran this post-quantum competition over almost 10 years, and it had over 80 submissions in the first round from all over the world, from industry, academia, and a mix of everything in between, and then it narrowed it down. They're not all out yet, but there's the key agreement, one called ML-KEM, and three signatures. And there's a mix of cryptographic problems that they're based on, but there were multiple rounds, lots of feedback, lots of things got broken.
This competition has absolutely led the way for the world of getting ready for post-quantum cryptography. There are some competitions that have happened in Korea, and I think there's some work happening in China for their, you know, for their area.
There are other open standards and there are standards happening in other standards bodies, but the NIST competition has led the way, because it's all open: all these standards are open, and all of the work and the cryptanalysis that has gone in for the whole stretch has been public. All these standards and drafts and analyses and attacks have been public, so it's able to benefit everyone in the world.

CINDY COHN: I got started in the crypto wars in the nineties, where the government was kind of the problem, and they still are. And I do wanna ask you about whether you're seeing any role of the kinda national security, FBI infrastructure, which has traditionally tried to put a thumb on the scales and make things less secure so that they could have access, if you're seeing any of that there.
But on the NIST side, I think this provides a nice counter example of how government can help facilitate building a better world sometimes, as opposed to being the thing we have to drag kicking and screaming into it.
But let me circle around to the question I embedded in that, which is, you know, one of the problems that we know happened in the nineties around DES, and then of course some of the Snowden revelations indicated some mucking about in security as well behind the scenes by the NSA. Are you seeing anything like that, and what should we be on the lookout for?

DEIRDRE CONNOLLY: Not in the PQC stuff. Uh, there, like there have been a lot of people that were paying very close attention to what these independent teams were proposing and then what was getting turned into a standard or a proposed standard and every little change, because I, I was closely following the key establishment stuff.
Um, every little change people were trying to be like, did you tweak? Why did you tweak that? Did, like, is there a good reason? And like, running down basically all of those things. And like including trying to get into the nitty gritty of like. Okay. We think this is approximately these many bits of security using these parameter and like talking about, I dunno, 123 versus 128 bits and like really paying attention to all of that stuff.
And I don't think there was any evidence of anything like that. And, and for, for plus or minus, because there were. I don't remember which crypto scheme it was, but it, there was definitely an improvement from, I think some of the folks at NSA very quietly back in the day to, I think it was the S boxes, and I don't remember if it was DES or AES or whatever it was.
But people didn't understand at the time because it was related to advanced, uh, I think it was a differential crypto analysis attacks that folks inside there knew about, and people in outside academia didn't quite know about yet. And then after the fact they were like, oh, they've made this better. Um, we're not, we're not even seeing any evidence of anything of that character either.
It's just sort of like, it's very open letting, like if everything's proceeding well and the products are going well of these post-quantum standards, like, you know, leave it alone. And so everything looks good. And like, especially for NSA, uh, national Security Systems in the, in the United States, they have updated their own targets to migrate to post-quantum, and they are relying fully on the highest security level of these new standards.
So like they are eating their own dog food. They're protecting the highest classified systems and saying these need to be fully migrated to fully post quantum key agreement. Uh, and I think signatures at different times, but there has to be by like 2035. So if they were doing anything to kind of twiddle with those standards, they'd be, you know, hurting themselves and shooting themselves in the foot.

CINDY COHN: Well fingers crossed.

DEIRDRE CONNOLLY: Yes.

CINDY COHN: Because I wanna build a better internet, and a better internet means that they aren't secretly messing around with our security. And so this is, you know, cautiously good news.

JASON KELLEY: Let's take a quick moment to thank our sponsor.
“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also want to thank EFF members and donors. EFF has been fighting for digital rights for 35 years, and that fight is bigger than ever, so please, if you like what we do, go to eff.org/pod to donate. Also, we’d love for you to join us at this year’s EFF awards, where we celebrate the people working towards the better digital future that we all care so much about. Those are coming up on September 12th in San Francisco. You can find more information about that at eff.org/awards.
We also wanted to share that our friend Cory Doctorow has a new podcast. Listen to this.  [Who Broke the Internet trailer]

JASON KELLEY: And now, back to our conversation with Deirdre Connolly.

CINDY COHN: I think the thing that's fascinating about this is kind of seeing this cat and mouse game about the ability to break codes, and the ability to build codes and systems that are resistant to the breaking, kind of playing out here in the context of building better computers for everyone.
And I think it's really fascinating. And, you know, this is a pretty technical conversation, um, even, you know, uh, for our audience. But this is the stuff that goes on under the hood of how we keep journalists safe, how we keep activists safe, how we keep us all safe, whether it's our bank accounts or, you know, people are talking about mobile IDs now and all sorts of other sensitive documents that are going to not be in physical form anymore, but are gonna be in digital form.
And unless we get this lock part right, we're really creating problems for people. And you know, what I really appreciate about you and the other people kind of in the midst of this fight is it's very unsung, right? It's kind of under the radar for the rest of us, but yet it's the, it's the ground that we need to stand on to, to be safe moving forward.

DEIRDRE CONNOLLY: Yeah, and there's a lot of assumptions, uh, that even the low level theoretical cryptographers and the people implementing their, their stuff into software and the stuff, the people trying to deploy, that there's a, a lot of assumptions that have been baked into what we've built that to a degree don't quite fit in some of the, the things we've been able to build in a post-quantum secure way, or the way we think it's a post-quantum secure way.
Um, we're gonna need to change some stuff, and we think we know how to change it to make it work, but we are hoping that we don't accidentally introduce any vulnerabilities or gaps.
We're trying, but also we're not a hundred percent sure that we're not missing something, 'cause these things are new. And we're also trying to make sure we don't break things as we change them to be post-quantum resilient. Because once you change something, there's a possibility you just didn't understand it completely, and you don't wanna break something that was working well in one direction because you wanna improve it in another direction.

CINDY COHN: And that's why I think it's important to continue to have a robust community of people who are the breakers, right? Who are hackers, who are attacking. And that is a mindset, right? That's a way of thinking about stuff that is important to protect and nurture, because, you know, there's an old quote from Bruce Schneier: anyone can build a cryptosystem that they themselves cannot break. Right? It takes a community of people really pounding away at something to figure out where the holes are.
And you know, a lot of the work that EFF does around coders' rights and other kinds of things is to make sure that there's space for that. And I think it's gonna be as needed in a quantum world as it was in the classical computer world.

DEIRDRE CONNOLLY: Absolutely. I'm confident that we will learn a lot more from the breakers about this new cryptography, because we've tried to be robust through this, you know, NIST competition, and a lot of the things that we learn apply to other constructions as they come out. But there's a whole area of people who are going to be encountering this kind of newish cryptography for the first time, and they look at it and they're like, oh, I think I might be able to do something interesting with this. And we'll all learn more, and we'll try to patch and update as quickly as possible.

JASON KELLEY: And this is why we have competitions to figure out what the best options are, and why some people might favor one algorithm over another for different processes and things like that.

DEIRDRE CONNOLLY: And that's why we're probably gonna have a lot of different flavors of post-quantum cryptography getting deployed in the world. It's not just, ah, you know, I don't love NIST, I'm gonna do my own thing in my own country over here, or have different requirements. There is that at play, but you're also trying to not put all your eggs in one basket.

CINDY COHN: Yeah, so we want a menu of things so that people can really pick from, you know, vetted but different strategies. So I wanna ask the kind of core question for the podcast, which is: what does it look like if we get this right, if we get quantum computing and, you know, post-quantum crypto right?
How does the world look different? Or does it just look the same? What does it look like if we do this well?

DEIRDRE CONNOLLY: Hopefully to a person just using their phone or using their computer to talk to somebody on the other side of the world, hopefully they don't notice. Hopefully to them, if they're, you know, deploying a website and they're like, ah, I need to get a Let’s Encrypt certificate or whatever.
Hopefully Let's Encrypt and, you know, Certbot just kind of do everything right by default, and they don't have to worry about it.
Um, for the builders, it should be, we have a good recommended menu of cryptography that you can use when you're deploying TLS, when you're deploying SSH, uh, when you're building cryptographic applications, especially.
So if you are building something in Go or Java or, you know, whatever it might be, the crypto library in your language will have the updated recommended signature algorithm or key agreement algorithm, and, you know, they'll have code snippets to say, like, this is how you should use it, and they will deprecate the older stuff.
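To make that concrete, here is a minimal sketch of what calling a post-quantum key agreement from a language's standard library can look like. It assumes Go's crypto/mlkem package (added in Go 1.24 for ML-KEM-768, the key-encapsulation mechanism NIST standardized in FIPS 203); the exact function names and signatures shown here are this sketch's assumption and may differ from what your toolchain ships, so treat it as illustrative rather than a definitive recipe.

```go
// Sketch only: post-quantum key agreement with ML-KEM-768.
// Assumes Go 1.24's crypto/mlkem package; verify names against your Go docs.
package main

import (
	"bytes"
	"crypto/mlkem"
	"fmt"
	"log"
)

func main() {
	// Receiver: generate an ML-KEM-768 key pair and publish the
	// encapsulation (public) key.
	dk, err := mlkem.GenerateKey768()
	if err != nil {
		log.Fatal(err)
	}
	ekBytes := dk.EncapsulationKey().Bytes()

	// Sender: parse the receiver's public key and encapsulate a fresh
	// shared secret against it, producing a ciphertext to send back.
	ek, err := mlkem.NewEncapsulationKey768(ekBytes)
	if err != nil {
		log.Fatal(err)
	}
	senderShared, ciphertext := ek.Encapsulate()

	// Receiver: decapsulate the ciphertext to recover the same secret.
	receiverShared, err := dk.Decapsulate(ciphertext)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("shared secrets match:", bytes.Equal(senderShared, receiverShared))
	// In real deployments (for example, hybrid TLS key exchange) this
	// secret is typically combined with a classical X25519 secret in a
	// key-derivation function rather than used directly.
}
```

The point Deirdre is making is that most builders should never need to touch these primitives by hand: the recommended construction, and the deprecation of the older ones, should come packaged in the library defaults.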
And unfortunately there's gonna be a long time where there's gonna be a mix. The new post-quantum stuff that we know how to use and know how to deploy, we use to protect the most important stuff, you know, to mitigate store now/decrypt later, and to get signatures on the most important protected stuff.
Get those done. But there's a lot of stuff that we're not really clear about how we wanna do yet. And kind of going back to one of the things you mentioned earlier, comparing this to Y2K, there was a lot of work that went into mitigating Y2K before, during, and immediately after.
Unfortunately, the comparison to the post-quantum migration kind of falls down, because after Y2K, if you hadn't fixed something, it would break, and you would notice, usually in an obvious way, and then you could go find it. You fix the most important stuff, the stuff where, if it broke, you would lose billions of dollars or, you know, you'd have an outage.
For cryptography, especially the stuff that's a little bit fancier, you might not know it's broken, because when the adversary breaks it, it's not gonna blow up in a way where you have to, you know, reboot a server or patch something and then redeploy. If it's gonna fail, it's gonna fail quietly. And so we're trying to find these things, or at least find fixes up front for the longer tail of stuff, you know, so that at least the option is available.
But for a regular person, hopefully they shouldn't notice. Everyone's trying really hard to make it so that the best security, in terms of the cryptography, is deployed without downgrading your experience. We're gonna keep trying to do that.
I don't wanna build crap and say “Go use it.” I want you to be able to just go about your life and use a tool that's supposed to be useful and helpful, and that's not accidentally leaking all your data to some third-party service, or just leaving a hole on your network for any actor who notices to walk through, and all that sort of stuff.
So whether it's implementing things securely in software, or it's cryptography, or, you know, post-quantum weirdness, for me, I just wanna build good stuff for people that's not crap.

JASON KELLEY: Everyone listening to this agrees with you. We don't want to build crap. We want to build some beautiful things. Let's go out there and do it.

DEIRDRE CONNOLLY: Cool.

JASON KELLEY: Thank you so much, Deirdre.

DEIRDRE CONNOLLY: Thank you!

CINDY COHN: Thank you Deirdre. We really appreciate you coming and explaining all of this to, you know, uh, the lawyer and activist at EFF.

JASON KELLEY: Well, I think that was probably the most technical conversation we've had, but I followed along pretty well. At first I was very nervous based on the store now/decrypt later concerns, but after we talked to Deirdre, I feel like the people working on this, just like for Y2K, are pretty much gonna keep us out of hot water. And I learned a lot more than I knew before we started the conversation. What about you, Cindy?

CINDY COHN: I learned a lot as well. I mean, cryptography and attacks on security, it's always, you know, a process, and it's a process by which we do the best we can, and then we also do the best we can to rip it apart and find all the holes, and then we iterate forward. And it's nice to hear that that model is still the model, even as we get into something like quantum computers, which, frankly, are still hard to conceptualize.
But I agree. I think the good news out of this interview is that I feel like there are a lot of pieces in place to try to do this right, to have this tremendous shift in computing, which we don't know when it's coming, but the research indicates it IS coming, be something that we can handle rather than something that overwhelms us.
And I think it's really good to hear that good people are trying to do the right thing here, since it's not inevitable.

JASON KELLEY: Yeah, and it is nice when someone's best vision for what the future looks like is, hopefully, that your life will see no impacts from this because everything will be taken care of. That's always good.
I mean, it sounds like, you know, the main thing for EFF is, as you said, we have to make sure that security engineers, hackers have the resources that they need to protect us from these kinds of threats and, and other kinds of threats obviously.
But, you know, that's part of EFF's job, like you mentioned. Our job is to make sure that there are people able to do this work and be protected while doing it, so that when the solutions do come about, you know, they work and they're implemented, and the average person doesn't have to know anything and isn't vulnerable.

CINDY COHN: Yeah, I also think I appreciated her vision that, you know, the future's gonna be not just a one-size-fits-all solution, but a menu of things that take into account both what works better in terms of bandwidth and compute time, and also what people actually need.
And I think that's a piece that's built into the way that this is happening that's also really hopeful. In the past, and I was around when EFF built the DES cracker, we had a government that was saying, you know, everything's fine, everything's fine, when everybody knew that things weren't fine.
So it's also really hopeful that that's not the position that NIST is taking now, and that's not the position that people who may not even pick the NIST standards but pick other standards are really thinking through.

JASON KELLEY: Yeah, it's very helpful and positive and nice to hear when something has changed for the better, right? And that's what happened here. We had this different attitude from, you know, government at large in the past, and it's changed, and that's partly thanks to EFF, which is amazing.

CINDY COHN: Yeah, I think that's right. And, um, you know, we'll see going forward, you know, the governments change and they go through different things, but this is, this is a hopeful moment and we're gonna push on through to this future.
I think there's a lot of, you know, there's a lot of worry about quantum computers and what they're gonna do in the world, and it's nice to have a little vision of, not only can we get it right, but there are forces in place that are getting it right. And of course it does my heart so, so good that, you know, someone like Deirdre was inspired by Snowden and dove deep and figured out how to be one of the people who was building the better world. We've talked to so many people like that, and this is a particular, you know, little geeky corner of the world. But, you know, those are our people and that makes me really happy.

JASON KELLEY: Thanks for joining us for this episode of How to Fix the Internet.
If you have feedback or suggestions, we'd love to hear from you. Visit EFF dot org slash podcast and click on listener feedback. While you're there, you can become a member, donate, maybe even pick up some merch and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of BeatMower with Reed Mathis
How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.
We’ll see you next time.
I’m Jason Kelley…

CINDY COHN: And I’m Cindy Cohn.

MUSIC CREDITS: This podcast is licensed creative commons attribution 4.0 international, and includes the following music licensed creative commons attribution 3.0 unported by its creators: Drops of H2O, The Filtered Water Treatment by Jay Lang. Sound design, additional music and theme remixes by Gaetan Harris.

Josh Richman