EFF Tells Virginia Court That Constitutional Privacy Protections Forbid Cops from Finding out Everyone Who Searched for a Keyword

1 day 18 hours ago

This post was co-authored by EFF legal intern Noam Shemtov.

We are in a constant dialogue with Internet search engines, ranging from the mundane to the confessional. We ask search engines everything: What movies are playing (and which are worth seeing)? Where’s the nearest clinic (and how do I get there)? Who’s running in the sheriff’s race (and what are their views)? These online queries can give insight into our private details and innermost thoughts, but police increasingly access them without adhering to longstanding limits on government investigative power.

A Virginia appeals court is poised to review such a request in a case called Commonwealth v. Clements. In Clements, police sought evidence under a “reverse-keyword warrant,” a novel court order that compels search engines like Google to hand over information about every person who has looked up a word or phrase online. While the trial judge correctly recognized the privacy interest in our Internet queries, he overlooked the other wide-ranging harms that keyword warrants enable and upheld the search.

But as EFF and the ACLU explained in our amicus brief on appeal, reverse keyword warrants simply cannot be conducted in a lawful way. They invert privacy protections, threaten free speech and inquiry, and fundamentally conflict with the principles underlying the Fourth Amendment and its analog in the Virginia Constitution. The court of appeals now has a chance to say so and protect the rights of Internet users well beyond state lines.

To comply with a keyword warrant, a search engine has to trawl through its entire database of user queries to pinpoint the accounts or devices that made a responsive search. For a dominant service like Google, that means billions of records. Such a wide dragnet will predictably pull in people with no plausible connection to a crime under investigation if their searches happened to include keywords police are interested in.

Critically, investigators seldom have a suspect in mind when they seek a reverse-keyword warrant. That isn’t surprising. True to their name, these searches “work in reverse” from the traditional investigative process. What makes them so useful is precisely their ability to identify Internet users on the sole basis of what they searched online. But what makes a search technique convenient to the government does not always make it constitutional. Quite the opposite: inherently suspicionless dragnets are anathema to the constitution.

The Fourth Amendment forbids “exploratory rummaging”—in fact, it was drafted in direct response to British colonial soldiers’ practice of indiscriminately searching people’s homes and papers for evidence of their opposition to the Crown. To secure a lawful warrant, police must have a specific basis to believe evidence will be found in a given location. They must also describe that location in some detail and say what evidence they expect to find there. It’s hard to think of a less specific description than “all the Internet searches in the world” or a weaker hunch than “whoever committed the crime probably looked up search term x.” Because those airy assertions are all law enforcement can marshal in support of keyword warrants, they are “tantamount to high-tech versions of the reviled ‘general warrants’ that first gave rise to the . . . Fourth Amendment” and Virginia’s even stronger search-and-seizure provision.

What’s more, since keyword warrants compel search engine companies to hand over records about anyone anywhere who looked up a particular search term within a given timeframe, they effectively make a suspect out of every person whose online activity falls within the warrant’s sweep. As one court has said about related geofences, this approach “invert[s] probable cause” and “cannot stand.”

Keyword warrants’ fatal flaws are even more drastic considering that privacy rights apply with special force to searches of items—like diaries, booklists, and Internet search queries—that reflect a person’s free thought and expression. As both law and lived experience affirm, the Internet is “the most important place[] . . . for the exchange of views.” Using it—and using keyword searches to navigate the practical infinity of its contents—is “indispensable to participation in modern society.” We shouldn’t have to engage in that core endeavor with the fear that our searches will incriminate us, subject to police officers’ discretion about what keywords are worthy of suspicion. That outcome would predictably chill people from accessing information about sensitive and important topics like reproductive health, public safety, or events in the news that could be relevant to a criminal investigation.

The Virginia Court of Appeals now has the opportunity in Clements to protect privacy and speech rights by affirming that keyword warrants can’t be reconciled with constitutional protections guaranteed at the federal or state level. We hope it does so.

Andrew Crocker

No Face, No Case: California’s S.B. 627 Demands Cops Show Their Faces

1 day 19 hours ago

Across the country, people are collecting and sharing footage of masked law enforcement officers from both federal and local agencies deputized to do so-called immigration enforcement: arresting civilians, in some cases violently and/or warrantlessly. That footage is part of a long tradition of recording law enforcement during their operations to ensure some level of accountability if people observe misconduct and/or unconstitutional practices. However, as essential as recording police can be in proving allegations of misconduct, the footage is rendered far less useful when officers conceal their badges and/or faces. Further, lawyers, journalists, and activists cannot then identify officers in public records requests for body-worn camera footage to view the interaction from the officers’ point of view. 

In response to these growing concerns, California has introduced S.B. 627 to prohibit law enforcement from covering their faces during these kinds of public encounters. This builds on legislation (in California and some other states and municipalities) that requires police, for example, “to wear a badge, nameplate, or other device which bears clearly on its face the identification number or name of the officer.” Similarly, police reform legislation passed in 2018 requires greater transparency by opening individual personnel files of law enforcement to public scrutiny when there are use of force cases or allegations of violent misconduct.

But in the case of ICE detentions in 2025, federal and federally deputized officers are not only covering up their badges—they're covering their faces as well. This bill would offer an important tool to prevent this practice, and to ensure that civilians who record the police can actually determine the identity of the officers they’re recording, in case further investigation is warranted. The legislation explicitly includes “any officer or anyone acting on behalf of a local, state, or federal law enforcement agency.” 

This is a necessary move. The right to record police, and to hold government actors accountable for their actions, requires that we know who the government actors are in the first place. The new legislation seeks to cover federal officers in addition to state and local officials, protecting Californians from otherwise unaccountable law enforcement activity. 

As EFF has stood up for the right to record police, we also stand up for the right to be able to identify officers in those recordings. We have submitted a letter to the state legislature to that effect. California should pass S.B. 627, and more states should follow suit to ensure that the right to record remains intact. 

José Martinez

Axon’s Draft One is Designed to Defy Transparency

1 day 23 hours ago

Axon Enterprise’s Draft One — a generative artificial intelligence product that writes police reports based on audio from officers’ body-worn cameras — seems deliberately designed to avoid audits that could provide any accountability to the public, an EFF investigation has found.

Our review of public records from police agencies already using the technology — including police reports, emails, procurement documents, department policies, software settings, and more — as well as Axon’s own user manuals and marketing materials revealed that it’s often impossible to tell which parts of a police report were generated by AI and which parts were written by an officer.

You can read our full report, which details what we found in those documents, how we filed those public records requests, and how you can file your own, here.

Everyone should have access to answers, evidence, and data regarding the effectiveness and dangers of this technology. Axon and its customers claim this technology will revolutionize policing, but it remains to be seen how it will change the criminal justice system, and who this technology benefits most.

For months, EFF and other organizations have warned about the threats this technology poses to accountability and transparency in an already flawed criminal justice system.  Now we've concluded the situation is even worse than we thought: There is no meaningful way to audit Draft One usage, whether you're a police chief or an independent researcher, because Axon designed it that way. 

Draft One uses a ChatGPT variant to process body-worn camera audio of public encounters and create police reports based only on the captured verbal dialogue; it does not process the video. The Draft One-generated text is sprinkled with bracketed placeholders where officers are encouraged to add additional observations or information—or which can be quickly deleted. Officers are supposed to edit Draft One's report and correct anything the Gen AI misunderstood due to a lack of context, troubled translations, or just plain-old mistakes. When they're done, the officer is prompted to sign an acknowledgement that the report was generated using Draft One, that they have reviewed it, and that they have made the edits necessary to ensure it is consistent with their recollection. Then they can copy and paste the text into their report. When they close the window, the draft disappears.

Any new, untested, and problematic technology needs a robust process to evaluate its use by officers. In this case, one would expect police agencies to retain data that ensures officers are actually editing the AI-generated reports as required, or that officers can accurately answer if a judge demands to know whether, or which part of, reports used by the prosecution were written by AI. 

"We love having new toys until the public gets wind of them."

One would expect audit systems to be readily available to police supervisors, researchers, and the public, so that anyone can make their own independent conclusions. And one would expect that Draft One would make it easy to discern its AI product from human product – after all, even your basic, free word processing software can track changes and save a document history.

But Draft One defies all these expectations, offering meager oversight features that deliberately conceal how it is used. 

So when a police report includes biased language, inaccuracies, misinterpretations, or even outright lies, the record won't indicate whether the officer or the AI is to blame. That makes it extremely difficult, if not impossible, to assess how the system affects justice outcomes, because there is little non-anecdotal data from which to determine whether the technology is junk. 

The disregard for transparency is perhaps best encapsulated by a short email that an administrator in the Frederick Police Department in Colorado, one of Axon's first Draft One customers, sent to a company representative after receiving a public records request related to AI-generated reports. 

"We love having new toys until the public gets wind of them," the administrator wrote.

No Record of Who Wrote What

The first question anyone should have about a police report written using Draft One is which parts were written by AI and which were added by the officer. Once you know this, you can start to answer more questions, like: 

  • Are officers meaningfully editing and adding to the AI draft? Or are they reflexively rubber-stamping the drafts to move on as quickly as possible? 
  • How often are officers finding and correcting errors made by the AI, and are there patterns to these errors? 
  • If there is inappropriate language or a fabrication in the final report, was it introduced by the AI or the officer? 
  • Is the AI overstepping in its interpretation of the audio? If a report says, "the subject made a threatening gesture," was that added by the officer, or did the AI make a factual assumption based on the audio? If a suspect uses metaphorical slang, does the AI document it literally? If a subject says "yeah" throughout a conversation as a verbal acknowledgement that they're listening to what the officer says, is that interpreted as an agreement or a confession?

"So we don’t store the original draft and that’s by design..."

Ironically, Draft One does not save the first draft it generates. Nor does the system store any subsequent versions. Instead, the officer copies and pastes the text into the police report, and the previous draft, originally created by Draft One, disappears as soon as the window closes. There is no log or record indicating which portions of a report were written by the computer and which portions were written by the officer, except for the officer's own recollection. If an officer generates a Draft One report multiple times, there's no way to tell whether the AI interprets the audio differently each time.

Axon is open about not maintaining these records, at least when it markets directly to law enforcement.

In this video of a roundtable discussion about the Draft One product, Axon’s senior principal product manager for generative AI is asked (at the 49:47 mark) whether or not it’s possible to see after-the-fact which parts of the report were suggested by the AI and which were edited by the officer. His response (bold and definition of RMS added): 

“So we don’t store the original draft and that’s by design and that’s really because the last thing we want to do is create more disclosure headaches for our customers and our attorney’s offices—so basically the officer generates that draft, they make their edits, if they submit it into our Axon records system then that’s the only place we store it, if they copy and paste it into their third-party RMS [records management system] system as soon as they’re done with that and close their browser tab, it’s gone. It’s actually never stored in the cloud at all so you don’t have to worry about extra copies floating around.”

To reiterate: Axon deliberately does not store the original draft written by the Gen AI, because "the last thing" they want is for cops to have to provide that data to anyone (say, a judge, defense attorney or civil liberties non-profit). 

Following up on the same question, Axon's Director of Strategic Relationships at Axon Justice suggests this is fine, since a police officer using a word processor wouldn't be required to save every draft of a police report as they're re-writing it. This is, of course, misdirection and not remotely comparable. An officer with a word processor is one thought process and a record created by one party; Draft One is two processes from two parties–Axon and the officer. Ultimately, it could and should be considered two records: the version sent to the officer from Axon and the version edited by the officer.

Word processors long ago stopped producing unexpected consequences for police report-writing, but Draft One is still unproven. After all, every AI-evangelist, including Axon, claims this technology is a game-changer. So, why wouldn't an agency want to maintain a record that can establish the technology’s accuracy?

It also appears that Draft One isn't simply hewing to long-established norms of police report-writing; it may fundamentally change them. In one email, the Campbell Police Department's Police Records Supervisor tells staff, “You may notice a significant difference with the narrative format…if the DA’s office has comments regarding our report narratives, please let me know.” It's more than a little shocking that a police department would implement such a change without fully soliciting and addressing the input of prosecutors. In this case, the Santa Clara County District Attorney had already suggested police include a disclosure when Axon Draft One is used in each report, but Axon's engineers had yet to finalize the feature at the time it was rolled out. 

One of the main concerns, of course, is that this system effectively creates a smokescreen over truth-telling in police reports. If an officer lies or uses inappropriate language in a police report, who is to say whether the officer wrote it or the AI did? An officer can be punished severely for official dishonesty, but the consequences may be more lenient for a cop who blames it on the AI. Axon engineers have already discovered a bug that allowed officers, on at least three occasions, to circumvent the "guardrails" that supposedly deter officers from submitting AI-generated reports without reading them first, as Axon disclosed to the Frederick Police Department.

To serve and protect the public interest, the AI output must be continually and aggressively evaluated whenever and wherever it's used. But Axon has intentionally made this difficult. 

What the Audit Trail Actually Looks Like 

You may have seen news stories or other public statements asserting that Draft One does, indeed, have auditing features. So, we dug through the user manuals to figure out what exactly that means.

The first thing to note is that, based on our review of the documentation, there appears to be no feature in Axon software that allows departments to export a list of all police officers who have used Draft One. Nor is it possible to export a list of all reports created by Draft One, unless the department has customized its process (we'll get to that in a minute).

This is disappointing because, without this information, it's nearly impossible to do even the most basic statistical analysis: how many officers are using the technology and how often.

Based on the documentation, you can only export two types of very basic logs, with the process differing depending on whether an agency uses Evidence or Records/Standards products. These are:

  1. A log of basic actions taken on a particular report. If the officer requested a Draft One report or signed the Draft One liability disclosure related to the police report, it will show here. But nothing more than that.
  2.  A log of an individual officer/user's basic activity in the Axon Evidence/Records system. This audit log shows things such as when an officer logs into the system, uploads videos, or accesses a piece of evidence. The only Draft One-related activities this tracks are whether the officer ran a Draft One request, signed the Draft One liability disclosure, or changed the Draft One settings. 

This means that, to do a comprehensive review, an evaluator may need to go through the record management system and look up each officer individually to identify whether that officer used Draft One and when. That could mean combing through dozens, hundreds, or in some cases, thousands of individual user logs. 

An example of Draft One usage in an audit log.
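
To give a concrete sense of what that manual review entails, here is a minimal sketch (in Python) of how an evaluator might tally Draft One activity once the per-officer audit logs have been exported as CSV files. The folder layout and the "User" and "Action" column names are illustrative assumptions, not Axon's documented export schema, so they would need to be adjusted to match the files an agency actually produces.

    # Minimal sketch: tally Draft One-related entries across exported per-officer
    # audit logs. The "audit_logs" folder and the "User"/"Action" column names are
    # illustrative assumptions, not Axon's documented export format.
    import csv
    import glob
    from collections import Counter

    draft_one_actions = Counter()

    for path in glob.glob("audit_logs/*.csv"):  # one exported audit log per officer
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                action = row.get("Action", "")
                # Keep only the Draft One events the system records: report requests,
                # liability-disclosure signatures, and settings changes.
                if "draft one" in action.lower():
                    draft_one_actions[(row.get("User", path), action)] += 1

    for (user, action), count in sorted(draft_one_actions.items()):
        print(f"{user}\t{action}\t{count}")

Even with a script like this, someone still has to export every individual log by hand first; the scripting only helps after that labor-intensive step.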

An auditor could also go report by report to see which ones involved Draft One, but the sheer number of reports generated by an agency means this method would require a massive amount of time.

But can agencies even create a list of police reports that were co-written with AI? It depends on whether the agency has included a disclosure in the body of the text, such as "I acknowledge this report was generated from a digital recording using Draft One by Axon." If so, then an administrator can use "Draft One" as a keyword search to find relevant reports.
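
Where that disclosure language is present, the search itself is simple. As a rough illustration, the short Python sketch below flags exported report narratives that contain the disclosure phrase; it assumes the reports have already been obtained and saved as plain-text files in a "reports" folder, which is an assumption about a requester's own workflow, not a feature of Axon's software.

    # Minimal sketch: flag exported report narratives that contain the Draft One
    # disclosure phrase. The "reports" folder of plain-text files is an
    # illustrative assumption about how a requester might store released records.
    import glob

    DISCLOSURE = "generated from a digital recording using draft one"

    for path in glob.glob("reports/*.txt"):
        with open(path, encoding="utf-8") as f:
            if DISCLOSURE in f.read().lower():
                print(f"Draft One disclosure found in: {path}")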

Agencies that do not require that language told us they could not identify which reports were written with Draft One. For example, one of those agencies and one of Axon's most promoted clients, the Lafayette Police Department in Indiana, told us: 

"Regarding the attached request, we do not have the ability to create a list of reports created through Draft One. They are not searchable. This request is now closed."

Meanwhile, in response to a similar public records request, the Palm Beach County Sheriff's Office, which does require a disclosure at the bottom of each report that it had been written by AI, was able to isolate more than 3,000 Draft One reports generated between December 2024 and March 2025.

They told us: "We are able to do a keyword and a timeframe search. I used the words draft one and the system generated all the draft one reports for that timeframe."

We have requested further clarification from Axon, but they have yet to respond. 

However, as we learned from email exchanges between the Frederick Police Department in Colorado and Axon, Axon is tracking police use of the technology at a level that isn't available to the police department itself. 

In response to a request from Politico's Alfred Ng in August 2024 for Draft One-generated police reports, the police department was struggling to isolate those reports. 

An Axon representative responded: "Unfortunately, there’s no filter for DraftOne reports so you’d have to pull a User’s audit trail and look for Draft One entries. To set expectations, it’s not going to be graceful, but this wasn’t a scenario we anticipated needing to make easy."

But then, Axon followed up: "We track which reports use Draft One internally so I exported the data." Then, a few days later, Axon provided Frederick with some custom JSON code to extract the data in the future. 


What is Being Done About Draft One

The California Assembly is currently considering SB 524, a bill that addresses transparency measures for AI-written police reports. The legislation would require disclosure whenever police use artificial intelligence to partially or fully write official reports, as well as “require the first draft created to be retained for as long as the final report is retained.” Because Draft One is designed not to retain the first or any previous drafts of a report, it cannot comply with this common-sense, first-step bill, and any law enforcement use of it would be unlawful.

Axon markets Draft One as a solution to a problem police have been complaining about for at least a century: that they do too much paperwork. Or, at least, that they spend too much time doing paperwork. The current research on whether Draft One remedies this issue shows mixed results, with some agencies claiming it produces no real time savings and other agencies extolling its virtues (although their data also shows that results vary even within the department).

In the justice system, police must prioritize accuracy over speed. Public safety and a trustworthy legal system demand quality over corner-cutting. Time saved should not be the only metric, or even the most important one. It's like evaluating a drive-through restaurant based only on how fast the food comes out, while deliberately concealing the ingredients and nutritional information and failing to inspect whether the kitchen is up to health and safety standards. 

Given how untested this technology is and how quickly the company is rushing Draft One to market, many local lawmakers and prosecutors have taken it upon themselves to try to regulate the product’s use. Utah is currently considering a bill that would mandate disclosure for any police reports generated by AI, thus sidestepping one of the current major transparency issues: it’s nearly impossible to tell which finished reports started as an AI draft.

In King County, Washington, which includes Seattle, the prosecuting attorney’s office has been clear in its instructions: police should not use AI to write police reports. Its memo says:

We do not fear advances in technology – but we do have legitimate concerns about some of the products on the market now... AI continues to develop and we are hopeful that we will reach a point in the near future where these reports can be relied on. For now, our office has made the decision not to accept any police narratives that were produced with the assistance of AI.

We urge other prosecutors to follow suit and demand that police in their jurisdiction not unleash this new, unaccountable, and intentionally opaque AI product. 

Conclusion

Police should not be using AI to write police reports. There are just too many unanswered questions about how AI would translate the audio of situations and whether police will actually edit those drafts, while simultaneously, there is no way for the public to reliably discern what was written by a person and what was written by a computer. This is before we even get to the question of how these reports might compound and exacerbate existing problems or create new ones in an already unfair and untransparent criminal justice system. 

EFF will continue to research and advocate against the use of this technology, but for now the lesson is clear: Anyone with control or influence over police departments, be they lawmakers or people in the criminal justice system, has a duty to be informed about the potential harms and challenges posed by AI-written police reports.

Matthew Guariglia

EFF's Guide to Getting Records About Axon's Draft One AI-Generated Police Reports

1 day 23 hours ago

The moment Axon Enterprise announced a new product, Draft One, that would allow law enforcement officers to use artificial intelligence to automatically generate incident report narratives based on body-worn camera audio, everyone in the police accountability community immediately started asking the same questions:

What do AI-generated police reports look like? What kind of paper trail does this system leave? How do we get a hold of documentation using public records laws? 

Unfortunately, obtaining these records isn't easy. In many cases, it's straight-up impossible. 

Read our full report on how Axon's Draft One defies transparency expectations by design here.

In some jurisdictions, the documents are walled off behind government-created barriers. For example, California fully exempts police narrative reports from public disclosure, while other states charge fees to access individual reports that become astronomical if you want to analyze the output in bulk. Then there are technical barriers: Axon's product itself does not allow agencies to isolate reports that contain an AI-generated narrative, although an agency can voluntarily institute measures to make them searchable by a keyword.  

This spring, EFF tested out different public records request templates and sent them to dozens of law enforcement agencies we believed were using Draft One.

We asked each agency for the Draft One-generated police reports themselves, knowing that in most cases this would be a long shot. We also dug into Axon's user manuals to figure out what kind of logs are generated and how to carefully phrase our public records request to get them. We asked for the current system settings for Draft One, since there are a lot of levers police administrators can pull that drastically change how and when officers can use the software. We also requested the standard records that we usually ask for when researching new technologies: procurement documents, agreements, training manuals, policies, and emails with vendors. 

Like all mass public records campaigns, the results were… mixed. Some agencies were refreshingly open with their records. Others assessed us records fees well outside the usual range for a non-profit organization. 

What we learned about the process is worth sharing. Axon has thousands of clients nationwide that use its Tasers, body-worn cameras and bundles of surveillance equipment, and the company is using those existing relationships to heavily promote Draft One.  We expect many more cities to deploy the technology over the next few years. Watchdogging police use of AI will require a nationwide effort by journalists, advocacy organizations and community volunteers.

Below we’re sharing some sample language you can use in your own public records requests about Draft One — but be warned. It’s likely that the more you include, the longer it might take and the higher the fees will get. The template language and our suggestions for filing public records requests are not legal advice. If you have specific questions about a public records request you filed, consult a lawyer.

1. Police Reports

Language to try in your public records request:

  • All police report narratives, supplemental report narratives, warrant affidavits, statements, and other narratives generated using Axon Draft One to document law enforcement-related incidents for the period between [DATE IN THE LAST FEW WEEKS] and the date this request is received. If your agency requires a Draft One disclosure in the text of the report, you can use "Draft One" as a keyword search term.

Or

  • The [NUMBER] most recent police report narratives that were generated using Axon Draft One between [DATE IN THE LAST FEW WEEKS] and the date this request is received.

If you are curious about a particular officer's Draft One usage, you can also ask for their reports specifically. However, it may be helpful to obtain their usage log first (see section 2).

  • All police report narratives, supplemental report narratives, warrant affidavits, statements, and other narratives generated by [OFFICER NAME] using Axon Draft One to document law enforcement-related incidents for the period between [DATE IN THE LAST FEW WEEKS] and the date this request is received.

We suggest using weeks, not months, because the sheer number of reports can get costly very quickly.

As an add-on to Axon's evidence and records management platforms, Draft One uses ChatGPT to convert audio taken from Axon body-worn cameras into the so-called first draft of the narrative portion of a police report. 

When Politico surveyed seven agencies in September 2024, reporter Alfred Ng found that police administrators did not have the technical ability to identify which reports contained AI-generated language. “There is no way for us to search for these on our end,” a Lafayette, IN police captain told Ng. Six months later, EFF received the same no-can-do response from the Lafayette Police Department.

Although Lafayette Police could not create a list on their own, it turns out that Axon's engineers can generate these reports for police if asked. When the Frederick Police Department in Colorado received a similar request from Ng, the agency contacted Axon for help. The company does internally track reports written with Draft One and was able to provide a spreadsheet of Draft One reports (.csv); it even gave Frederick Police computer code to allow the agency to create similar lists in the future. Axon told the department it would look at making this a feature in the future, but that appears not to have happened yet.

But we also struck gold with two agencies: the Palm Beach County Sheriff's Office (PBCSO) in Florida and the Lake Havasu City Police Department in Arizona. In both cases, the agencies require officers to include a disclosure that they used Draft One at the end of the police narrative. Here's a slide from the Palm Beach County Sheriff's Draft One training:

And here's the boilerplate disclosure: 

I acknowledge this report was generated from a digital recording using Draft One by Axon. I further acknowledge that I have reviewed the report, made any necessary edits, and believe it to be an accurate representation of my recollection of the reported events. I am willing to testify to the accuracy of this report.

As small a gesture as it may seem, that disclosure makes all the difference when it comes to responding to a public records request. Lafayette Police could not isolate the reports because its policy does not require the disclosure. A Frederick Police Department sergeant noted in an email to Axon that they could isolate reports when the auto-disclosure was turned on, but not after they decided to turn it off. This year, Utah legislators introduced a bill to require this kind of disclosure on AI-generated reports.

As the PBCSO records manager told us: "We are able to do a keyword and a timeframe search. I used the words ‘Draft One’ and the system generated all the Draft One reports for that timeframe." In fact, in Palm Beach County and Lake Havasu, records administrators dug up huge numbers of records. But, once we saw the estimated price tag, we ultimately narrowed our request to just 10 reports.

Here is an example of a report from PBCSO, which only allows Draft One to be used in incidents that don't involve a criminal charge. As a result, many of the reports were related to mental health or domestic dispute responses.  

A machine readable text version of this report is available here. Full version here.

And here is an example from the Lake Havasu City Police Department, whose clerk was kind enough to provide us with a diverse sample of requests.

A machine readable text version of this report is available here. Full version here.

EFF redacted some of these records to protect the identity of members of the public who were captured on body-worn cameras. Black-bar redactions were made by the agencies, while bars with X's were made by us. You can view all the examples we received below: 

We also received police reports (perhaps unintentionally) from two other agencies that were contained as email attachments in response to another part of our request (see section 7).

2. Audit Logs

Language to try in your public records request:

Note: You can save time by determining in advance whether the agency uses Axon Evidence or Axon Records and Standards, then choose the applicable option below. If you don't know, you can always request both.

Audit logs from Axon Evidence

  • Audit logs for the period December 1, 2024 through the date this request is received, for the 10 most recently active users.
    According to Axon's online user manual, through Axon Evidence agencies are able to view audit logs of individual officers to ascertain whether they have requested the use of Draft One, signed a Draft One liability disclosure or changed Draft One settings (https://my.axon.com/s/article/View-the-audit-trail-in-Axon-Evidence-Draft-One?language=en_US). In order to obtain these audit logs, you may follow the instructions on this Axon page: https://my.axon.com/s/article/Viewing-a-user-audit-trail?language=en_US.
    In order to produce a list of the 10 most recently active users, you may click the arrow next to "Last Active" then select the 10 most recent. The [...] menu item allows you to export the audit log. We would prefer these audits as .csv files if possible.
    Alternatively, if you know the names of specific officers, you can name them rather than selecting the most recent.

Or

Audit logs from Axon Records and Axon Standards

  • According to Axon's online user manual, through Axon Records and Standards, agencies are able to view audit logs of individual officers to ascertain whether they have requested a Draft One draft or signed a Draft One liability disclosure. https://my.axon.com/s/article/View-the-audit-log-in-Axon-Records-and-Standards-Draft-One?language=en_US
    To obtain these logs using the Axon Records Audit Tool, follow these instructions: https://my.axon.com/s/article/Audit-Log-Tool-Axon-Records?language=en_US
    a. Audit logs for the period December 1, 2024 through the date this request is received for the first user who comes up when you enter the letter "M" into the audit tool. If no user comes up with M, please try "Mi."
    b. Audit logs for the period December 1, 2024 through the date this request is received for the first user who comes up when you enter the letter "J" into the audit tool. If no user comes up with J, please try "Jo."
    c. Audit logs for the period December 1, 2024 through the date this request is received for the first user who comes up when you enter the letter "S" into the audit tool. If no user comes up with S, please try "Sa."

You could also tell the agency you are only interested in Draft One related items, which may save the agency time in reviewing and redacting the documents.

Generally, many of the basic actions a police officer takes using Axon technology — whether it's signing in, changing a password, accessing evidence or uploading BWC footage — are logged in the system.

This also includes some actions when an officer uses Draft One. However, the system only logs three types of activities: requesting that Draft One generate a report, signing a Draft One liability disclosure, or changing Draft One's settings. And these logs are one of the only ways to identify which reports were written with AI and how widely the technology is used.

Unfortunately, Axon appears to have designed its system so that administrators cannot create a list of all Draft One activities taken by the entire police force. Instead, all they can do is view an individual officer's audit log to see when they used Draft One or look at the log for a particular piece of evidence to see if Draft One was used. These can be exported as a spreadsheet or a PDF. (When the Frederick Police Department asked Axon how to create a list of Draft One reports, the Axon rep told them that feature wasn't available and they would have to follow the above method. "To set expectations, it’s not going to be graceful, but this wasn’t a scenario we anticipated needing to make easy," Axon wrote in August 2024, then suggested it might come up with a long-term solution. We emailed Axon back in March to see if this was still the case, but they did not provide a response.) 

Here's an excerpt from a PDF version from the Bishop Police Department in California:

Here are some additional audit log examples: 

If you know the name of an individual officer, you can try to request their audit logs to see if they used Draft One. Since we didn't have a particular officer in mind, we had to get creative. 

An agency may manage their documents with one of a few different Axon offerings: Axon Evidence, Axon Records, or Axon Standards. The process for requesting records is slightly different depending on which one is used. We dug through the user manuals and came up with a few ways to export a random(ish) example. We also linked the manuals and gave clear instructions for the records officers.

With Axon Evidence, an administrator can simply sort the system to show the 10 most recent users then export their usage logs. With Axon Records/Standards, the administrator has to start typing in a name and then it auto-populates with suggestions. So, we asked them to export the audit logs for the first few users who came up when they typed the letters M, J, and S into the search (since those letters are common at the beginning of names).

Unfortunately, this method is a little bit of a gamble. Many officers still aren't using Draft One, so you may end up with hundreds of pages of logs that don't mention Draft One at all (as was the case with the records we received from Monroe County, NY).

3. Settings

Language to try in your public records request: 

  • A copy of all settings and configurations made by this agency in its use of the Axon Draft One platform, including all opt-in features that the department has elected to use and the incident types for which the software can be used. A screen capture of these settings will suffice.

We knew the Draft One system offers department managers the option to customize how it can be used, including the categories of crime for which reports can be generated and whether or not there is a disclaimer automatically added to the bottom of the report disclosing the use of AI in its generation. So we asked for a copy of these settings and configurations. In some cases, agencies claimed this was exempted from their public records laws, while other agencies did provide the information. Here is an example from the Campbell Police Department in California: 

(It's worth noting that while Campbell does require each police report to contain a disclosure that Draft One was used, the California Public Records Act exempts police reports from being released.)

Examples of settings: 

4. Procurement-related Documents and Agreements

Language to try in your public records request:

  • All contracts, memorandums of understanding, and any other written agreements between this agency and Axon related to the use of Draft One, Narrative Assistant, or any other AI-assisted report generation tool provided by Axon. Responsive records include all associated amendments, exhibits, and supplemental and supporting documentation, as well as all relevant terms of use, licensing agreements, and any other guiding materials. If access to Draft One or similar tools is being provided via an existing contract or through an informal agreement, please provide the relevant contract or the relevant communication or agreement that facilitated the access. This includes all agreements, both formal and informal, including all trial access, even if that access does not or did not involve financial obligations.

It can be helpful to know how much Draft One costs, how many user licenses the agency paid for, and what the terms of the agreement are. That information is often contained in records related to the contracting process. Agencies will often provide these records with minimal pushback or redactions. Many of these records may already be online, so a requester can save time and effort by looking around first. These are often found in city council agenda packets. Also, law enforcement agencies often will bump these requests to the city or county clerk instead. 

Here's an excerpt from the Monroe County Sheriff's Office in New York:

These kinds of procurement records describe the nature and cost of the relationship between the police department and the company. They can be very helpful for understanding how much a continuing service subscription will cost and what else was bundled in as part of the purchase. Draft One, so far, is often accessed as an additional feature along with other Axon products. 

We received too many documents to list them all, but here is a representative example of some of the other documents you might receive, courtesy of the Dacono Police Department in Colorado.

5. Training, Manuals and Policies

Language to try in your public records request:

All training materials relevant to Draft One or Axon Narrative Assistant held by this agency, including but not limited to:

  • All training material provided by Axon to this agency regarding its use of Draft One;
  • All internal training materials regarding the use of Draft One;
  • All user manuals, other guidance materials, help documents, or related materials;
  • Guides, safety tests, and other supplementary material that mention Draft One provided by Axon between January 1, 2024 and the date this request is received;
  • Any and all policies and general orders related to the use of Draft One, the Narrative Assistant, or any other AI-assisted report generation offerings provided by Axon (An example of one such policy can be found here: https://cdn.muckrock.com/foia_files/2024/11/26/608_Computer_Software_and_Transcription-Assisted_Report_Generation.pdf).

In addition to seeing when Draft One was used and how it was acquired, it can be helpful to know what rules officers must follow, what directions they're given for using it, and what features are available to users. That's where manuals, policies and training materials come in handy. 

User manuals are typically going to come from Axon itself. In general, if you can get your hands on one, this will help you to better understand the mechanisms of the system, and it will help you align the way you craft your request with the way the system actually works. Luckily, Axon has published many of the materials online and we've already obtained the user manual from multiple agencies. However, Axon does update the manual from time to time, so it can be helpful to know which version the agency is working from.

Here's one from December 2024:

Policies are internal police department guidance for using Draft One. Not all agencies have developed a policy, but the ones they do have may reveal useful information, such as other records you might be able to request. Here are some examples: 

Training and user manuals also might reveal crucial information about how the technology is used. In some cases these documents are provided by Axon to the customer. These records may illuminate the specific direction that departments are emphasizing about using the product.

Here are a few examples of training presentations:

6. Evaluations

Language to try in your public records request:

  • All final reports, evaluations, or other documentation concluding or summarizing a trial or evaluation period or pilot project.

Many departments are getting access to Draft One as part of a trial or pilot program. The outcome of those experiments with the product can be eye-opening or eyebrow-raising. There might also be additional data or a formal report that reviews what the department was hoping to get from the experience, how they structured any evaluation of its time-saving value for the department, and other details about how officers did or did not use Draft One. 

Here are some examples we received: 

7. Communications

Language to try in your public records request:

• All communications sent or received by any representative of this agency with individuals representing Axon referencing the following terms, including emails and attachments:

  • Draft One
  • Narrative Assistant
  • AI-generated report

• All communications sent to or received by any representative of this agency with each of the following email addresses, including attachments:

  • [INSERT EMAIL ADDRESSES]

Note: We are not including the specific email addresses here that we used, since they are subject to change when employees are hired, promoted, or find new gigs. However, you can find the emails we used in our requests on MuckRock.

The communications we wanted were primarily the emails between Axon and the law enforcement agency. As you can imagine, these emails could reveal the back-and-forth between the company and its potential customers, and these conversations could include the marketing pitch made to the department, the questions and problems police may have had with it, and more. 

In some cases, these emails reveal cozy relationships between salespeople and law enforcement officials. Take, for example, this email exchange between the Dickinson Police Department and an Axon rep:

Or this email between a Frederick Police Department sergeant and an Axon representative, in which the sergeant describes himself as "doing sales" for Axon by providing demos to other agencies.

A machine readable text version of this email is available here.

Emails like this also show which other agencies are considering using Draft One in the future. For example, an email we received from the Campbell Police Department shows that the San Francisco Police Department was testing Draft One as early as October 2024 (the usage was confirmed in June 2025 by the San Francisco Standard).

 

A machine readable text version of this email is available here.

Your mileage will certainly vary for these email requests, in part because the ability of agencies to search their communications can vary. Some agencies can search by a keyword like "Draft One" or "Axon," while other agencies can only search by the specific email address.

Communications can be one of the more expensive parts of the request. We've found that adding a date range and key terms or email addresses has helped limit these costs and made our requests a bit clearer for the agency. Axon sends a lot of automated emails to its subscribers, so the agency may quote a large fee for hundreds or thousands of emails that aren't particularly interesting. Many agencies respond positively if a requester reaches out to say they're open to narrowing or focusing their request. 

Asking for Body-Worn Camera Footage 

One of the big questions is how Draft One-generated reports compare to the BWC audio the narratives are based on. Are the reports accurate? Are they twisting people’s words? Does Draft One hallucinate?

Finding these answers requires both obtaining the police report and the footage of the incident that was fed into the system. The laws and process for obtaining BWC footage vary dramatically state to state, and even department to department. Depending on where you live, it can also get expensive very quickly, since some states allow agencies to charge you not only for the footage but the time it takes to redact the footage. So before requesting footage, read up on your state’s public access laws or consult a lawyer.

However, once you have a copy of a Draft One report, you should have enough information to file a follow-up request for the BWC footage. 

So far, EFF has not requested BWC footage. In addition to the aforementioned financial and legal hurdles, the footage can implicate both individual privacy and transparency regarding police activity. As an organization that advocates for both, we want to make sure we get this balance right. After all, BWCs are a surveillance technology that collects intelligence on suspects, victims, witnesses, and random passersby. When the Palm Beach County Sheriff's Office gave us an AI-generated account of a teenager being hospitalized for suicidal ideations, we of course felt that the minor’s privacy outweighed our interest in evaluating the AI. But do we feel the same way about a Draft One-generated narrative about a spring break brawl in Lake Havasu?

Ultimately, we may try to obtain a limited amount of BWC footage, but we also recognize that we shouldn't make the public wait while we work it out for ourselves. Accountability requires different methods, different expertise, and different interests, and with this guide we hope to not only shine light on Draft One, but to provide the schematics for others–including academics, journalists, and local advocates–to build their own spotlights to expose police use of this problematic technology.

Where to Find More Docs 

Despite the variation in how agencies responded, we did have some requests that proved fruitful. You can find these requests and the documents we got via the linked police department names below.

Please note that we filed two different types of requests, so not all the elements above may be represented in each link.

Via Document Cloud (PDFs)

Via MuckRock (Assorted filetypes)

Special credit goes to EFF Research Assistant Jesse Cabrera for public records request coordination. 

Dave Maass

It's EFF's 35th Anniversary (And We're Just Getting Started)

2 days 8 hours ago

Today we celebrate 35 years of EFF bearing the torch for digital rights against the darkness of the world, and I couldn’t be prouder. EFF was founded at a time when governments were hostile toward technology and clueless about how it would shape your life. While threats from state and commercial forces grew alongside the internet, so too did EFF’s expertise. Our mission has become even larger than pushing back on government ignorance and increasingly dangerous corporate power. In this moment, we're doing our part to preserve the necessities of democracy: privacy, free expression, and due process. It's about guarding the security of our society, along with our loved ones and the vulnerable communities around us.

With the support of EFF’s members, we use law, technology, and activism to create the conditions for human rights and civil liberties to flourish, and for repression to fail.

In this moment, we're doing our part to preserve the necessities of democracy: privacy, free expression, and due process.

EFF believes in commonsense freedom and fairness. We’re working toward an environment where your technology works the way you want it to; you can move through the world without the threat of surveillance; and you can have private conversations with the people you care about and support the causes you believe in. We’ve won many fights for encryption, free expression, innovation, and your personal data throughout the years. The opposition is tough, but—with a powerful vision for a better future and you on our side—EFF is formidable.

Throughout EFF’s year-long 35th Anniversary celebration, our dedicated activists, investigators, technologists, and attorneys will share the lessons from EFF’s long and rich history so that we can help overcome the obstacles ahead. Thanks to you, EFF is here to stay.

Together for the Digital Future

As a member-supported nonprofit, everything EFF does depends on you. Donate to help fuel the fight for privacy, free expression, and a future where we protect digital freedom for everyone.

JOIN EFF

Powerful forces may try to chip away at your rights—but when we stand together, we win.


Watch Today: EFFecting Change Live

Just hours from now, join me for the 35th Anniversary edition of our EFFecting Change livestream. I’m leading this Q&A with EFF Director for International Freedom of Expression Jillian York, EFF Legislative Director Lee Tien, and Professor and EFF Board Member Yoshi Kohno. Together, we’ve seen it all and today we hope you'll join us for what’s next.

WATCH LIVE

11:00 AM Pacific (check local time)

EFF supporters around the world sustain our mission to defend technology creators and users. Thank you for being a part of this community and helping it thrive.

Cindy Cohn

Data Brokers are Selling Your Flight Information to CBP and ICE

2 days 16 hours ago

For many years, data brokers have existed in the shadows, exploiting gaps in privacy laws to harvest our information—all for their own profit. They sell our precise movements without our knowledge or meaningful consent to a variety of private and state actors, including law enforcement agencies. And they show no sign of stopping.

This incentivizes other bad actors. If companies collect any kind of personal data and want to make a quick buck, there’s a data broker willing to buy it and sell it to the highest bidder–often law enforcement and intelligence agencies.

One recent investigation by 404 Media revealed that the Airlines Reporting Corporation (ARC), a data broker owned and operated by at least eight major U.S. airlines, including United Airlines and American Airlines, collected travelers’ domestic flight records and secretly sold access to U.S. Customs and Border Protection (CBP). Despite selling passengers’ names, full flight itineraries, and financial details, the data broker prevented U.S. border forces from revealing it as the origin of the information. So, not only is the government doing an end run around the Fourth Amendment to get information where they would otherwise need a warrant—they’ve also been trying to hide how they know these things about us. 

ARC’s Travel Intelligence Program (TIP) aggregates passenger data and contains more than one billion records spanning 39 months of past and future travel by both U.S. and non-U.S. citizens. CBP, which sits within the U.S. Department of Homeland Security (DHS), claims it needs this data to support local and state police keeping track of people of interest. But at a time of growing concerns about increased immigration enforcement at U.S. ports of entry, including unjustified searches, law enforcement officials will use this additional surveillance tool to expand the web of suspicion to even larger numbers of innocent travelers. 

More than 200 airlines settle tickets through ARC, with information on more than 54% of flights taken globally. ARC’s board of directors includes representatives from U.S. airlines like JetBlue and Delta, as well as international airlines like Lufthansa, Air France, and Air Canada. 

In selling law enforcement agencies bulk access to such sensitive information, these airlines—through their data broker—are putting their own profits over travelers' privacy. U.S. Immigration and Customs Enforcement (ICE) recently detailed its own purchase of personal data from ARC. In the current climate, this can have a detrimental impact on people’s lives. 

Movement unrestricted by governments is a hallmark of a free society. In our current moment, when the federal government is threatening legal consequences based on people’s national, religious, and political affiliations, having air travel in and out of the United States tracked by any ARC customer is a recipe for state retribution. 

Sadly, data brokers are doing even broader harm to our privacy. Sensitive location data is harvested from smartphones and sold to cops, internet backbone data is sold to federal counterintelligence agencies, and utility databases containing phone, water, and electricity records are shared with ICE officers. 

At a time when immigration authorities are eroding fundamental freedoms through increased—and arbitrary—actions at the U.S. border, this news further exacerbates concerns that creeping authoritarianism can be fueled by the extraction of our most personal data—all without our knowledge or consent.

The new revelations about ARC’s data sales to CBP and ICE are a fresh reminder of the need for “privacy first” legislation that imposes consent and minimization limits on corporate processing of our data. We also need to pass the “Fourth Amendment Is Not For Sale Act” to stop police from bypassing judicial review of their data seizures by purchasing data from brokers. And let’s enforce data broker registration laws.

Paige Collings

Electronic Frontier Foundation to Present Annual EFF Awards to Just Futures Law, Erie Meyer, and Software Freedom Law Center, India

2 days 18 hours ago
2025 Awards Will Be Presented in a Live Ceremony Wednesday, Sept. 10 in San Francisco

SAN FRANCISCO—The Electronic Frontier Foundation (EFF) is honored to announce that Just Futures Law, Erie Meyer, and Software Freedom Law Center, India will receive the 2025 EFF Awards for their vital work in ensuring that technology supports privacy, freedom, justice, and innovation for all people.  

The EFF Awards recognize specific and substantial technical, social, economic, or cultural contributions in diverse fields including journalism, art, digital access, legislation, tech development, and law.  

 The EFF Awards ceremony will start at 6 p.m. PT on Wednesday, Sept. 10, 2025 at the San Francisco Design Center Galleria, 101 Henry Adams St. in San Francisco. Guests can register at http://www.eff.org/effawards. The ceremony will be recorded and shared online on Sept. 12. 

For the past 30 years, the EFF Awards—previously known as the Pioneer Awards—have recognized and honored key leaders in the fight for freedom and innovation online. Started when the internet was new, the Awards now reflect the fact that the online world has become both a necessity in modern life and a continually evolving set of tools for communication, organizing, creativity, and increasing human potential. 

“Whether fighting the technological abuses that abet criminalization, detention, and deportation of immigrants and people of color, or working and speaking out fearlessly to protect Americans’ data privacy, or standing up for digital rights in the world’s most populous country, all of our 2025 Awards winners contribute to creating a brighter tech future for humankind,”  EFF Executive Director Cindy Cohn said. “We hope that this recognition will bring even more support for each of these vital efforts.” 

Just Futures Law: Leading Immigration and Surveillance Litigation 

Just Futures Law is a women-of-color-led law project that recognizes how surveillance disproportionately impacts immigrants and people of color in the United States. It uses litigation to fight back, defending and building the power of immigrant rights and criminal justice activists, organizers, and community groups working to prevent the criminalization, detention, and deportation of immigrants and people of color. Founded in 2019 using a movement lawyering and racial justice framework, Just Futures seeks to transform how litigation and legal support serve communities and build movement power.

In the past year, Just Futures sued the Department of Homeland Security and its subagencies seeking a court order to compel the agencies to release records on their use of AI and other algorithms, and sued the Trump Administration for prematurely halting Haiti’s Temporary Protected Status, a humanitarian program that allows hundreds of thousands of Haitians to temporarily remain and work in the United States due to Haiti’s current conditions of extraordinary crisis. It has represented activists in their fight against tech giants like Clearview AI, it has worked with Mijente to launch the TakeBackTech fellowship to train new advocates on grassroots-directed research, and it has worked with Grassroots Leadership to fight for the release of detained individuals under Operation Lone Star.

Erie Meyer: Protecting Americans' Privacy 

Erie Meyer is a Senior Fellow at the Vanderbilt Policy Accelerator, where she focuses on the intersection of technology, artificial intelligence, and regulation, and a Senior Fellow at the Georgetown Law Institute for Technology Law & Policy. She is the former Chief Technologist at both the Consumer Financial Protection Bureau (CFPB) and the Federal Trade Commission. Earlier, she was senior advisor to the U.S. Chief Technology Officer at the White House, where she co-founded the United States Digital Service, a team of technologists and designers working to improve digital services for the public. Meyer also worked as senior director at Code for America, a nonprofit that promotes civic hacking to modernize government services, and in the Ohio Attorney General's office at the height of the financial crisis.

Since January 20, Meyer has helped organize former government technologists to stand up for the privacy and integrity of governmental systems that hold Americans’ data. In addition to organizing others, she filed a declaration in federal court in February warning that 12 years of critical records could be irretrievably lost in the CFPB’s purge by the Trump Administration’s Department of Government Efficiency. In April, she filed a declaration in another case warning about using private-sector AI on government information. That same month, she testified to the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation that DOGE is centralizing access to some of the most sensitive data the government holds—Social Security records, disability claims, even data tied to national security—without a clear plan or proper oversight, warning that “DOGE is burning the house down and calling it a renovation.” 

Software Freedom Law Center, India: Defending Digital Freedoms 

Software Freedom Law Center, India is a donor-supported legal services organization based in India that brings together lawyers, policy analysts, students, and technologists to protect freedom in the digital world. It promotes innovation and open access to knowledge by helping developers make great free and open-source software, protects privacy and civil liberties for Indians by educating and providing free legal advice, and helps policymakers make informed and just decisions about use of technology.

Founded in 2010 by technology lawyer and online civil liberties activist Mishi Choudhary, SFLC.IN tracks and participates in litigation, AI regulations, and free speech issues that are defining Indian technology. It also tracks internet shutdowns and censorship incidents across India, provides digital security training, and has launched the Digital Defenders Network, a pan-Indian network of lawyers committed to protecting digital rights. It has conducted landmark litigation cases, petitioned the government of India on freedom of expression and internet issues, and campaigned for WhatsApp and Facebook to fix a feature of their platform that has been used to harass women in India. 

To register for this event:  http://www.eff.org/effawards 

For past honorees: https://www.eff.org/awards/past-winners 

Josh Richman

EFF to US Court of Appeals: Protect Taxpayer Privacy

3 days 19 hours ago

EFF has filed an amicus brief in Trabajadores v. Bessent, a case concerning the Internal Revenue Service (IRS) sharing protected personal tax information with the Department of Homeland Security for the purposes of immigration enforcement. Our expertise in privacy and data sharing makes us the ideal organization to step in and inform the judge: government actions like this have real-world consequences. The IRS’s sharing, and especially bulk sharing, of data is improper and makes taxpayers vulnerable to inevitable mistakes. As a practical matter, sharing data that the IRS had previously claimed was protected undermines the trust that important civil institutions require in order to be effective.

You can read the entire brief here.

The brief makes two particular arguments. The first is that, even if the Tax Reform Act, the statute under which the IRS found the authority to share the data, is considered ambiguous, the statute should be interpreted in light of its legislative intent and historical background, which disfavor disclosure. The brief reads,

Given the historical context, and decades of subsequent agency promises to protect taxpayer confidentiality and taxpayer reliance on those promises, the Administration’s abrupt decision to re-interpret §6103 to allow sharing with ICE whenever a potential “criminal proceeding” can be posited, is a textbook example of an arbitrary and capricious action even if the statute can be read to be ambiguous.

The other argument we make to the court is that data scientists agree: when you try to corroborate information between two databases in which information is only partially identifiable, mistakes happen. We argue:

Those errors result from such mundane issues as outdated information, data entry errors, and taxpayers or tax preparer submission of incorrect names or addresses. If public reports are correct, and officials intend to share information regarding 700,000 or even 7 million taxpayers, the errors will multiply, leading to the mistaken targeting, detention, deportation, and potentially even physical harm to regular taxpayers.
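To make that concrete, here is a toy illustration (ours, not the brief’s) of how joining two databases on partially identifying fields sweeps in the wrong people: two different taxpayers who happen to share a name both “match” a single enforcement record. The names and addresses below are invented for the example.

```python
# Toy sketch of record-linkage error: matching on partially identifying
# fields (here, name alone) cannot distinguish different people who share
# those fields, so innocent records get pulled in.
tax_records = [
    {"name": "Juan Garcia", "address": "12 Oak St"},     # hypothetical taxpayer A
    {"name": "Juan Garcia", "address": "455 Pine Ave"},  # different person, same name
]
enforcement_list = [{"name": "Juan Garcia", "address": "12 Oak St"}]

# Naive name-only join: both taxpayers "match," though only one was sought.
matches = [t for t in tax_records
           for e in enforcement_list
           if t["name"] == e["name"]]
print(len(matches))  # 2 -- one of these hits is a mistake
```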

Information silos in the government exist for a reason. Here, the silo was designed to protect individual privacy and prevent the executive abuse that can come with unfettered access to properly collected information. The concern motivating Congress to pass the Tax Reform Act was the same as that behind the Privacy Act of 1974 and the 1978 Right to Financial Privacy Act. These laws were part of a wave of reforms Congress considered necessary to address the misuse of tax data to spy on and harass political opponents, dissidents, civil rights activists, and anti-war protestors in the 1960s and early 1970s. Congress saw the need to ensure that data collected for one purpose is used only for that purpose, with very narrow exceptions; otherwise it is prone to abuse. Yet the IRS is currently sharing information to allow ICE to enforce immigration law.

Taxation in the United States operates through a very simple agreement: the government requires taxes from people working inside the United States in order to function. To get people to pay their taxes, including undocumented immigrants living and working in the United States, the IRS has previously promised that the data it collects will not be used against a person for punitive reasons. This encourages people to pay their taxes and alleviates concerns that might otherwise lead them to avoid interacting with the government. But the IRS’s reversal has greatly harmed that trust and could have far-reaching negative ramifications, including decreasing future tax revenue.

Consolidating government information so that the agencies responsible for healthcare, taxes, or financial support are linked to agencies that police, surveil, and fine people is a recipe for disaster. For that reason, EFF is proud to submit this amicus brief in Trabajadores v. Bessent in support of taxpayer privacy. 

Related Cases: American Federation of Government Employees v. U.S. Office of Personnel Management
Matthew Guariglia

How to Build on Washington’s “My Health, My Data” Act

5 days 16 hours ago

In 2023, the State of Washington enacted one of the strongest consumer data privacy laws in recent years: the “my health my data” act (HB 1155). EFF commends the civil rights, data privacy, and reproductive justice advocates who worked to pass this law.

This post suggests ways for legislators and advocates in other states to build on the Washington law and draft one with even stronger protections. It separately addresses the law’s scope (such as who is protected); its safeguards (such as consent and minimization); and its enforcement (such as a private right of action). While the law only applies to one category of personal data – our health information – its structure could be used to protect all manner of data.

Scope of Protection

Authors of every consumer data privacy law must make three decisions about scope: What kind of data is protected? Whose data is protected? And who is regulated?

The Washington law protects “consumer health data,” defined as information linkable to a consumer that identifies their “physical or mental health status.” This includes all manner of conditions and treatments, such as gender-affirming and reproductive care. While EFF’s ultimate goal is protection of all types of personal information, bills that protect at least some types can be a great start.

The Washington law protects “consumers,” defined as all natural persons who reside in the state or had their health data collected there. It is best, as here, to protect all people. If a data privacy law protects just some people, that can incentivize a regulated entity to collect even more data, in order to distinguish protected from unprotected people. Notably, Washington’s definition of “consumers” applies only in “an individual or household context,” but not “an employment context”; thus, Washingtonians will need a different health privacy law to protect them from their snooping bosses.

The Washington law defines a “regulated entity” as “any legal entity” that both: “conducts business” in the state or targets residents for products or services; and “determines the purpose and means” of processing consumer health data. This appears to include many non-profit groups, which is good, because such groups can harmfully process a lot of personal data.

The law excludes government from regulation, which is not unusual for data privacy bills focused on non-governmental actors. State and local government will likely need to be regulated by another data privacy law.

Unfortunately, the Washington law also excludes “contracted service providers when processing data on behalf of government.” A data broker or other surveillance-oriented business should not be free from regulation just because it is working for the police.

Consent or Minimization to Collect or Share Health Data

The most important part of Washington’s law requires either consent or minimization for a regulated entity to collect or share a consumer’s health data.

The law has a strong definition of “consent.” It must be “a clear affirmative act that signifies a consumer’s freely given, specific, informed, opt-in, voluntary, and unambiguous agreement.” Consent cannot be obtained with “broad terms of use” or “deceptive design.”

Absent consent, a regulated entity cannot collect or share a consumer’s health data except as necessary to provide a good or service that the consumer requested. Such rules are often called “data minimization.” Their virtue is that a consumer does not need to do anything to enjoy their statutory privacy rights; the burden is on the regulated entity to process less data.

As to data “sale,” the Washington law requires enhanced consent (which the law calls “valid authorization”). Sale is the most dangerous form of sharing, because it incentivizes businesses to collect the most possible data in hopes of later selling it. For this reason, some laws flatly ban sale of sensitive data, like the Illinois biometric information privacy act (BIPA).

For context, there are four ways for a bill or law to configure consent and/or minimization. Some require just consent, like BIPA’s provisions on data collection. Others require just minimization, like the federal “my body my data” bill. Still others require both, like the Massachusetts location data privacy bill. And some require either one or the other. In various times and places, EFF has supported all four configurations. “Either/or” is weakest, because it allows regulated entities to choose whether to minimize or to seek consent – a choice they will make based on their profit and not our privacy.

Two Protections of Location Data Privacy

Data brokers harvest our location information and sell it to anyone who will pay, including advertisers, police, and other adversaries. Legislators are stepping forward to address this threat.

The Washington law does so in two ways. First, the “consumer health data” protected by the consent-or-minimization rule is defined to include “precise location information that could reasonably indicate a consumer’s attempt to acquire or receive health services or supplies.” In turn, “precise location” is defined as within 1,750’ of a person.

Second, the Washington law bans a “geofence” around an “in-person health care service,” if “used” for one of three forbidden purposes (to track consumers, to collect their data, or to send them messages or ads). A “geofence” is defined as technology that uses GPS or the like “to establish a virtual boundary” of 2,000’ around the perimeter of a physical location.
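For readers curious what a “virtual boundary” check of this kind typically looks like in practice, here is a minimal illustrative sketch using the haversine great-circle distance. The coordinates and the choice of haversine are assumptions for the example only; the statute defines the boundary in feet but does not prescribe any particular implementation.

```python
# Illustrative only: one simple way a 2,000-foot "virtual boundary" check
# could be computed from GPS coordinates.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_FT = 20_902_231  # approximate mean Earth radius in feet

def haversine_ft(lat1, lon1, lat2, lon2):
    """Great-circle distance in feet between two latitude/longitude points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_FT * asin(sqrt(a))

def inside_geofence(device, facility, radius_ft=2000):
    """True if the device's reported location falls within the boundary."""
    return haversine_ft(*device, *facility) <= radius_ft

# Hypothetical coordinates for a device and an in-person health care facility.
print(inside_geofence((47.6101, -122.3421), (47.6097, -122.3425)))
```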

This is a good start. It is also much better than weaker rules that only apply to the immediate vicinity of sensitive locations. Such rules allow adversaries to use location data to track us as we move towards sensitive locations, observe us enter the small no-data bubble around those locations, and infer what we may have done there. On the other hand, Washington’s rules apply to sizeable areas. Also, its consent-or-minimization rule applies to all locations that could indicate pursuit of health care (not just health facilities). And its geofence rule forbids use of location data to track people.

Still, the better approach, as in several recent bills, is to simply protect all location data. Protecting just one kind of sensitive location, like houses of worship, will leave out others, like courthouses. More fundamentally, all locations are sensitive, given the risk that others will use our location data to determine where – and with whom – we live, work, and socialize.

More Data Privacy Protections

Other safeguards in the Washington law deserve attention from legislators in other states:

  • Regulated entities must publish a privacy policy that discloses, for example, the categories of data collected and shared, and the purposes of collection. Regulated entities must not collect, use, or share additional categories of data, or process them for additional purposes, without consent.
  • Regulated entities must provide consumers the rights to access and delete their data.
  • Regulated entities must restrict data access to just those employees who need it, and maintain industry-standard data security.

Enforcement

A law is only as strong as its teeth. The best way to ensure enforcement is to empower people to sue regulated entities that violate their privacy; this is often called a “private right of action.”

The Washington law provides that its violation is “an unfair or deceptive act” under the state’s separate consumer protection act. That law, in turn, bans unfair or deceptive acts in the conduct of trade or commerce. Upon a violation of the ban, that law provides a civil action to “any person who is injured in [their] business or property,” with the remedies of injunction, actual damages, treble damages up to $25,000, and legal fees and costs. It remains to be seen how Washington’s courts will apply this old civil action to the new “my health my data” act.

Washington legislators are demonstrating that privacy is important to public policy, but it would be cleaner for a law to say explicitly that invasion of the fundamental human right to data privacy is itself an injury. Sadly, there is a nationwide debate about whether injury to data privacy, by itself, should be enough to go to court, without also proving a more tangible injury like identity theft. The best legislative models ensure full access to the courts in two ways. First, they provide: “A violation of this law regarding an individual’s data constitutes an injury to that individual, and any individual alleging a violation of this law may bring a civil action.” Second, they provide a baseline amount of damages (often called “liquidated” or “statutory” damages), because it is often difficult to prove actual damages arising from a data privacy injury.

Finally, data privacy laws must protect people from “pay for privacy” schemes, where a business charges a higher price or delivers an inferior product if a consumer exercises their statutory data privacy rights. Such schemes will lead to a society of privacy “haves” and “have nots.”

The Washington law has two helpful provisions. First, a regulated entity “may not unlawfully discriminate against a consumer for exercising any rights included in this chapter.” Second, there can be no data sale without a “statement” from the regulated entity to the consumer that “the provision of goods or services may not be conditioned on the consumer signing the valid authorization.”

Some privacy bills contain more-specific language, for example along these lines: “a regulated entity cannot take an adverse action against a consumer (such as refusal to provide a good or service, charging a higher price, or providing a lower quality) because the consumer exercised their data privacy rights, unless the data at issue is essential to the good or service they requested and then only to the extent the data is essential.”

What About Congress?

We still desperately need a comprehensive federal consumer data privacy law built on “privacy first” principles. In the meantime, states are taking the lead. The very worst thing Congress could do now is preempt states from protecting their residents’ data privacy. Advocates and legislators from across the country, seeking to take up this mantle, would benefit from looking at – and building on – Washington’s “my health my data” law.

Adam Schwartz

🤫 Meta's Secret Spying Scheme | EFFector 37.7

1 week 2 days ago

Keeping up on the latest digital rights news has never been easier. With a new look, EFF's EFFector newsletter covers the latest details on our work defending your rights to privacy and free expression online.

EFFector 37.7 covers some of the very sneaky tactics that Meta has been using to track you online, and how you can mitigate some of this tracking. In this issue, we're also explaining the legal processes police use to obtain your private online data, and providing an update on the NO FAKES Act—a U.S. Senate bill that takes a flawed approach to concerns about AI-generated "replicas." 

And, in case you missed it in the previous newsletter, we're debuting a new audio companion to EFFector as well! This time, Lena Cohen breaks down the ways that Meta tracks you online and what you—and lawmakers—can do to prevent that tracking. You can listen now on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 37.7 - META'S SECRET SPYING SCHEME

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Podcast Episode: Cryptography Makes a Post-Quantum Leap

1 week 3 days ago

The cryptography that protects our privacy and security online relies on the fact that even the strongest computers will take essentially forever to do certain tasks, like factoring large numbers and finding discrete logarithms, problems that underpin RSA encryption, Diffie-Hellman key exchanges, and elliptic curve encryption. But what happens when those problems – and the cryptography they underpin – are no longer infeasible for computers to solve? Will our online defenses collapse? 
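For a concrete, purely illustrative sense of that asymmetry, here is a toy sketch in Python: computing a modular exponentiation forward is fast, while recovering the exponent (the discrete logarithm) by brute force means stepping through candidates one at a time. The prime, base, and secret below are assumptions chosen small enough to run; this is not any real cryptosystem, and deployed systems use numbers far too large for this search to ever finish on a classical computer.

```python
# Toy illustration only: forward modular exponentiation is cheap, but
# recovering the exponent by exhaustive search is slow, and becomes
# infeasible at real cryptographic sizes.
p = 2_147_483_647   # the Mersenne prime 2**31 - 1 (tiny by crypto standards)
g = 7               # a primitive root modulo p
secret_exponent = 1_234_567

public_value = pow(g, secret_exponent, p)   # fast, even for huge exponents

def brute_force_discrete_log(base, target, modulus):
    """Recover x with base**x % modulus == target by exhaustive search."""
    value = 1
    for x in range(modulus - 1):
        if value == target:
            return x
        value = (value * base) % modulus
    return None

# Feasible here only because the numbers are deliberately small.
print(brute_force_discrete_log(g, public_value, p) == secret_exponent)  # True
```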

[Embedded audio player. Privacy info: this embed will serve content from simplecast.com.]

(You can also find this episode on the Internet Archive and on YouTube.) 

Not if Deirdre Connolly can help it. As a cutting-edge thinker in post-quantum cryptography, Connolly is working to make sure that the next giant leap forward in computing – quantum machines that use principles of quantum mechanics to sidestep some constraints of classical computing and solve certain complex problems much faster – doesn’t reduce our digital walls to rubble. Connolly joins EFF’s Cindy Cohn and Jason Kelley to discuss not only how post-quantum cryptography can shore up those existing walls but also help us find entirely new methods of protecting our information. 

In this episode you’ll learn about: 

  • Why we’re not yet sure exactly what quantum computing can do, and why that’s exactly the reason we need to think about post-quantum cryptography now 
  • What a “Harvest Now, Decrypt Later” attack is, and what’s happening today to defend against it
  • How cryptographic collaboration, competition, and community are key to exploring a variety of paths to post-quantum resilience
  • Why preparing for post-quantum cryptography is and isn’t like fixing the Y2K bug
  • How the best impact that end users can hope for from post-quantum cryptography is no visible impact at all
  • Don’t worry—you won’t have to know, or learn, any math for this episode!  

Deirdre Connolly is a research and applied cryptographer at Sandbox AQ with particular expertise in post-quantum encryption. She also co-hosts the “Security Cryptography Whatever” podcast about modern computer security and cryptography, with a focus on engineering and real-world experiences. Earlier, she was an engineer at the Zcash Foundation – a nonprofit that builds financial privacy infrastructure for the public good – as well as at Brightcove, Akamai, and HubSpot.

Resources: 

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

DEIRDRE CONNOLLY: I only got into cryptography, and especially post-quantum quickly after that, further into my professional life. I was a software engineer for a while, and the Snowden leaks happened, and phone records get leaked. All of Verizon's phone records get leaked. And then PRISM, and more leaks and more leaks. And as an engineer first, I felt like everything that I was building and we were building and telling people to use was vulnerable.
I wanted to learn more about how to do things securely. I went further and further and further down the rabbit hole of cryptography. And then, I think I saw a talk which was basically like, oh, elliptic curves are vulnerable to a quantum attack. And I was like, well, I, I really like these things. They're very elegant mathematical objects, it's very beautiful. I was sad that they were fundamentally broken, and, I think it was, Dan Bernstein who was like, well, there's this new thing that uses elliptic curves, but is supposed to be post quantum secure.
But the math is very difficult and no one understands it. I was like, well, I want to understand it if it preserves my beautiful elliptic curves. That's how I just went, just running, screaming downhill into post quantum cryptography.

CINDY COHN: That's Deirdre Connolly talking about how her love of beautiful math and her anger at the Snowden revelations about how the government was undermining security led her to the world of post-quantum cryptography.
I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY: And I'm Jason Kelley, EFF's activism director. You're listening to How to Fix the Internet.

CINDY COHN: On this show we talk to tech leaders, policy-makers, thinkers, artists and engineers about what the future could look like if we get things right online.

JASON KELLEY: Our guest today is at the forefront of the future of digital security. And just a heads up that this is one of the more technical episodes that we've recorded -- you'll hear quite a bit of cryptography jargon, so we've written up some of the terms that come up in the show notes, so take a look there if you hear a term you don't recognize.

CINDY COHN: Deirdre Connolly is a research engineer and applied cryptographer at Sandbox AQ, with a particular expertise in post-quantum encryption. She also co-hosts the Security, Cryptography, Whatever podcast, so she's something of a cryptography influencer too. When we asked our tech team here at EFF who we should be speaking with on this episode about quantum cryptography and quantum computers more generally, everyone agreed that Deirdre was the person. So we're very glad to have you here. Welcome, Deirdre.

DEIRDRE CONNOLLY: Thank you very much for having me. Hi.

CINDY COHN: Now we obviously work with a lot of technologists here and, and certainly personally cryptography is near and dear to my heart, but we are not technologists, neither Jason nor I. So can you just give us a baseline of what post-quantum cryptography is and why people are talking about it?

DEIRDRE CONNOLLY: Sure. So a lot of the cryptography that we have deployed in the real world relies on a lot of math and security assumptions on that math based on things like abstract groups, Diffie-Hellman, elliptic curves, finite fields, and factoring prime numbers such as, uh, systems like RSA.
All of these, constructions and problems, mathematical problems, have served us very well in the last 40-ish years of cryptography. They've let us build very useful, efficient, small cryptography that we've deployed in the real world. It turns out that they are all also vulnerable in the same way to advanced cryptographic attacks that are only possible and only efficient when run on a quantum computer, and this is a class of computation, a whole new class of computation versus digital computers, which is the main computing paradigm that we've been used to for the last 75 years plus.
Quantum computers allow these new classes of attacks, especially variants of Shor's algorithm – named after Dr. Peter Shor – that basically, when run on a sufficiently large, cryptographically relevant quantum computer, make all of the asymmetric cryptography based on these problems that we've deployed very, very vulnerable.
So post-quantum cryptography is trying to take that class of attack into consideration and building cryptography to both replace what we've already deployed and make it resilient to this kind of attack, and trying to see what else we can do with these fundamentally different mathematical and cryptographic assumptions when building cryptography.

CINDY COHN: So we've kind of, we've secured our stuff behind a whole lot of walls, and we're slowly building a bulldozer. This is a particular piece of the world where the speed at which computers can do things has been part of our protection, and so we have to rethink that.

DEIRDRE CONNOLLY: Yeah, quantum computing is a fundamentally new paradigm of how we process data that promises to have very interesting, uh, and like, applications beyond what we can envision right now. Like things like protein folding, chemical analysis, nuclear simulation, and cryptanalysis, or very strong attacks against cryptography.
But it is a field where it's such a fundamentally new computational paradigm that we don't even know what its applications fully would be yet, because like we didn't fully know what we were doing with digital computers in the forties and fifties. Like they were big calculators at one time.

JASON KELLEY: When it was suggested that we talk to you about this. I admit that I have not heard much about this field, and I realized quickly when looking into it that there's sort of a ton of hype around quantum computing and post-quantum cryptography and that kind of hype can make it hard to know whether or not something is like actually going to be a big thing or, whether this is something that's becoming like an investment cycle, like a lot of things do. And one of the things that quickly came up as an actual, like real danger is what's called sort of “save now decrypt later.”

DEIRDRE CONNOLLY: Oh yeah.

JASON KELLEY: Right? We have all these messages, for example, that have been encrypted with current encryption methods. And if someone holds onto those, they can decrypt them using quantum computers in the future. How serious is that danger?

DEIRDRE CONNOLLY: It’s definitely a concern, and it's the number one driver, I would say, of post-quantum crypto adoption in broad industry right now: mitigating the threat of a Store Now/Decrypt Later attack, also known as Harvest Now/Decrypt Later, a bunch of names that all mean the same thing.
And fundamentally, it's, uh, especially if you're doing any kind of key agreement over a public channel. And doing key agreement over a public channel is part of the whole purpose: like, you want to be able to talk to someone who you've never really touched base with before, and you both kind of know some public parameters that even your adversary knows. And based on just the fact that you can send messages to each other, and some public parameters, and some secret values that only you know and only the other party knows, you can establish a shared secret, and then you can start encrypting traffic between you to communicate. And this is what you do in your web browser when you have an HTTPS connection, that's over TLS.
This is what you do with Signal or WhatsApp or any, or, you know, Facebook Messenger with the encrypted communications. They're using Diffie-Helman as part of the protocol to set up a shared secret, and then you use that to encrypt their message bodies that you're sending back and forth between you.
But if you can just store all those communications over that public channel, and the adversary knows the public parameters 'cause they're freely published – that's part of Kerckhoffs's Principle about good cryptography: the only thing that the adversary shouldn't know about your crypto system is the secret key values that you're actually using. It should be secure against an adversary that knows everything that you know, except the secret key material.
And you can just record all those public messages and all the public key exchange messages, and you just store them in a big database somewhere. And then when you have your large cryptographically relevant quantum computer, you can rifle through your files and say, hmm, let's point it at this.
And that's the threat that's live now to the stuff that we have already deployed and the stuff that we're continuing to do communications on now, that is protected by elliptic curve Diffie-Hellman, or finite field Diffie-Hellman, or RSA. They can just record that and just theoretically point an attack at it at a later date when that attack comes online.
So like in TLS, there's a lot of browsers and servers and infrastructure providers that have updated to post-quantum resilient solutions for TLS. So they're using a combination of the classic elliptic curve Diffie-Hellman and a post-quantum KEM, uh, called ML-KEM, that was standardized by the United States based on a public design that's been, you know, a multinational collaboration to help do this design.
I think that's been deployed in Chrome, and I think it's deployed by Cloudflare, and it's getting deployed – I think it's now become the default option in the latest version of OpenSSL, and a lot of other open source projects. So that's TLS. Similar approaches are being adopted in OpenSSH, the most popular SSH implementation in the world. Signal, the service, has updated their key exchange to also include a post-quantum KEM in their updated key establishment. So when you start a new conversation with someone, or reset a conversation with someone, on the latest version of Signal, that is now protected against that sort of attack.
That is definitely happening and it's happening the most rapidly because of that Store now/Decrypt later attack, which is considered live. Everything that we're doing now can just be recorded and then later when the attack comes online, they can attack us retroactively. So that's definitely a big driver of things changing in the wild right now.
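[Editor's note: to make the "combination" described above a bit more concrete, here is a minimal illustrative sketch, using only Python's standard library, of the general hybrid idea: derive the session key from both a classical shared secret and a post-quantum KEM secret, so an attacker would have to break both. The placeholder byte strings and the simplified HKDF-style derivation are assumptions for illustration, not the actual TLS or Signal constructions.]

```python
# Minimal sketch of "hybrid" key derivation: the session key depends on BOTH
# a classical shared secret and a post-quantum KEM shared secret.
import hashlib
import hmac
import os

# Placeholders standing in for real protocol outputs (assumption: in practice
# these would come from an X25519 exchange and an ML-KEM decapsulation).
classical_secret = os.urandom(32)
pq_secret = os.urandom(32)

def derive_key(secret: bytes, info: bytes, length: int = 32) -> bytes:
    """Simplified HKDF-style extract-then-expand using HMAC-SHA256."""
    prk = hmac.new(b"\x00" * 32, secret, hashlib.sha256).digest()            # extract
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()[:length]   # expand

# Concatenate both secrets before deriving the session key, so recovering the
# key requires breaking both the classical and the post-quantum exchange.
session_key = derive_key(classical_secret + pq_secret, b"hybrid key exchange demo")
print(session_key.hex())
```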

JASON KELLEY: Okay. I'm going to throw out two parallels for my very limited knowledge to make sure I understand. This reminds me a little bit of sort of the work that had to be done before Y2K in, in the sense of like, now people think nothing went wrong and nothing was ever gonna go wrong, but all of us working anywhere near the field know actually it took a ton of work to make sure that nothing blew up or stopped working.
And the other is that in, I think it was 1998, EFF was involved in something we called Deep Crack, where we made, that's a, I'm realizing now that's a terrible name. But anyway, the DES cracker, um, we basically wanted to show that DES was capable of being cracked, right? And that this was a - correct me if I'm wrong - it was some sort of cryptographic standard that the government was using and people wanted to show that it wasn't sufficient.

DEIRDRE CONNOLLY: Yes - I think it was the first Data Encryption Standard. And then after its vulnerability was shown, they tripled it up to make it useful. And that's why Triple DES is still used in a lot of places and is actually considered okay. And then later came the Advanced Encryption Standard, AES, which we prefer today.

JASON KELLEY: Okay, so we've learned the lesson, or we are learning the lesson, it sounds like.

DEIRDRE CONNOLLY: Uh huh.

CINDY COHN: Yeah, I think that that's, that's right. I mean, EFF built the DES cracker because in the nineties the government was insisting that something that everybody knew was really, really insecure and was going to only get worse as computers got stronger and, and strong computers got in more people's hands, um, to basically show that the emperor had no clothes, um, that this wasn't very good.
And I think with the NIST standards and what's happening with post-quantum is really, you know, the hopeful version is we learned that lesson and we're not seeing government trying to pretend like there isn't a risk in order to preserve old standards, but instead leading the way with new ones. Is that fair?

DEIRDRE CONNOLLY: That is very fair. NIST ran this post-quantum competition over almost 10 years, and it had over 80 submissions in the first round from all over the world, from industry, academia, and a mix of everything in between, and then it narrowed it down to the final ones. They're not all out yet, but there's the key agreement one, called ML-KEM, and three signatures. And there's a mix of cryptographic problems that they're based on, but there were multiple rounds, lots of feedback, lots of things got broken.
This competition has absolutely led the way for the world of getting ready for post-quantum cryptography. There are some competitions that have happened in Korea, and I think there's some work happening in China for their, you know, for their area.
There are other open standards, and there are standards happening in other standards bodies, but the NIST competition has led the way, because it's all open: all these standards are open, and all of the work and the cryptanalysis that has gone in for the whole stretch has been public. All these standards and drafts and analysis and attacks have been public, so it's able to benefit everyone in the world.

CINDY COHN: I got started in the crypto wars in the nineties, where the government was kind of the problem, and they still are. And I do wanna ask you about whether you're seeing any role of the kinda national security, FBI infrastructure, which has traditionally tried to put a thumb on the scales and make things less secure so that they could have access, if you're seeing any of that there.
But on the NIST side, I think this provides a nice counter example of how government can help facilitate building a better world sometimes, as opposed to being the thing we have to drag kicking and screaming into it.
But let me circle around to the question I embedded in that, which is, you know, one of the problems that that, that we know happened in the nineties around DES, and then of course some of the Snowden revelations indicated some mucking about in security as well behind the scenes by the NSA. Are you seeing anything like that and, and what should we be on the lookout for?

DEIRDRE CONNOLLY: Not in the PQC stuff. Uh, there, like there have been a lot of people that were paying very close attention to what these independent teams were proposing and then what was getting turned into a standard or a proposed standard and every little change, because I, I was closely following the key establishment stuff.
Um, every little change, people were trying to be like, did you tweak? Why did you tweak that? Did, like, is there a good reason? And like, running down basically all of those things. And like including trying to get into the nitty gritty of like, okay, we think this is approximately this many bits of security using these parameters, and like talking about, I dunno, 123 versus 128 bits, and like really paying attention to all of that stuff.
And I don't think there was any evidence of anything like that. And, and for, for plus or minus, because there were – I don't remember which crypto scheme it was, but there was definitely an improvement from, I think, some of the folks at NSA very quietly back in the day to, I think it was the S-boxes, and I don't remember if it was DES or AES or whatever it was.
But people didn't understand at the time because it was related to advanced, uh, I think it was differential cryptanalysis attacks that folks inside there knew about, and people in outside academia didn't quite know about yet. And then after the fact they were like, oh, they've made this better. Um, we're not, we're not even seeing any evidence of anything of that character either.
It's just sort of like, it's very open – letting, like, if everything's proceeding well and the products of these post-quantum standards are going well, like, you know, leave it alone. And so everything looks good. And like, especially for NSA, uh, National Security Systems in the United States, they have updated their own targets to migrate to post-quantum, and they are relying fully on the highest security level of these new standards.
So like they are eating their own dog food. They're protecting the highest classified systems and saying these need to be fully migrated to fully post-quantum key agreement, uh, and I think signatures at different times, but it has to be by, like, 2035. So if they were doing anything to kind of twiddle with those standards, they'd be, you know, hurting themselves and shooting themselves in the foot.

CINDY COHN: Well fingers crossed.

DEIRDRE CONNOLLY: Yes.

CINDY COHN: Because I wanna build a better internet, and a better internet means that they aren't secretly messing around with our security. And so this is, you know, cautiously good news.

JASON KELLEY: Let's take a quick moment to thank our sponsor.
“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also want to thank EFF members and donors. EFF has been fighting for digital rights for 35 years, and that fight is bigger than ever, so please, if you like what we do, go to eff.org/pod to donate. Also, we’d love for you to join us at this year’s EFF awards, where we celebrate the people working towards the better digital future that we all care so much about. Those are coming up on September 10th in San Francisco. You can find more information about that at eff.org/awards.
We also wanted to share that our friend Cory Doctorow has a new podcast. Listen to this.  [Who Broke the Internet trailer]

JASON KELLEY: And now, back to our conversation with Deirdre Connolly.

CINDY COHN: I think the thing that's fascinating about this is kind of seeing this cat and mouse game about the ability to break codes, and the ability to build codes and systems that are resistant to the breaking, kind of playing out here in the context of building better computers for everyone.
And I think it's really fascinating. And I think also, for people – you know, this is a pretty technical conversation, um, even, you know, uh, for our audience. But this is the stuff that goes on under the hood of how we keep journalists safe, how we keep activists safe, how we keep us all safe, whether it's our bank accounts or our, you know, people are talking about mobile IDs now and other, you know, all sorts of sensitive documents that are going to not be in physical form anymore, but are gonna be in digital form.
And unless we get this lock part right, we're really creating problems for people. And you know, what I really appreciate about you and the other people kind of in the midst of this fight is it's very unsung, right? It's kind of under the radar for the rest of us, but yet it's the, it's the ground that we need to stand on to, to be safe moving forward.

DEIRDRE CONNOLLY: Yeah, and there's a lot of assumptions, uh, that even the low level theoretical cryptographers and the people implementing their, their stuff into software and the stuff, the people trying to deploy, that there's a, a lot of assumptions that have been baked into what we've built that to a degree don't quite fit in some of the, the things we've been able to build in a post-quantum secure way, or the way we think it's a post-quantum secure way.
Um, we're gonna need to change some stuff and we think we know how to change some stuff to make it work. but we are hoping that we don't accidentally introduce any vulnerabilities or gaps.
We're trying, but also we're not a hundred percent sure that we're not missing something, 'cause these things are new. And so we're trying, and we're also trying to make sure we don't break things as we change them because we're trying to change them to be post quantum resilient. But you know, once you change something, if there's a possibility, you, you just didn't understand it completely. And you don't wanna break something that was working well in one direction because you wanna improve it in another direction.

CINDY COHN: And that's why I think it's important to continue to have a robust community of people who are the breakers, right? Who are, are hackers, who are, who are attacking. And that is a, you know, that's a mindset, right? That's a way of thinking about stuff that is important to protect and nurture, um, because, you know, there's an old quote from Bruce Schneier: anyone can build a crypto system that they themselves cannot break. Right? It takes a community of people trying to really pound away at something to figure out where the holes are.
And you know, a lot of the work that EFF does around coders rights and other kinds of things is to make sure that there's space for that. and I think it's gonna be as needed in a quantum world as it was in a kind of classical computer world.

DEIRDRE CONNOLLY: Absolutely. I'm confident that we will learn a lot more from the breakers about this new cryptography because, like, we've tried to be robust through this, you know, NIST competition, and a lot of the things that we learn apply to other constructions as they come out. But, like, there's a whole area of people who are going to be encountering this kind of newish cryptography for the first time, and they kind of look at it and they're like, oh, uh, I, I think I might be able to do something interesting with this. And we're, we'll all learn more and we'll try to patch and update as quickly as possible.

JASON KELLEY: And this is why we have competitions to figure out what the best options are and why some people might favor one algorithm over another for different, different processes and things like that.

DEIDRE CONNOLLY: And that's why we're probably gonna have a lot of different flavors of post-quantum cryptography getting deployed in the world because it's not just, ah, you know, I don't love NIST. I'm gonna do my own thing in my own country over here. Or, or have different requirements. There is that at play, but also you're trying to not put all your eggs in one basket as well.

CINDY COHN: Yeah, so we want a menu of things so that people can really pick from, you know, vetted but different strategies. So I wanna ask the kind of core question for the podcast, which is, um, what does it look like if we get this right, if we get quantum computing and, you know, post-quantum crypto, right?
How does the world look different? Or does it just look the same? How, what, what does it look like if we do this well?

DEIRDRE CONNOLLY: Hopefully to a person just using their phone or using their computer to talk to somebody on the other side of the world, hopefully they don't notice. Hopefully to them, if they're, you know, deploying a website and they're like, ah, I need to get a Let’s Encrypt certificate or whatever.
Hopefully Let's Encrypt just, you know, insert bot just kind of does everything right by default and they don't have to worry about it.
Um, for the builders, it should be, we have a good recommended menu of cryptography that you can use when you're deploying TLS, when you're deploying SSH, uh, when you're building cryptographic applications, especially.
So like if you are building something in Go or Java or you know, whatever it might be, the crypto library in your language will have the updated recommended signature algorithm or key agreement algorithm and be, like, this is how we, you know, they have code snippets to say like, this is how you should use it, and they will deprecate the older stuff.
And, like, unfortunately there's gonna be a long time where there's gonna be a mix of the new post-quantum stuff that we know how to use and know how to deploy, and protect the most important, you know, stuff – like to mitigate Store Now/Decrypt Later and, you know, get those signatures with the most important, uh, protected stuff.
Uh, get those done. But there's a lot of stuff that we're not really clear about how we wanna do it yet. And kind of going back to one of the things you mentioned earlier, uh, comparing this to Y2K: there was a lot of work that went into mitigating Y2K before, during, immediately after.
Unfortunately, the comparison to the post quantum migration kind of falls down because after Y2K, if you hadn't fixed something, it would break. And you would notice in usually an obvious way, and then you could go find it. You, you fix the most important stuff that, you know, if it broke, like you would lose billions of dollars or, you know, whatever. You'd have an outage.
For cryptography, especially the stuff that's a little bit fancier, um, you might not know it's broken, because the adversary is not gonna – it's not gonna blow up.
And you have to, you know, reboot a server or patch something and then, you know, redeploy. If it's gonna fail, it's gonna fail quietly. And so we're trying to kind of find these things, or at least make the kind of longer tail of stuff, uh, find fixes for that upfront, you know, so that at least the option is available.
But for a regular person, hopefully they shouldn't notice. So everyone's trying really hard to make it so that the best security, in terms of the cryptography, is deployed without downgrading your experience. We're gonna keep trying to do that.
I don't wanna build crap and say “Go use it.” I want you to be able to just go about your life and use a tool that's supposed to be useful and helpful. And it's not accidentally leaking all your data to some third party service or just leaving a hole on your network for any, any actor who notices to walk through and you know, all that sort of stuff.
So whether it's like implementing things securely in software, or it's cryptography or you know, post-quantum weirdness, like for me, I just wanna build good stuff for people, that's not crap.

JASON KELLEY: Everyone listening to this agrees with you. We don't want to build crap. We want to build some beautiful things. Let's go out there and do it.

DEIRDRE CONNOLLY: Cool.

JASON KELLEY: Thank you so much, Deirdre.

DEIRDRE CONNOLLY: Thank you!

CINDY COHN: Thank you Deirdre. We really appreciate you coming and explaining all of this to, you know, uh, the lawyer and activist at EFF.

JASON KELLEY: Well, I think that was probably the most technical conversation we've had, but I followed along pretty well, and I feel like at first I was very nervous based on the save-and-decrypt concerns. But after we talked to Deirdre, I feel like the people working on this, just like for Y2K, are pretty much gonna keep us out of hot water. And I learned a lot more than I knew before we started the conversation. What about you, Cindy?

CINDY COHN: I learned a lot as well. I mean, cryptography and attacks on security, it's always, you know, it's a process, and it's a process by which we do the best we can, and then, then we also do the best we can to rip it apart and find all the holes, and then we, we iterate forward. And it's nice to hear that that model is still the model, even as we get into something like quantum computers, which, um, frankly are still hard to conceptualize.
But I agree. I think that the good news outta this interview is, I feel like there's a lot of pieces in place to try to do this right, to have this tremendous shift in computing, which we don't know when it's coming but the research indicates it IS coming, be something that we can handle, um, rather than something that overwhelms us.
And I think that's really, it's good to hear that good people are trying to do the right thing here since it's not inevitable.

JASON KELLEY: Yeah, and it is nice when someone's sort of best vision for what the future looks like is hopefully your life. You will have no impacts from this because everything will be taken care of. That's always good.
I mean, it sounds like, you know, the main thing for EFF is, as you said, we have to make sure that security engineers, hackers have the resources that they need to protect us from these kinds of threats and, and other kinds of threats obviously.
But, you know, that's part of EFF's job, like you mentioned. Our job is to make sure that there are people able to do this work and be protected while doing it, so that when the solutions do come about, you know, they work and they're implemented, and the average person doesn't have to know anything and isn't vulnerable.

CINDY COHN: Yeah, I also think that, um, I appreciated her vision that this is, you know, the future's gonna be not just one size-fits-all solution, but a menu of things that take into account, you know, both what works better in terms of, you know, bandwidth and compute time, but also what, you know, what people actually need.
And I think that's a piece that's kind of built into the way that this is happening that's also really hopeful. In the past and, and I was around when EFF built the DES cracker, um, you know, we had a government that was saying, you know, you know, everything's fine, everything's fine when everybody knew that things weren't fine.
So it's also really hopeful that that's not the position that NIST is taking now, and that's not the position that people who may not even pick the NIST standards but pick other standards are really thinking through.

JASON KELLEY: Yeah, it's very helpful and positive and nice to hear when something has improved for the better. Right? And that's what happened here. We had this, this different attitude from, you know, government at large in the past and it's changed and that's partly thanks to EFF, which is amazing.

CINDY COHN: Yeah, I think that's right. And, um, you know, we'll see going forward, you know, the governments change and they go through different things, but this is, this is a hopeful moment and we're gonna push on through to this future.
I think there's a lot of, you know, there's a lot of worry about quantum computers and what they're gonna do in the world, and it's nice to have a little vision of, not only can we get it right, but there are forces in place that are getting it right. And of course it does my heart so, so good that, you know, someone like Deirdre was inspired by Snowden and dove deep and figured out how to be one of the people who was building the better world. We've talked to so many people like that, and this is a particular, you know, little geeky corner of the world. But, you know, those are our people and that makes me really happy.

JASON KELLEY: Thanks for joining us for this episode of How to Fix the Internet.
If you have feedback or suggestions, we'd love to hear from you. Visit EFF dot org slash podcast and click on listener feedback. While you're there, you can become a member, donate, maybe even pick up some merch and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of BeatMower with Reed Mathis
How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.
We’ll see you next time.
I’m Jason Kelley…

CINDY COHN: And I’m Cindy Cohn.

MUSIC CREDITS: This podcast is licensed creative commons attribution 4.0 international, and includes the following music licensed creative commons attribution 3.0 unported by its creators: Drops of H2O, The Filtered Water Treatment by Jay Lang. Sound design, additional music and theme remixes by Gaetan Harris.

Josh Richman

EFFecting Change: EFF Turns 35!

1 week 3 days ago

We're wishing EFF a happy birthday on July 10! Since 1990, EFF's lawyers, activists, analysts, and technologists have used everything in their toolkit to ensure that technology supports freedom, justice, and innovation for all people of the world. They've seen it all and in this special edition of our EFFecting Change livestream series, leading experts at EFF will explore what's next for technology users.

EFFecting Change Livestream Series:
EFF Turns 35!
Thursday, July 10th
11:00 AM - 12:00 PM Pacific - Check Local Time
This event is LIVE and FREE!


Join EFF Executive Director Cindy Cohn, EFF Legislative Director Lee Tien, EFF Director for International Freedom of Expression Jillian C. York, and Professor / EFF Board Member Yoshi Kohno for this live Q&A. Learn what they have seen and how we can fuel the fight for privacy, free expression, and a future where digital freedoms are protected for everyone. 

We hope you and your friends can join us live! Be sure to spread the word, and share our past livestreams. Please note that all events will be recorded for later viewing on our YouTube page.

Want to make sure you don’t miss our next livestream? Here’s a link to sign up for updates about this series: eff.org/ECUpdates.

Aaron Jue

Flock Safety’s Feature Updates Cannot Make Automated License Plate Readers Safe

2 weeks ago

Two recent statements from the surveillance company—one addressing Illinois privacy violations and another defending the company's national surveillance network—reveal a troubling pattern: when confronted by evidence of widespread abuse, Flock Safety has blamed users, downplayed harms, and doubled down on the very systems that enabled the violations in the first place.

Flock's aggressive public relations campaign to salvage its reputation comes as no surprise. Last month, we described how investigative reporting from 404 Media revealed that a sheriff's office in Texas searched data from more than 83,000 automated license plate reader (ALPR) cameras to track down a woman suspected of self-managing an abortion. (A scenario that may have been avoided, it's worth noting, had Flock taken action when they were first warned about this threat three years ago).

Flock calls the reporting on the Texas sheriff's office "purposefully misleading," claiming the woman was searched for as a missing person at her family's request rather than for her abortion. But that ignores the core issue: this officer used a nationwide surveillance dragnet (again: over 83,000 cameras) to track someone down, and used her suspected healthcare decisions as a reason to do so. Framing this as concern for her safety plays directly into anti-abortion narratives that depict abortion as dangerous and traumatic in order to justify increased policing, criminalization, control—and, ultimately, surveillance.

Flock Safety has blamed users, downplayed harms, and doubled down on the very systems that enabled the violations in the first place.

As if that weren't enough, the company has also come under fire for how its ALPR network data is being actively used to assist in mass deportation. Despite U.S. Immigration and Customs Enforcement (ICE) having no formal agreement with Flock Safety, public records revealed "more than 4,000 nation and statewide lookups by local and state police done either at the behest of the federal government or as an 'informal' favor to federal law enforcement, or with a potential immigration focus." The network audit data analyzed by 404 exposed an informal data-sharing environment that creates an end-run around oversight and accountability measures: federal agencies can access the surveillance network through local partnerships without the transparency and legal constraints that would apply to direct federal contracts.

Flock Safety is adamant this is "not Flock's decision," and by implication, not their fault. Instead, the responsibility lies with each individual local law enforcement agency. In the same breath, they insist that data sharing is essential, loudly claiming credit when the technology is involved in cross-jurisdictional investigations—but failing to show the same attitude when that data-sharing ecosystem is used to terrorize abortion seekers or immigrants. 

Flock Safety: The Surveillance Social Network

In growing from a 2017 startup to a $7.5 billion company "serving over 5,000 communities," Flock allowed individual agencies wide latitude to set and regulate their own policies. In effect, this approach offered cheap surveillance technology with minimal restrictions, leaving major decisions and actions in the hands of law enforcement while the company scaled rapidly.

And they have no intention of slowing down. Just this week, Flock launched its Business Network, facilitating unregulated data sharing amongst its private sector security clients. "For years, our law enforcement customers have used the power of a shared network to identify threats, connect cases, and reduce crime. Now, we're extending that same network effect to the private sector," Flock Safety's CEO announced.

Flock Safety wooing law enforcement officers at the 2023 International Chiefs of Police Conference.

The company is building out a new mass surveillance network using the exact template that ended with the company having to retrain thousands of officers in Illinois on how not to break state law—the same template that made it easy for officers to do so in the first place. Flock's continued integration of disparate surveillance networks across the public and private spheres—despite the harms that have already occurred—is due in part to the one thing that it's gotten really good at over the past couple of years: facilitating a surveillance social network.

Employing marketing phrases like "collaboration" and "force multiplier," Flock encourages as much sharing as possible, going as far as to claim that network effects can significantly improve case closure rates. They cultivate a sense of shared community and purpose among users so they opt into good faith sharing relationships with other law enforcement agencies across the country. But it's precisely that social layer that creates uncontrollable risk.

The possibility of human workarounds at every level undermines any technical safeguards Flock may claim. Search term blocking relies on officers accurately labeling search intent—a system easily defeated by entering vague reasons like "investigation" or incorrect justifications, made either intentionally or not. And, of course, words like "investigation" or "missing person" can mean virtually anything, offering no value to meaningful oversight of how and for what the system is being used. Moving forward, sheriff's offices looking to avoid negative press can surveil abortion seekers or immigrants with ease, so long as they use vague, innocuous-sounding reasons.

The same can be said for case number requirements, which depend on manual entry. This can easily be circumvented by reusing legitimate case numbers for unauthorized searches. Audit logs only track inputs, not contextual legitimacy. Flock's proposed AI-driven audit alerts, which may be able to flag suspicious activity only after searches (and harm) have already occurred, rely on local agencies to self-monitor misuse—despite their demonstrated inability to do so.

Flock operates as a single point of failure that can compromise—and has compromised—the privacy of millions of Americans simultaneously.

And, of course, even the most restrictive department policy may not be enough. Austin, Texas, had implemented one of the most restrictive ALPR programs in the country, and the program still failed: the city's own audit revealed systematic compliance failures that rendered its guardrails meaningless. The company's continued appeal to "local policies" means nothing when Flock's data-sharing network does not account for how law enforcement policies, regulations, and accountability vary by jurisdiction. You may have a good relationship with your local police, who solicit your input on what their policy looks like; you don't have that same relationship with hundreds or thousands of other agencies with whom they share their data. So if an officer on the other side of the country violates your privacy, it’d be difficult to hold them accountable. 

ALPR surveillance systems are inherently vulnerable to both technical exploitation and human manipulation. These vulnerabilities are not theoretical—they represent real pathways for bad actors to access vast databases containing millions of Americans' location data. When surveillance databases are breached, the consequences extend far beyond typical data theft—this information can be used to harass, stalk, or even extort. The intimate details of people's daily routines, their associations, and their political activities may become available to anyone with malicious intent. Flock operates as a single point of failure that can compromise—and has compromised—the privacy of millions of Americans simultaneously.

Don't Stop de-Flocking

Rather than addressing legitimate concerns about privacy, security, and constitutional rights, Flock has only promised updates that fall short of meaningful reforms. These software tweaks and feature rollouts cannot assuage the fear engendered by the massive surveillance system it has built and continues to expand.

A typical specimen of Flock Safety's automated license plate readers.

Flock's insistence that what's happening with abortion criminalization and immigration enforcement has nothing to do with them—that these are just red-state problems or the fault of rogue officers—is concerning. Flock designed the network that is being used, and the public should hold them accountable for failing to build in protections from abuse that cannot be easily circumvented.

Thankfully, that's exactly what's happening: cities like Austin, San Marcos, Denver, Norfolk, and San Diego are pushing back. And it's not nearly as hard a choice as Flock would have you believe: Austinites are weighing the benefits of a surveillance system that generates a hit less than 0.02% of the time against the possibility that scanning 75 million license plates will result in an abortion seeker being tracked down by police, or an immigrant being flagged by ICE in a so-called "sanctuary city." These are not hypothetical risks. It is already happening.

Given how pervasive, sprawling, and ungovernable ALPR sharing networks have become, the only feature update we can truly rely on to protect people's rights and safety is no network at all. And we applaud the communities taking decisive action to dismantle this surveillance infrastructure.

Follow their lead: don't stop de-flocking.

Sarah Hamid

Today's Supreme Court Decision on Age Verification Tramples Free Speech and Undermines Privacy

2 weeks ago

Today’s decision in Free Speech Coalition v. Paxton is a direct blow to the free speech rights of adults. The Court ruled that “no person—adult or child—has a First Amendment right to access speech that is obscene to minors without first submitting proof of age.” This ruling allows states to enact onerous age-verification rules that will block adults from accessing lawful speech, curtail their ability to be anonymous, and jeopardize their data security and privacy. These are real and immense burdens on adults, and the Court was wrong to ignore them in upholding Texas’ law.  

Importantly, the Court's reasoning applies only to age-verification rules for certain sexual material, and not to age limits in general. We will continue to fight against age restrictions on online access more broadly, such as on social media and specific online features.  

Still, the decision has immense consequences for internet users in Texas and in other states that have enacted similar laws. The Texas law forces adults to submit personal information over the internet to access entire websites that hold some amount of sexual material, not just pages or portions of sites that contain specific sexual materials. Many sites that cannot reasonably implement age verification measures for reasons such as cost or technical requirements will likely block users living in Texas and other states with similar laws wholesale.  

Importantly, the Court's reasoning applies only to age-verification rules for certain sexual material, and not to age limits in general. 

Many users will not be comfortable sharing private information to access sites that do implement age verification, for reasons of privacy or concern for data breaches. Many others do not have a driver’s license or photo ID to complete the age verification process. This decision will, ultimately, deter adult users from speaking and accessing lawful content, and will endanger the privacy of those who choose to go forward with verification. 

What the Court Said Today 

In the 6-3 decision, the Court ruled that Texas’ HB 1181 is constitutional. This law requires websites that Texas decides are composed of “one-third” or more of “sexual material harmful to minors” to confirm the age of users by collecting age-verifying personal information from all visitors—even to access the other two-thirds of material that is not adult content.   

In 1997, the Supreme Court struck down a federal online age-verification law in Reno v. American Civil Liberties Union. In that case the court ruled that many elements of the Communications Decency Act violated the First Amendment, including part of the law making it a crime for anyone to engage in online speech that is "indecent" or "patently offensive" if the speech could be viewed by a minor. Like HB 1181, that law would have resulted in many users being unable to view constitutionally protected speech, as many websites would have had to implement age verification, while others would have been forced to shut down.  

In Reno and in subsequent cases, the Supreme Court ruled that laws that burden adults’ access to lawful speech are subjected to the highest level of review under the First Amendment, known as strict scrutiny. This level of scrutiny requires a law to be very narrowly tailored or the least speech-restrictive means available to the government.  

That all changed with the Supreme Court’s decision today.  

The Court now says that laws that burden adults’ access to sexual materials that are obscene to minors are subject to less-searching First Amendment review, known as intermediate scrutiny. And under that lower standard, the Texas law does not violate the First Amendment. The Court did not have to respond to arguments that there are less speech-restrictive ways of reaching the same goal—for example, encouraging parents to install content-filtering software on their children’s devices.

The court reached this decision by incorrectly assuming that online age verification is functionally equivalent to flashing an ID at a brick-and-mortar store. As we explained in our amicus brief, this ignores the many ways in which verifying age online is significantly more burdensome and invasive than doing so in person. As we and many others have previously explained, unlike with in-person age-checks, the only viable way for a website to comply with an age verification requirement is to require all users to upload and submit—not just momentarily display—a data-rich government-issued ID or other document with personal identifying information.  

This leads to a host of serious anonymity, privacy, and security concerns—all of which the majority failed to address. A person who submits identifying information online can never be sure if websites will keep that information or how that information might be used or disclosed. This leaves users highly vulnerable to data breaches and other security harms. Age verification also undermines anonymous internet browsing, even though courts have consistently ruled that anonymity is an aspect of the freedom of speech protected by the First Amendment.    

This Supreme Court broke a fundamental agreement between internet users and the state that has existed since its inception

The Court sidestepped its previous online age verification decisions by claiming the internet has changed too much to follow the precedent from Reno that requires these laws to survive strict scrutiny. Writing for the minority, Justice Kagan disagreed with the premise that the internet has changed: “the majority’s claim—again mistaken—that the internet has changed too much to follow our precedents’ lead.”   

But the majority argues that past precedent does not account for the dramatic expansion of the internet since the 1990s, which has led to easier and greater internet access and larger amounts of content available to teens online. The majority’s opinion entirely fails to address the obvious corollary: the internet’s expansion also has benefited adults. Age verification requirements now affect exponentially more adults than they did in the 1990s and burden vastly more constitutionally protected online speech. The majority's argument actually demonstrates that the burdens on adult speech have grown dramatically larger because of technological changes, yet the Court bizarrely interprets this expansion as justification for weaker constitutional protection. 

What It Means Going Forward 

This Supreme Court broke a fundamental agreement between internet users and the state that has existed since its inception: the government will not stand in the way of people accessing First Amendment-protected material. There is no question that multiple states will now introduce similar laws to Texas. Two dozen already have, though they are not all in effect. At least three of those states have no limit on the percentage of material required before the law applies—a sweeping restriction on every site that contains any material that the state believes the law includes. These laws will force U.S.-based adult websites to implement age-verification or block users in those states, as many have in the past when similar laws were in effect.  

Research has found that, rather than submit to verification, people will choose a variety of other paths: using VPNs to make it appear that they are outside of the state, or accessing similar sites that don’t comply with the law, often because the site is operating in a different country. While many users will simply not access the content as a result, others may accept the risk, at their peril.

We expect some states to push the envelope in terms of what content they consider “harmful to minors,” and to expand the type of websites that are covered by these laws, either through updated language or threats of litigation. Even if these attacks are struck down, operators of sites that involve sexual content of any type may be under threat, especially if that information is politically divisive. We worry that the point of some of these laws will be to deter queer folks and others from accessing lawful speech and finding community online by requiring them to identify themselves. We will continue to fight to protect against the disclosure of this critical information and for people to maintain their anonymity. 

EFF Will Continue to Fight for All Users’ Free Expression and Privacy 

That said, the ruling does not give states or Congress the green light to impose age-verification regulations on the broader internet. The majority’s decision rests on the fact that minors do not have a First Amendment right to access sexual material that would be obscene. In short, adults have a First Amendment right to access those sexual materials, while minors do not. Although it was wrong, the majority’s opinion ruled that because Texas is blocking minors from speech they have no constitutional right to access, the age-verification requirement only incidentally burdens adults’ First Amendment rights.

But the same rationale does not apply to general-audience sites and services, including social media. Minors and adults have coextensive rights to both speak and access the speech of other users on these sites because the vast majority of the speech is not sexual materials that would be obscene to minors. Lawmakers should be careful not to interpret this ruling to mean that broader restrictions on minors’ First Amendment rights, like those included in the Kids Online Safety Act, would be deemed constitutional.  

Free Speech Coalition v. Paxton will have an effect on nearly every U.S. adult internet user for the foreseeable future. It marks a worrying shift in the ways that governments can restrict access to speech online. But that only means we must work harder than ever to protect privacy, security, and free speech as central tenets of the internet.  

Aaron Mackey

Georgia Court Rules for Transparency over Private Police Foundation

2 weeks ago

A Georgia court has decided that private non-profit Atlanta Police Foundation (APF) must comply with public records requests under the Georgia Open Records Act for some of its functions on behalf of the Atlanta Police Department. This is a major win for transparency in the state. 

The lawsuit was brought last year by the Atlanta Community Press Collective (ACPC) and Electronic Frontier Alliance member Lucy Parsons Labs (LPL). It concerns the APF’s refusal to disclose records about its role as the lessee and manager of the site of so-called Cop City, the Atlanta Public Safety Training Center at the heart of a years-long battle that pitted local social and environmental movements against the APF. We’ve previously written about how APF and similar groups fund police surveillance technology, and how the Atlanta Police Department spied on the social media of activists opposed to Cop City.

This is a big win for transparency and for local communities who want to maintain their right to know what public agencies are doing. 

Police Foundations often provide resources to police departments that help them avoid public oversight, and the Atlanta Police Foundation leads the way with its maintenance of the Loudermilk Video Integration Center and its role in Cop City, which will be used by public agencies including the Atlanta Police Department and others.

ACPC and LPL were represented by attorneys Joy Ramsingh, Luke Andrews, and Samantha Hamilton, who had won the release of some materials this past December. The plaintiffs had earlier been represented by the University of Georgia School of Law First Amendment Clinic.

The win comes at just the right time. Last summer, the Georgia Supreme Court ruled that private contractors working for public entities are subject to open records laws. The Georgia state legislature then passed a bill to make it harder to file public records requests against private entities. Following this month’s ruling, the Atlanta Police Foundation still has time to appeal the decision, but failing that, it will have to begin complying with public records requests by the beginning of July.

We hope that this will help ensure transparency and accountability when government agencies farm out public functions to private entities, so that local activists and journalists will be able to uncover materials that should be available to the general public. 

José Martinez

Two Courts Rule On Generative AI and Fair Use — One Gets It Right

2 weeks 1 day ago

Things are speeding up in generative AI legal cases, with two judicial opinions just out on an issue that will shape the future of generative AI: whether training gen-AI models on copyrighted works is fair use. One gets it spot on; the other, not so much, but fortunately in a way that future courts can and should discount.

The core question in both cases was whether using copyrighted works to train Large Language Models (LLMs) used in AI chatbots is a lawful fair use. Under the US Copyright Act, answering that question requires courts to consider:

  1. whether the use was transformative;
  2. the nature of the works (Are they more creative than factual? Long since published?);
  3. how much of the original was used; and
  4. the harm to the market for the original work.

In both cases, the judges focused on factors (1) and (4).

The right approach

In Bartz v. Anthropic, three authors sued Anthropic for using their books to train its Claude chatbot. In his order deciding parts of the case, Judge William Alsup confirmed what EFF has said for years: fair use protects the use of copyrighted works for training because, among other things, training gen-AI is “transformative—spectacularly so” and any alleged harm to the market for the original is pure speculation. Just as copying books or images to create search engines is fair, the court held, copying books to create a new, “transformative” LLM and related technologies is also protected:

[U]sing copyrighted works to train LLMs to generate new text was quintessentially transformative. Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works not to race ahead and replicate or supplant them—but to turn a hard corner and create something different. If this training process reasonably required making copies within the LLM or otherwise, those copies were engaged in a transformative use.

Importantly, Bartz rejected the copyright holders’ attempts to claim that any model capable of generating new written material that might compete with existing works by emulating their “sweeping themes,” “substantive points,” or “grammar, composition, and style” was an infringement machine. As the court rightly recognized, building gen-AI models that create new works is beyond “anything that any copyright owner rightly could expect to control.”

There’s a lot more to like about the Bartz ruling, but just as we were digesting it Kadrey v. Meta Platforms came out. Sadly, this decision bungles the fair use analysis.

A fumble on fair use

Kadrey is another suit by authors against the developer of an AI model, in this case Meta’s ‘Llama’ chatbot. The authors in Kadrey asked the court to rule that fair use did not apply.

Much of the Kadrey ruling by Judge Vince Chhabria is dicta—meaning, the opinion spends many paragraphs on what it thinks could justify ruling in favor of the author plaintiffs, if only they had managed to present different facts (rather than pure speculation). The court then rules in Meta’s favor because the plaintiffs only offered speculation. 

But it makes a number of errors along the way to the right outcome. At the top, the ruling broadly proclaims that training AI without buying a license to use each and every piece of copyrighted training material will be “illegal” in “most cases.” The court asserted that fair use usually won’t apply to AI training uses even though training is a “highly transformative” process, because of hypothetical “market dilution” scenarios in which competition from AI-generated works could reduce the value of the books used to train the AI model.

That theory, in turn, depends on three mistaken premises. First, that the most important factor for determining fair use is whether the use might cause market harm. That’s not correct. Since its seminal 1994 opinion in Campbell v. Acuff-Rose, the Supreme Court has been very clear that no single factor controls the fair use analysis.

Second, that an AI developer would typically seek to train a model entirely on a certain type of work, and then use that model to generate new works in the exact same genre, which would then compete with the works on which it was trained, such that the market for the original works is harmed. As the Kadrey ruling notes, there was no evidence that Llama was intended to do, or does, anything like that, nor will most LLMs, for the exact reasons discussed in Bartz.

Third, as a matter of law, copyright doesn't prevent “market dilution” unless the new works are otherwise infringing. In fact, the whole purpose of copyright is to be an engine for new expression. If that new expression competes with existing works, that’s a feature, not a bug.

Gen-AI is spurring the kind of tech panics we’ve seen before; then, as now, thoughtful fair use opinions helped ensure that copyright law served innovation and creativity. Gen-AI does raise a host of other serious concerns about fair labor practices and misinformation, but copyright wasn’t designed to address those problems. Trying to force copyright law to play those roles only hurts important and legal uses of this technology.

In keeping with that tradition, courts deciding fair use in other AI copyright cases should look to Bartz, not Kadrey.

Tori Noble

Ahead of Budapest Pride, EFF and 46 Organizations Call on European Commission to Defend Fundamental Rights in Hungary

2 weeks 1 day ago

This week, EFF joined EDRi and nearly 50 civil society organizations urging the European Commission’s President Ursula von der Leyen, Executive Vice President Henna Virkkunen, and Commissioners Michael McGrath and Hadja Lahbib to take immediate action and defend human rights in Hungary.

The European Commission has a responsibility to protect EU fundamental rights, including the rights of LGBTQ+ individuals in Hungary and across the Union

With Budapest Pride just two days away, Hungary has criminalized Pride marches and is planning to deploy real-time facial recognition technology to identify those participating in the event. This is a flagrant violation of fundamental rights, particularly the rights to free expression and assembly.

On April 15, a new amendment package went into effect in Hungary which authorizes the use of real-time facial recognition to identify protesters at ‘banned protests’ like LGBTQ+ events, and includes harsh penalties like excessive fines and imprisonment. This is prohibited by the EU Artificial Intelligence (AI) Act, which does not permit the use of real-time face recognition for these purposes.

This came on the back of members of Hungary’s Parliament rushing through three amendments in March to ban and criminalize Pride marches and their organizers, and permit the use of real-time facial recognition technologies for the identification of protestors. These amendments were passed without public consultation and are in express violation of the EU AI Act and Charter of Fundamental Rights. In response, civil society organizations urged the European Commission to put interim measures in place to rectify the violation of fundamental rights and values. The Commission has yet to respond—a real cause for concern.

This is an attack on LGBTQ+ individuals, as well as an attack on the rights of all people in Hungary. The letter urges the European Commission to take the following actions:

  • Open an infringement procedure against any new violations of EU law, in particular the violation of Article 5 of the AI Act
  • Adopt interim measures in the ongoing infringement proceeding against Hungary’s 2021 anti-LGBT law, which is used as a legal basis for the ban on LGBTQIA+ related public assemblies, including Budapest Pride.

There's no question that, when EU law is at stake, the European Commission has a responsibility to protect EU fundamental rights, including the rights of LGBTQ+ individuals in Hungary and across the Union. This includes ensuring that those organizing and marching at Pride in Budapest are safe and able to peacefully assemble and protest. If the EU Commission does not urgently act to ensure these rights, it risks hollowing out the values that the EU is built on.

Read our full letter to the Commission here.

Paige Collings

How Cops Can Get Your Private Online Data

2 weeks 1 day ago

Can the cops get your online data? In short, yes. There are a variety of US federal and state laws which give law enforcement powers to obtain information that you provided to online services. But, there are steps you as a user and/or as a service provider can take to improve online privacy.

Law enforcement demanding access to your private online data goes back to the beginning of the internet. In fact, one of EFF’s first cases, Steve Jackson Games v. Secret Service, exemplified the now all-too-familiar story where unfounded claims about illegal behavior resulted in overbroad seizures of user messages. But it’s not the ’90s anymore; the internet has become an integral part of everyone’s life. Everyone now relies on organizations big and small to steward our data, from huge service providers like Google, Meta, or your ISP, to hobbyists hosting a blog or Mastodon server.

There is no “cloud,” just someone else's computer—and when the cops come knocking on their door, these hosts need to be willing to stand up for privacy, and know how to do so to the fullest extent under the law. These legal limits are also important for users to know, not only to mitigate risks in their security plan when choosing where to share data, but to understand whether these hosts are going to bat for them. Taking action together, service hosts and users can keep law enforcement from getting more data than they’re allowed, protecting not just themselves but targeted populations, present and future.

This is distinct from law enforcement’s methods of collecting public data, such as the information now being collected on student visa applicants. Cops may use social media monitoring tools and sock puppet accounts to collect what you share publicly, or even within “private” communities. Police may also obtain the contents of communication in other ways that do not require court authorization, such as monitoring network traffic passively to catch metadata and possibly using advanced tools to partially reveal encrypted information. They can even outright buy information from online data brokers. Unfortunately there are few restrictions or oversight for these practices—something EFF is fighting to change.

Below, however, is a general breakdown of the legal processes used by US law enforcement for accessing private data, and what categories of private data these processes can disclose. Because this is a generalized summary, it is neither exhaustive nor legal advice. Please seek legal help if you have specific data privacy and security needs.

Type of data | Process used | Challenge prior to disclosure? | Proof needed
Subscriber information | Subpoena | Yes | Relevant to an investigation
Non-content information, metadata | Court order; sometimes subpoena | Yes | Specific and articulable facts that info is relevant to an investigation
Stored content | Search warrant | No | Probable cause that info will provide evidence of a crime
Content in transit | Super warrant | No | Probable cause plus exhaustion and minimization

Types of Data that Can be Collected

The laws protecting private data online generally follow a pattern: the more sensitive the personal data is, the greater factual and legal burden police have to meet before they can obtain it. Although this is not exhaustive, here are a few categories of data you may be sharing with services, and why police might want to obtain it.

    • Subscriber Data: Information you provide in order to use the service. Think about ID or payment information, IP address location, email, phone number, and other information you provided when signing up.
      • Law enforcement can learn who controls an anonymous account, and find other service providers to gather information from.
    • Non-content data, or "metadata": This is saved information about your interactions on the service, like when you used the service, for how long, and with whom. Analogous to what a postal worker can infer from a sealed letter with addressing information.
      • Law enforcement can use this information to infer a social graph, login history, and other information about a suspect’s behavior.
    • Stored content: This is the actual content you are sending and receiving, like your direct message history or saved drafts. This can cover any private information your service provider can access.
      • This most sensitive data is collected to reveal criminal evidence. Overly broad requests also allow for retroactive searches, information on other users, and can take information out of its original context.
    • Content in transit: This is the content of your communications as it is being communicated. This real-time access may also collect info which isn’t typically stored by a provider, like your voice during a phone call.
      • Law enforcement can compel providers to wiretap their own services for a particular user—which may also implicate the privacy of users they interact with.
    Legal Processes Used to Get Your Data

    When US law enforcement has identified a service that likely has this data, they have a few tools to legally compel that service to hand it over and prevent users from knowing information is being collected.

    Subpoena

    Subpoenas are demands from a prosecutor, law enforcement, or a grand jury which do not require the approval of a judge before being sent to a service. The only restriction is that the demand be relevant to an investigation. Often the only time a court reviews a subpoena is when a service or user challenges it in court.

    Due to the lack of direct court oversight in most cases, subpoenas are prone to abuse and overreach. Providers should scrutinize such requests carefully with a lawyer and push back before disclosure, particularly when law enforcement tries to use subpoenas to obtain more private data, such as the contents of communications.

    Court Order

    This is a similar demand to subpoenas, but usually pertains to a specific statute which requires a court to authorize the demand. Under the Stored Communications Act, for example, a court can issue an order for non-content information if police provide specific facts that the information being sought is relevant to an investigation. 

    Like subpoenas, providers can usually challenge court orders before disclosure and inform the user(s) of the request, subject to law enforcement obtaining a gag order (more on this below). 

    Search Warrant

    A warrant is a demand issued by a judge to permit police to search specific places or persons. To obtain a warrant, police must submit an affidavit (a written statement made under oath) establishing that there is a fair probability (or “probable cause”) that evidence of a crime will be found at a particular place or on a particular person. 

    Typically services cannot challenge a warrant before disclosure, as these requests are already approved by a magistrate. Sometimes police request that judges also enter gag orders against the target of the warrant that prevent hosts from informing the public or the user that the warrant exists.

    Super Warrant

    Police seeking to intercept communications as they occur generally face the highest legal burden. Usually the affidavit needs to not only establish probable cause, but also make clear that other investigation methods are not viable (exhaustion) and that the collection avoids capturing irrelevant data (minimization). 

    Some laws also require high-level approval within law enforcement, such as from agency leadership, before the request can be made. Some laws also limit the types of crimes for which law enforcement may use wiretaps. The laws may also require law enforcement to periodically report back to the court about the wiretap, including whether they are minimizing collection of non-relevant communications.

    Generally these demands cannot be challenged while wiretapping is occurring, and providers are prohibited from telling the targets about the wiretap. But some laws require disclosure to targets and those who were communicating with them after the wiretap has ended. 

    Gag orders

    Many of the legal authorities described above also permit law enforcement to simultaneously prohibit the service from telling the target of the legal process or the general public that the surveillance is occurring. These non-disclosure orders are prone to abuse and EFF has repeatedly fought them because they violate the First Amendment and prohibit public understanding about the breadth of law enforcement surveillance.

    How Services Can (and Should) Protect You

    This process isn't always clean-cut, and service providers must ultimately comply with lawful demands for users’ data, even when they challenge them and courts uphold the government’s demands.

    Service providers outside the US also aren’t totally in the clear, as they must often comply with US law enforcement demands. This is usually because they either have a legal presence in the US or because they can be compelled through mutual legal assistance treaties and other international legal mechanisms. 

    However, services can do a lot by following a few best practices to defend user privacy, thus limiting the impact of these requests and in some cases making their service a less appealing door for the cops to knock on.

    Put Cops through the Process

    Paramount is the service provider's willingness to stand up for their users. Carving out exceptions or volunteering information outside of the legal framework erodes everyone's right to privacy. Even in exigent or urgent circumstances, the responsibility is not on you to decide what to share, but on the legal process.

    Smaller hosts, like those of decentralized services, might be intimidated by these requests, but consulting legal counsel will ensure requests are challenged when necessary. Organizations like EFF can sometimes provide legal help directly or connect service providers with alternative counsel.

    Challenge Bad Requests

    It’s not uncommon for law enforcement to overreach or make burdensome requests. Before offering information, services can push back on an improper demand informally, and then continue to do so in court. If the demand is overly broad, violates a user's First or Fourth Amendment rights, or has other legal defects, a court may rule that it is invalid and prevent disclosure of the user’s information.

    Even if a court doesn’t invalidate the legal demand entirely, pushing back informally or in court can limit how much personal information is disclosed and mitigate privacy impacts.

    Provide Notice 

    Unless otherwise restricted, service providers should give notice about requests and disclosures as soon as they can. This notice is vital for users to seek legal support and prepare a defense.

    Be Clear With Users 

    It is important for users to understand if a host is committed to pushing back on data requests to the full extent permitted by law. Privacy policies with fuzzy thresholds like "when deemed appropriate" or “when requested” make it ambiguous whether a user’s right to privacy will be respected. The best practices for providers not only require clarity and a willingness to push back on law enforcement demands, but also a commitment to be transparent with the public about law enforcement’s demands, for example with regular transparency reports breaking down the countries and states making these data requests.

    Social media services should also consider clear guidelines for finding and removing sock puppet accounts operated by law enforcement on the platform, as these serve as a backdoor to government surveillance.

    Minimize Data Collection 

    You can't be compelled to disclose data you don’t have. If you collect lots of user data, law enforcement will eventually come demanding it. Operating a service typically requires some collection of user data, even if it’s just login information. But the problem is when information starts to be collected beyond what is strictly necessary. 

    This excess collection can be seen as convenient or useful for running the service, or often as potentially valuable like behavioral tracking used for advertising. However, the more that’s collected, the more the service becomes a target for both legal demands and illegal data breaches. 

    For data that enables desirable features for the user, design choices can make privacy the default and give users additional (preferably opt-in) sharing choices. 
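
    As a trivial sketch of what "privacy by default" can look like in practice (the names below are hypothetical and not any particular service's API), sharing settings simply start out off and only change through an explicit, user-initiated opt-in:

```python
# Hypothetical account-settings sketch: data sharing is off unless the user opts in.
from dataclasses import dataclass, field

@dataclass
class PrivacySettings:
    share_usage_analytics: bool = False   # opt-in, never opt-out
    show_profile_publicly: bool = False   # private by default
    retain_activity_log_days: int = 30    # short retention by default

@dataclass
class Account:
    username: str
    privacy: PrivacySettings = field(default_factory=PrivacySettings)

    def opt_in_to_analytics(self) -> None:
        # Sharing only ever turns on through an explicit user action.
        self.privacy.share_usage_analytics = True

alice = Account("alice")
assert alice.privacy.share_usage_analytics is False  # the default is the private choice
```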

    Shorter Retention

    As another minimization strategy, hosts should regularly and automatically delete information when it is no longer necessary. For example, deleting logs of user activity can limit the scope of law enforcement’s retrospective surveillance—maybe limiting a court order to the last 30 days instead of the lifetime of the account. 

    Again, design choices, like giving users the ability to send disappearing messages and deleting them from the server once they're downloaded, can also further limit the impact of future data requests. Furthermore, these design choices should have privacy-preserving defaults.
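
    To make the retention idea concrete, here is a minimal sketch of an automated purge job, assuming a simple SQLite `logs` table with ISO-8601 UTC timestamps in a `created_at` column (the table and column names are illustrative, not any particular service's schema):

```python
# Illustrative retention job: delete activity logs older than 30 days.
# Run daily via cron or any scheduler.
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30

def purge_old_logs(db_path: str = "service.db") -> int:
    cutoff = (datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)).isoformat()
    with sqlite3.connect(db_path) as conn:
        # ISO-8601 UTC strings compare correctly as text, so a plain < works here.
        cur = conn.execute("DELETE FROM logs WHERE created_at < ?", (cutoff,))
        return cur.rowcount  # number of rows removed

if __name__ == "__main__":
    print(f"Purged {purge_old_logs()} expired log entries")
```

    With a job like this in place, even a valid court order for activity logs can only ever reach the most recent 30 days, not the lifetime of the account.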

    Avoid Data Sharing 

    Depending on the service being hosted there may be some need to rely on another service to make everything work for users. Third-party login or ad services are common examples with some amount of tracking built in. Information shared with these third parties should also be minimized and avoided, as they may not have a strict commitment to user privacy. Most notoriously, data brokers who sell advertisement data can provide another legal work-around for law enforcement by letting them simply buy collected data across many apps. This extends to decisions about what information is made public by default, and thus accessible to many third parties, and whether that is clear to users.

    (True) End-to-End Encryption

    Now that HTTPS is actually everywhere, most traffic between a service and a user can be easily secured—for free. This limits what onlookers can collect on users of the service, since messages between the two are in a secure “envelope.” However, this doesn’t change the fact that the service is opening this envelope before passing it along to other users, or returning it to the same user. Each opened message is more information the service has to defend.

    Better is end-to-end encryption (e2ee), which just means providing users with secure envelopes that even the service provider cannot open. This is how a featureful messaging app like Signal can respond to requests with only three pieces of information: the account identifier (phone number), the date of creation, and the last date of access. Many services should follow suit and limit access through encryption.
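
    As a rough illustration of the "sealed envelope" idea (this is not Signal's actual protocol, and the library choice is just one example), here is a minimal sketch using the PyNaCl library:

```python
# Minimal e2ee sketch using PyNaCl (libsodium bindings): the service only ever
# relays ciphertext it cannot open. Install with: pip install pynacl
from nacl.public import PrivateKey, SealedBox

# The recipient generates a keypair on their own device; only the public key
# is ever uploaded to the service.
recipient_key = PrivateKey.generate()
recipient_public = recipient_key.public_key

# The sender encrypts to that public key. This sealed "envelope" is all the
# service stores and forwards, and all it could hand over in response to a demand.
envelope = SealedBox(recipient_public).encrypt(b"meet at the courthouse at 6")

# Only the recipient's private key, which never leaves their device, opens it.
assert SealedBox(recipient_key).decrypt(envelope) == b"meet at the courthouse at 6"
```

    Real messaging apps layer on sender authentication and forward secrecy, but the property that matters for legal demands is the same: the provider holds only envelopes it cannot open.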

    Note that while e2ee has become a popular marketing term, it is simply inaccurate for describing any encryption use designed to be broken or circumvented. Implementing “encryption backdoors” to break encryption when desired, or simply collecting information before or after the envelope is sealed on a user’s device (“client-side scanning”) is antithetical to encryption. Finally, note that e2ee does not protect against law enforcement obtaining the contents of communications should they gain access to any device used in the conversation, or if message history is stored on the server unencrypted.

    Protecting Yourself and Your Community

    As outlined, often the security of your personal data depends on the service providers you choose to use. But as a user you do still have some options. EFF’s Surveillance Self-Defense is a maintained resource with many detailed steps you can take. In short, you need to assess your risks, limit the services you use to those you can trust (as much as you can), improve settings, and when all else fails, accessorize with tools that prevent data sharing in the first place—like EFF’s Privacy Badger browser extension.

    Remember that privacy is a team sport. It’s not enough to make these changes as an individual; it’s just as important to share and educate others, and to fight for better digital privacy policy at all levels of governance. Learn, get organized, and take action.

     

    Rory Mir

    California’s Corporate Cover-Up Act Is a Privacy Nightmare

    2 weeks 2 days ago

    California lawmakers are pushing one of the most dangerous privacy rollbacks we’ve seen in years. S.B. 690, what we’re calling the Corporate Cover-Up Act, is a brazen attempt to let corporations spy on us in secret, gutting long-standing protections without a shred of accountability.

    The Corporate Cover-Up Act is a massive carve-out that would gut California’s Invasion of Privacy Act (CIPA) and give Big Tech and data brokers a green light to spy on us without consent for just about any reason. If passed, S.B. 690 would let companies secretly record your clicks, calls, and behavior online—then share or sell that data with whomever they’d like, all under the banner of a “commercial business purpose.”

    Simply put, The Corporate Cover-Up Act (S.B. 690) is a blatant attack on digital privacy, and is written to eviscerate long-standing privacy laws and legal safeguards Californians rely on. If passed, it would:

    • Gut California’s Invasion of Privacy Act (CIPA)—a law that protects us from being secretly recorded or monitored
    • Legalize corporate wiretaps, allowing companies to intercept real-time clicks, calls, and communications
    • Authorize pen registers and trap-and-trace tools, which track who you talk to, when, and how—without consent
    • Let companies use all of this surveillance data for “commercial business purposes”—with zero notice and no legal consequences

    This isn’t a small fix. It’s a sweeping rollback of hard-won privacy protections—the kind that helped expose serious abuses by companies like Facebook, Google, and Oracle.

    TAKE ACTION

    You Can't Opt Out of Surveillance You Don't Know Is Happening

    Proponents of The Corporate Cover-Up Act claim it’s just a “clarification” to align CIPA with the California Consumer Privacy Act (CCPA). That’s misleading. The truth is, CIPA and CCPA don’t conflict. CIPA stops secret surveillance. The CCPA governs how data is used after it’s collected, such as through the right to opt out of your data being shared.

    You can't opt out of being spied on if you’re never told it’s happening in the first place. Once companies collect your data under S.B. 690, they can:

    • Sell it to data brokers
    • Share it with immigration enforcement or other government agencies
    • Use it to target abortion seekers, LGBTQ+ people, workers, and protesters, and
    • Retain it indefinitely for profiling

    …with no consent; no transparency; and no recourse.

    The Communities Most at Risk

    This bill isn’t just a tech policy misstep. It’s a civil rights disaster. If passed, S.B. 690 will put the most vulnerable people in California directly in harm’s way:

    • Immigrants, who may be tracked and targeted by ICE
    • LGBTQ+ individuals, who could be outed or monitored without their knowledge
    • Abortion seekers, who could have location or communications data used against them
    • Protesters and workers, who rely on private conversations to organize safely

    The message this bill sends is clear: corporate profits come before your privacy.

    We Must Act Now

    S.B. 690 isn’t just a bad tech bill—it’s a dangerous precedent. It tells every corporation: Go ahead and spy on your consumers—we’ve got your back.

    Californians deserve better.

    If you live in California, now is the time to call your lawmakers and demand they vote NO on the Corporate Cover-Up Act.

    TAKE ACTION

    Spread the word, amplify the message, and help stop this attack on privacy before it becomes law.

    Rindala Alajaji

    FBI Warning on IoT Devices: How to Tell If You Are Impacted

    2 weeks 3 days ago

    On June 5th, the FBI released a PSA titled “Home Internet Connected Devices Facilitate Criminal Activity.” This PSA largely references devices impacted by the latest generation of BADBOX malware (as named by HUMAN’s Satori Threat Intelligence and Research team) that EFF researchers also encountered primarily on Android TV set-top boxes. However, the malware has impacted tablets, digital projectors, aftermarket vehicle infotainment units, picture frames, and other types of IoT devices. 

    One goal of this malware is to create a network proxy on the devices of unsuspecting buyers, potentially making them hubs for various criminal activities and putting the owners of these devices at risk from authorities. This malware is particularly insidious, coming pre-installed out of the box from major online retailers such as Amazon and AliExpress. If you search “Android TV Box” on Amazon right now, many of the same models that have been impacted are still being sold by sellers of opaque origin. The continued sale of these devices even led us to write an open letter to the FTC, urging them to take action on resellers.

    The FBI listed some indicators of compromise (IoCs) in the PSA for consumers to tell if they were impacted. But the average person isn’t running network detection infrastructure in their homes, and cannot hope to understand what IoCs can be used to determine if their devices generate “unexplained or suspicious Internet traffic.” Here, we will attempt to help give more comprehensive background information about these IoCs. If you find any of these on devices you own, then we encourage you to follow through by contacting the FBI's Internet Crime Complaint Center (IC3) at www.ic3.gov.

    The FBI lists these IoCs:

    • The presence of suspicious marketplaces where apps are downloaded.
    • Requiring Google Play Protect settings to be disabled.
    • Generic TV streaming devices advertised as unlocked or capable of accessing free content.
    • IoT devices advertised from unrecognizable brands.
    • Android devices that are not Play Protect certified.
    • Unexplained or suspicious Internet traffic.

    The following adds context to the above, as well as some additional IoCs we have seen in our research.

    Play Protect Certified

    “Android devices that are not Play Protect certified” refers to any device brand or partner not listed here: https://www.android.com/certified/partners/. Google subjects devices to compatibility and security tests in their criteria for inclusion in the Play Protect program, though the mentioned list’s criteria are not made completely transparent outside of Google. But this list does change, as we saw with the tablet brand we researched being de-listed. This encompasses “devices advertised from unrecognizable brands.” The list includes international brands and partners as well.

    Outdated Operating Systems

    Another issue we saw was really outdated Android versions. For reference, Android 16 just started rolling out, while Android 9-12 appeared to be the most common versions routinely used on these devices. This could be a result of “copied homework” from previous legitimate Android builds, and these devices often come with their own update software that can present a problem on its own, delivering second-stage payloads for device infection in addition to what it is downloading and updating on the device.

    You can check which version of Android you have by going to Settings and searching “Android version”.
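
    If you can connect the device to a computer, the same details (plus the model name discussed below) can also be pulled over adb. A rough sketch, assuming the Android platform tools are installed and USB debugging is enabled on the device:

```python
# Query a connected Android device over adb for its OS version, patch level, and model.
import subprocess

def getprop(prop: str) -> str:
    # `adb shell getprop <name>` reads a system property from the connected device.
    result = subprocess.run(
        ["adb", "shell", "getprop", prop],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print("Android version:", getprop("ro.build.version.release"))
    print("Security patch: ", getprop("ro.build.version.security_patch"))
    print("Model:          ", getprop("ro.product.model"))
```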

    Android App Marketplaces

    We’ve previously argued how the availability of different app marketplaces leads to greater consumer choice, where users can choose alternatives even more secure than the Google Play Store. While this is true, the FBI’s warning about suspicious marketplaces is also prudent. Avoiding “downloading apps from unofficial marketplaces advertising free streaming content” is sound (if somewhat vague) advice for set-top boxes, yet this recommendation comes without further guidelines on how to identify which marketplaces might be suspicious for other Android IoT platforms. Best practice is to investigate any app stores used on Android devices separately, but to be aware that if a suspicious Android device is purchased, it can contain preloaded app stores that mimic the functionality of legitimate ones but also contain unwanted or malicious code.

    Models Listed from the Badbox Report

    We also recommend looking up device names and models that were listed in the BADBOX 2.0 report. We investigated the T95 models along with other independent researchers that initially found this malware present. A lot of model names could be grouped in families with the same letters but different numbers. These operations are iterating fast, but the naming conventions are often lazy in this respect. If you're not sure what model you own, you can usually find it listed on a sticker somewhere on the device. If that fails, you may be able to find it by pulling up the original receipt or looking through your order history.

    A Note from Satori Researchers:

    “Below is a list of device models known to be targeted by the threat actors. Not all devices of a given model are necessarily infected, but Satori researchers are confident that infections are present on some devices of the below device models:”

    List of Potentially Impacted Models

    Broader Picture: The Digital Divide

    Unfortunately, the only way to be sure that an Android device from an unknown brand is safe is not to buy it in the first place. Though initiatives like the U.S. Cyber Trust Mark are welcome developments intended to encourage demand-side trust in vetted products, recent shake ups in federal regulatory bodies means the future of this assurance mark is unknown. This means those who face budget constraints and have trouble affording top-tier digital products for streaming content or other connected purposes may rely on cheaper imitation products that are rife with not only vulnerabilities, but even come out-of-the-box preloaded with malware. This puts these people disproportionately at legal risk when these devices are used to provide the buyers’ home internet connection as a proxy for nefarious or illegal purposes.

    Cybersecurity and trust that the products we buy won’t be used against us is essential: not just for those that can afford name-brand digital devices, but for everyone. While we welcome the IoCs that the FBI has listed in its PSA, more must be done to protect consumers from a myriad of dangers that their devices expose them to.

    Alexis Hancock