Ten Years of The Foilies

1 month 1 week ago
A look back at the games governments played to avoid transparency

In the year 2015, we witnessed the launch of OpenAI, a debate over the color of a dress going viral, and a Supreme Court decision that same-sex couples have the right to get married. It was also the year that the Electronic Frontier Foundation (EFF) first published The Foilies, an annual report that hands out tongue-in-cheek "awards" to government agencies and officials that respond outrageously when a member of the public tries to access public records through the Freedom of Information Act (FOIA) or similar laws.

A lot has changed over the last decade, but one thing that hasn't is the steady flow of attempts by authorities to avoid their legal and ethical obligations to be open and accountable. Sometimes these failures are intentional, but just as often they are due to incompetence or straight-up half-assedness.

Over the years, EFF has teamed up with MuckRock to document and ridicule these FOIA fails and transparency trip-ups. And through a partnership with AAN Publishers, we have named and shamed the culprits in weekly newspapers and on indie news sites across the United States in celebration of Sunshine Week, an annual event raising awareness of the role access to public records plays in a democracy.

This year, we reflect on the most absurd and frustrating winners from the last 10 years as we prepare for the next decade, which may be even more terrible for government transparency.

The Most Infuriating FOIA Fee: U.S. Department of Defense (2016 Winner)

Assessing huge fee estimates is one way agencies discourage FOIA requesters.

Under FOIA, federal agencies are able to charge "reasonable" fees for producing copies of records. But sometimes agencies fabricate enormous price tags to pressure the requester to drop the query.

In 2015, Martin Peck asked the U.S. Department of Defense (DOD) to disclose the number of "HotPlug" devices (tools used to preserve data on seized computers) it had purchased. The DOD claimed it would cost $660 million and 15 million labor hours (over 1,712 years), because its document system wasn't searchable by keyword, and staff would have to comb through 30 million contracts by hand.

Runners-up: 

City of Seattle (2019 Winner): City officials quoted a member of the public $33 million to produce the metadata for every email sent in 2017, but ultimately reduced the fee to $40.

Rochester (Michigan) Community Schools District (2023 Winner): A group of parents critical of the district's remote-learning plan requested records to see if the district was spying on their social media. One parent was told they would have to cough up $18,641,345 for the records, because the district would have to sift through every email. 

Willacy County (Texas) Sheriff's Office (2016 Winner): When the Houston Chronicle asked for crime data, the sheriff sent them an itemized invoice that included $98.40 worth of Wite-Out (the equivalent of 55 bottles) to redact 1,016 pages of records.

The Most Ridiculous Redaction: Federal Bureau of Investigation (2015 Winner)

Ain't no party like a REDACTED FBI party!

Brad Heath, who in 2014 was a reporter at USA Today, got a tip that a shady figure had possibly attended an FBI retirement party. So he filed a request for the guest list and pictures taken at the event. In response, the FBI sent a series of surreal photos of the attendees hugging, toasting, and posing awkwardly, but all with polygonal redactions covering their faces like some sort of mutant Minecraft family reunion.

Runner-up:

U.S. Southern Command (2023 Winner): Investigative journalist Jason Leopold obtained scans of paintings by detainees at Guantanamo Bay, which were heavily redacted under the claim that the art would disclose law enforcement information that could "reasonably be expected to risk circumvention of the law."

The Most Reprehensible Reprisal Against a Requester: White Castle, Louisiana (2017 Winner)

WBRZ Reporter Chris Nakamoto was cuffed for trying to obtain records in White Castle, Louisiana. Credit: WBRZ-TV

Chris Nakamoto, at the time a reporter for WBRZ, filed a public records request to probe the White Castle mayor's salary. But when he went down to check on some of the missing records, he was handcuffed, placed in a holding cell, and charged with the crime of "remaining after being forbidden." He was summoned to appear before the "Mayor's Court" in a judicial proceeding presided over by none other than the same mayor he was investigating. The charges were dropped two months later.

Runners-up:

Jack White (2015 Winner): One of the rare non-government Foilies winners, the White Stripes guitarist verbally abused University of Oklahoma student journalists and announced he wouldn't play at the school anymore. The reason? The student newspaper, OU Daily, obtained and published White's contract for a campus performance, which included his no-longer-secret guacamole recipe, a bowl of which was demanded in his rider.

Richlands, Virginia (2024 Winner): Resident Laura Mollo used public records laws to investigate problems with the 911 system and, in response, experienced intense harassment from the city and its contractors, including the police pulling her over and the city appointing a special prosecutor to investigate her. On separate occasions, Mollo says she even found her mailbox filled with spaghetti and manure.

Worst Federal Agency of the Decade: Federal Bureau of Investigation 

Bashing the FBI has come back into vogue among certain partisan circles in recent years, but we've been slamming the feds long before it was trendy.

The agency received eight Foilies over the last decade, more than any other entity, but the FBI's hostility towards FOIA goes back much further. In 2021, the Cato Institute uncovered records showing that, since at least 1989, the FBI had been spying on the National Security Archive, a non-profit watchdog that keeps an eye on the intelligence community. The FBI’s methods included both physical and electronic surveillance, and the records show the FBI specifically cited the organization's "tenacity" in using FOIA.

Cato's Patrick G. Eddington reported it took 11 months for the FBI to produce those records, but that's actually relatively fast for the agency. We highlighted a 2009 FOIA request that the FBI took 12 years to fulfill: Bruce Alpert of the Times-Picayune had asked for records regarding the corruption case of U.S. Rep. William Jefferson, but by the time he received the 84 pages in 2021, the reporter had retired. Similarly, when George Washington University professor and documentary filmmaker Nina Seavey asked the FBI for records related to surveillance of antiwar and civil rights activists, the FBI told her it would take 17 years to provide the documents. When the agency launched an online system for accepting FOIA requests, it somehow made the process even more difficult.

The FBI was at its worst when it was attempting to use non-disclosure agreements to keep local law enforcement agencies from responding to public records requests regarding cell phone surveillance technologies called cell-site simulators, or "stingrays." The agency even went so far as to threaten that releasing technical information to media organizations could be punished by up to 20 years in prison and a $1 million fine, claiming disclosure would violate the Arms Export Control Act.

But you don't have to take our word for it: Even Micky Dolenz of The Monkees had to sue the FBI to get records on how agents collected intelligence on the 1960s band.

Worst Local Jurisdiction of the Decade: Chicago, Illinois

Some agencies, like the city of Chicago, treat FOIA requests like a plague.

Over the last decade, The Foilies have called out officials at all levels of government and in every part of the country (and even in several other countries), but time and time again, one city keeps demonstrating special antagonism to the idea of freedom of information: the Windy City.

In fact, the most ridiculous justification for ignoring transparency obligations we ever encountered was proudly championed by now-former Mayor Lori Lightfoot during the COVID-19 lockdown in April 2020. She offered a bogus choice to Chicagoans: the city could either process public records requests or provide pandemic response, falsely claiming that answering these requests would pull epidemiologists off the job. According to the Chicago Tribune, she implied that responding to FOIA requests would result in people having to "bury another grandmother." She even invoked the story of Passover, claiming that the "angel of death is right here in our midst every single day" as a reason to suspend FOIA deadlines.

If we drill down on Chicago, there's one department that seems to take particular pleasure in screwing the public: the Chicago Police Department (CPD). In 2021, CPD was nominated so many times (for withholding records of search warrants, a list of names of police officers, and body-worn camera footage from a botched raid) that we just threw up our hands and named it "The Hardest Department to FOIA" of the year.

In one particularly nasty case, CPD mistakenly raided the home of an innocent woman, handcuffing her while she was naked and refusing to let her get dressed. The woman later filed a FOIA request for the body-worn camera footage and had to sue to get it. But CPD didn't leave it there: the city's lawyers tried to block a TV station from airing the video and then sought sanctions against the woman's attorney.

If you thought these were some doozies, check out The Foilies 2025 to read the beginning of a new decade's worth of FOIA horror stories.

Dave Maass

Right to Repair: A Prime Example of Grassroots Advocacy

1 month 1 week ago

Good old-fashioned grassroots advocacy is one of the best tools we have right now for making a positive change for our civil liberties online. When we unite toward a shared goal, anything is possible, and the right to repair movement is a prime example of this.

In July of last year, EFF and many other organizations celebrated Repair Independence Day to commemorate both California and Minnesota enacting strong right to repair laws. And, very recently, it was reported that all 50 states have introduced right to repair legislation. Now, not every state has passed laws yet, but this signals an important milestone for the movement—we want to fix the stuff we own!

And this movement has had an impact beyond specific right to repair legislation. In a similar vein, just a few months ago, the U.S. Copyright Office ruled that users can legally repair commercial food preparation equipment without breaking copyright law. Device manufacturers themselves are also starting to feel the pressure and are creating repair-friendly programs.

Years of hard work have made it possible for us to celebrate the right-to-repair movement time and time again. It's a group effort—folks like iFixit, who provide repair guides and repairability scores; the Repair Association, who’ve helped lead the movement in state legislatures; and of course, people like you who contact local representatives, are the reason this movement has gained so much momentum.

Fix Copyright! Also available in kids' sizes.

But there's still work that can be done. If you’re itching to fix your devices, you can read up on what your state’s repair laws mean for you. You can educate your friends, family, and colleagues when they’re frustrated at how expensive device repair is. And, of course, you can show your support for the right to repair movement with EFF’s latest member t-shirt. 

We live in a very tumultuous time, so it’s important to celebrate the victories, and it’s equally important to remember that your voice and support can bring about positive change that you want to see.  

Christian Romero

EFF Sends Letter to the Senate Judiciary Committee Opposing the STOP CSAM Act

1 month 1 week ago

On Monday, March 10, EFF sent a letter to the Senate Judiciary Committee opposing the Strengthening Transparency and Obligation to Protect Children Suffering from Abuse and Mistreatment Act (STOP CSAM Act) ahead of a committee hearing on the bill. 

EFF opposed the original and amended versions of this bill in the previous Congress, and we are concerned to see the Committee moving to consider the same flawed ideas in the current Congress. 

At its core, STOP CSAM endangers encrypted messages – jeopardizing the privacy, security, and free speech of every American and fundamentally altering our online communications. In the digital world, end-to-end encryption is our best chance to maintain both individual and national security. Particularly in the wake of the major breach of telecom systems in October 2024 from Salt Typhoon, a sophisticated Chinese-government backed hacking group, legislators should focus on bolstering encryption, not weakening it. In fact, in response to this breach, a top U.S. cybersecurity chief said “encryption is your friend.”  

Given its significant problems and potential vast impact on internet users, we urge the Committee to reject this bill.

Maddie Daly

RightsCon Community Calls for Urgent Release of Alaa Abd El-Fattah

1 month 1 week ago

Last month saw digital rights organizations and social justice groups head to Taiwan for this year's RightsCon conference on human rights in the digital age. During the conference, one prominent message was spoken loud and clear: Alaa Abd El-Fattah must be immediately released from illegal detention in Egypt.

"As Alaa’s mother, I thank you for your solidarity and ask you to not to give up until Alaa is out of prison."

During the RightsCon opening ceremony, Access Now’s Executive Director, Alejandro Mayoral Baños, affirmed the urgency of Alaa’s situation in detention and called for Alaa’s freedom. The RightsCon community was also addressed by Alaa’s mother, mathematician Laila Soueif, who has been on hunger strike in London for 158 days. In a video highlighting Alaa’s work with digital rights and his role in this community, she stated: “As Alaa’s mother, I thank you for your solidarity and ask you not to give up until Alaa is out of prison.” Laila was admitted to hospital the next day with dangerously low blood sugar, blood pressure, and sodium levels.

RightsCon participants gather in solidarity with the #FreeAlaa campaign

The calls to #FreeAlaa and save Laila were again reaffirmed during the closing ceremony in a keynote by Sara Alsherif, Migrant Digital Justice Programme Manager at Open Rights Group and close friend of Alaa. Referencing Alaa’s early work as a digital activist, Alsherif said: “He understood that the fight for digital rights is at the core of the struggle for human rights and democracy.” She closed by reminding the hundreds-strong audience that “Alaa could be any one of us … Please do for him what you would want us to do for you if you were in his position.”

During RightsCon, with Laila still in hospital, calls for UK Prime Minister Starmer to get on the phone with Egyptian President Sisi reached a fever pitch, and on 28 February, one day after the closing ceremony, the UK government issued a press release affirming that Alaa’s case had been discussed, with Starmer pressing for Alaa’s freedom. 

Alaa should have been released on September 29, after serving a five-year sentence for sharing a Facebook post about a death in police custody, but Egyptian authorities have continued his imprisonment in contravention of the country’s own Criminal Procedure Code. British consular officials are prevented from visiting him in prison because the Egyptian government refuses to recognise Alaa’s British citizenship.

Laila Soueif has been on hunger strike for more than five months while she and the rest of Alaa’s family have worked in concert with various advocacy groups to engage the British government in securing Alaa’s release. On December 12, she also started protesting daily outside the Foreign Office and has since been joined by numerous MPs and public figures. Laila remains in hospital, but following Starmer’s call with Sisi she agreed to take glucose, stating that she is ready to end her hunger strike if progress is made.

Laila Soueif and family meeting with UK Prime Minister Keir Starmer

As of March 6, Laila has moved to a partial hunger strike of 300 calories per day citing “hope that Alaa’s case might move.” However, the family has learned that Alaa himself began a hunger strike on March 1 in prison after hearing that his mother had been hospitalized. Laila has said that without fast movement on Alaa’s case she will return to a total hunger strike. Alaa’s sister Sanaa, who was previously jailed by the regime on bogus charges, visited Alaa on March 8.

If you’re based in the UK, we encourage you to write to your MP to urgently advocate for Alaa’s release (external link): https://freealaa.net/message-mp 

Supporters everywhere can share Alaa’s plight and Laila’s story on social media using the hashtags #FreeAlaa and #SaveLaila. Additionally, the campaign’s website (external link) offers additional actions, including purchasing Alaa’s book, and participating in a one-day solidarity hunger strike. You can also sign up for campaign updates by e-mail.

Every second counts, and time is running out. Keir Starmer and the British government must do everything they can to ensure Alaa’s immediate and unconditional release.

Jillian C. York

First Porn, Now Skin Cream? ‘Age Verification’ Bills Are Out of Control

1 month 1 week ago

I’m old enough to remember when age verification bills were pitched as a way to ‘save the kids from porn’ and shield them from other vague dangers lurking in the digital world (like…“the transgender”). We have long cautioned about the dangers of these laws, and pointed out why they are likely to fail. While they may be well-intentioned, the growing proliferation of age verification schemes poses serious risks to all of our digital freedoms.

Fast forward a few years, and these laws have morphed into something else entirely—unfortunately, something we expected. What started as a misguided attempt to protect minors from "explicit" content online has spiraled into a tangled mess of privacy-invasive surveillance schemes affecting skincare products, dating apps, and even diet pills, threatening everyone’s right to privacy.

Age Verification Laws: A Backdoor to Surveillance

Age verification laws do far more than ‘protect children online’—they require the  creation of a system that collects vast amounts of personal information from everyone. Instead of making the internet safer for children, these laws force all users—regardless of age—to verify their identity just to access basic content or products. This isn't a mistake; it's a deliberate strategy. As one sponsor of age verification bills in Alabama admitted, "I knew the tough nut to crack that social media would be, so I said, ‘Take first one bite at it through pornography, and the next session, once that got passed, then go and work on the social media issue.’” In other words, they recognized that targeting porn would be an easier way to introduce these age verification systems, knowing it would be more emotionally charged and easier to pass. This is just the beginning of a broader surveillance system disguised as a safety measure.

This alarming trend is already clear, with the growing creep of age verification bills filed in the first month of the 2025-2026 state legislative session. Consider these three bills: 

  1. Skincare: AB-728 in California
    Age verification just hit the skincare aisle! California’s AB-728 mandates age verification for anyone purchasing skin care products or cosmetics that contain certain chemicals like Vitamin A or alpha hydroxy acids. On the surface, this may seem harmless—who doesn't want to ensure that minors are safe from harmful chemicals? But the real issue lies in the invasive surveillance it mandates. A person simply trying to buy face cream could be forced to submit sensitive personal data through “an age verification system,” creating a system of constant tracking and data collection for a product that should be innocuous.
  2. Dating Apps: A3323 in New York
    Match made in heaven? Not without your government-issued ID. New York’s A3323 bill mandates that online dating services verify users’ age, identity, and location before allowing access to their platforms. The bill's sweeping requirements introduce serious privacy concerns for all users. By forcing users to provide sensitive personal information—such as government-issued IDs and location data—the bill creates significant risks that this data could be misused, sold, or exposed through data breaches. 
  3. Dieting products: SB 5622 in Washington State
    Shed your privacy before you shed those pounds! Washington State’s SB 5622 takes aim at diet pills and dietary supplements by restricting their sale to anyone under 18. While the bill’s intention is to protect young people from potentially harmful dieting products, it misses the mark by overlooking the massive privacy risks associated with the age verification process for everyone else. To enforce this restriction, the bill requires intrusive personal data collection for purchasing diet pills in person or online, opening the door for sensitive information to be exploited.
The Problem with Age Verification: No Solution Is Safe

Let’s be clear: no method of age verification is both privacy-protective and entirely accurate. The methods also don’t fall on a neat spectrum of “more safe” to “less safe.” Instead, every form of age verification is better described as “dangerous in one way” or “dangerous in a different way.” These systems are inherently flawed, and none come without trade-offs. Additionally, they continue to burden adults who just want to browse the internet or buy everyday items without being subjected to mass data collection.

For example, when an age verification system requires users to submit government-issued identification or a scan of their face, it collects a staggering amount of sensitive, often immutable, biometric or other personal data—jeopardizing internet users’ privacy and security. Systems that rely on credit card information, phone numbers, or other third-party material  similarly amass troves of personal data. This data is just as susceptible to being misused as any other data, creating vulnerabilities for identity theft and data breaches. These issues are not just theoretical: age verification companies can be—and already have been—hacked. These are real, ongoing concerns for anyone who values their privacy. 

We must push back against age verification bills that create surveillance systems and undermine our civil liberties, and we must be clear-eyed about the dangers posed by these expanding age verification laws. While the intent to protect children makes sense, the unintended consequence is a massive erosion of privacy, security, and free expression online for everyone. Rather than focusing on restrictive age verification systems, lawmakers should explore better, less invasive ways to protect everyone online—methods that don’t place the entire burden of risk on individuals or threaten their fundamental rights. 

EFF will continue to advocate for digital privacy, security, and free expression. We urge legislators to prioritize solutions that uphold these essential values, ensuring that the internet remains a space for learning, connecting, and creating—without the constant threat of surveillance or censorship. Whether you’re buying a face cream, swiping on a dating app, or browsing for a bottle of diet pills, age verification laws undermine that vision, and we must do better.

Rindala Alajaji

Simple Phish Bait: EFF Is Not Investigating Your Albion Online Forum Account

1 month 1 week ago

We recently learned that users of the Albion Online gaming forum have received direct messages purporting to be from us. That message, which leverages the fear of an account ban, is a phishing attempt.

If you’re an Albion Online forum user and receive a message that claims to be from “the EFF team,” don’t click the link, and be sure to use the in-forum reporting tool to report the message and the user who sent it to the moderators.

A screenshot of the message shared by a user of the forums.

The message itself has some of the usual hallmarks of a phishing attempt, including tactics like creating a sense of fear that your account may be suspended, leveraging the name of a reputable group, and further raising your heart rate with claims that the message needs a quick response. The goal appears to be to get users to download a PDF file designed to deliver malware. That PDF even uses our branding and typefaces (mostly) correctly.

The Hunt team published a full walkthrough of this malware and what it does. The PDF is a trojan (malware disguised as a non-malicious file or program) with an embedded script that calls out to an attacker server. The attacker server then sends a “stage 2” payload that installs itself onto the user’s device. The attack structure was identified as the Pyramid C2 framework, and in this case the malware targets the Windows operating system. It takes a variety of actions, like writing and modifying files on the victim’s physical drive. But the most worrisome discovery is that it appears to connect the user’s device to a malicious botnet and has potential access to the “VaultSvc” service, which securely stores user credentials such as usernames and passwords.

File-based IoCs:
act-7wbq8j3peso0qc1.pages[.]dev/819768.pdf
Hash: 4674dec0a36530544d79aa9815f2ce6545781466ac21ae3563e77755307e0020
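
If you want to check whether a file you already downloaded matches this indicator, you can compare its SHA-256 digest against the hash above. Here is a minimal Python sketch; the local filename is hypothetical:

```python
import hashlib

# SHA-256 of the malicious PDF, from the IoC listing above
KNOWN_BAD = "4674dec0a36530544d79aa9815f2ce6545781466ac21ae3563e77755307e0020"

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large files don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# "suspicious.pdf" is a hypothetical local filename for illustration
if sha256_of("suspicious.pdf") == KNOWN_BAD:
    print("Match: this is the known-malicious PDF. Do not open it.")
else:
    print("No match for this IoC (which is not proof the file is safe).")
```

A match confirms this particular sample; attackers often rotate payloads, so a non-match is not a clean bill of health.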

This incident is a good reminder that often, the best ways to avoid malware and phishing attempts are the same: avoid clicking strange links in unsolicited emails, keep your computer’s software updated, and always scrutinize messages claiming to come from computer support or fraud detection. If a message seems suspect, try to verify its authenticity through other channels—in this case, poking around on the forum and asking other users before clicking on anything. If you ever absolutely must open a file, do so in an online document reader, like Google Drive, or try sending the link through a tool like VirusTotal, but try to avoid opening suspicious files whenever possible.

For more information to help protect yourself, check out our guides for protecting yourself from malware and avoiding phishing attacks.

Alexis Hancock

Trump Calls On Congress To Pass The “Take It Down” Act—So He Can Censor His Critics

1 month 1 week ago

We've opposed the Take It Down Act because it could be easily manipulated to take down lawful content that powerful people simply don't like. Last night, President Trump demonstrated he has a similar view on the bill. He wants to sign the bill into law, then use it to remove content about — him. And he won't be the only powerful person to do so. 

Here’s what Trump said to a joint session of Congress:    

The Senate just passed the Take It Down Act…. Once it passes the House, I look forward to signing that bill into law. And I’m going to use that bill for myself too if you don’t mind, because nobody gets treated worse than I do online, nobody. 

[Embedded video player. Privacy info: this embed serves content from archive.org.]


Video courtesy C-SPAN.

The Take It Down Act is an overbroad, poorly drafted bill that would create a powerful system to pressure removal of internet posts, with essentially no safeguards. While the bill is meant to address a serious problem—the distribution of non-consensual intimate imagery (NCII)—the notice-and-takedown system it creates is an open invitation for powerful people to pressure websites into removing content they dislike. There are no penalties for applying very broad, or even farcical definitions of what constitutes NCII, and then demanding that it be removed.  

take action

TELL CONGRESS: "Take It Down" Has No real Safeguards  

This Bill Will Punish Critics, and The President Wants It Passed Right Now 

Congress should believe Trump when he says he would use the Take It Down Act simply because he's "treated badly," despite the fact that this is not the intention of the bill. There is nothing in the law, as written, to stop anyone—especially those with significant resources—from misusing the notice-and-takedown system to remove speech that criticizes them or that they disagree with.  

Trump has frequently targeted platforms carrying content, and speakers, of entirely legal speech that is critical of him, both as an elected official and as a private citizen. He has filed frivolous lawsuits against media defendants that threaten to silence critics and draw scarce resources away from important reporting work.

Now that Trump has issued a call to action for the bill in his remarks, there is a possibility that House Republicans will fast-track the bill into a spending package as soon as next week. Non-consensual intimate imagery is a serious problem that deserves serious consideration, not a hastily drafted, overbroad bill that sweeps in legal, protected speech.

How The Take It Down Act Could Silence People 

A few weeks ago, a "deepfake" video of President Trump and Elon Musk was displayed across various monitors in the Housing and Urban Development office. The video was subsequently shared on various platforms. While most people wouldn't consider this video, which displayed faked footage of Trump kissing Elon Musk's feet, "nonconsensual intimate imagery," the takedown provision of the bill applies to an “identifiable individual” engaged in “sexually explicit conduct.” This definition leaves much room for interpretation, and nudity or graphic displays are not necessarily required.  

Moreover, there are no penalties whatsoever to dissuade a requester from simply insisting that content is NCII. Apps and websites only have 48 hours to remove content once they receive a request, which means they won’t be able to verify claims. Especially if the requester is an elected official with the power to start an investigation or prosecution, what website would stand up to such a request?  

The House Must Not Pass This Dangerous Bill 

Congress should focus on enforcing and improving the many existing civil and criminal laws that address NCII, rather than opting for a broad takedown regime that is bound to be abused. Take It Down would likely lead to the use of often-inaccurate automated filters that are infamous for flagging legal content, from fair-use commentary to news reporting. It will threaten encrypted services, which may respond by abandoning encryption entirely in order to be able to monitor content—turning private conversations into surveilled spaces.   

Protecting victims of NCII is a legitimate goal. But good intentions alone are not enough to make good policy. Tell your Member of Congress to oppose censorship and to oppose H.R.633. 

take action

Tell the house to stop "Take it down" 

Jason Kelley

Meet Rayhunter: A New Open Source Tool from EFF to Detect Cellular Spying

1 month 2 weeks ago

At EFF we spend a lot of time thinking about Street Level Surveillance technologies—the technologies used by police and other authorities to spy on you while you are going about your everyday life—such as automated license plate readers, facial recognition, surveillance camera networks, and cell-site simulators (CSS). Rayhunter is a new open source tool we’ve created that runs off an affordable mobile hotspot that we hope empowers everyone, regardless of technical skill, to help search out CSS around the world. 

CSS (also known as Stingrays or IMSI catchers) are devices that masquerade as legitimate cell-phone towers, tricking phones within a certain radius into connecting to the device rather than a tower.

CSS operate by conducting a general search of all cell phones within the device's radius. Law enforcement use CSS to pinpoint the location of phones, often with greater accuracy than other techniques such as cell-site location information (CSLI), and without needing to involve the phone company at all. CSS can also log International Mobile Subscriber Identifiers (IMSI numbers) unique to each SIM card, or hardware serial numbers (IMEIs), of all of the mobile devices within a given area. Some CSS may have advanced features allowing law enforcement to intercept communications in some circumstances.

What makes CSS especially interesting, as compared to other street-level surveillance, is that so little is known about how commercial CSS work. We don’t fully know what capabilities they have or what exploits in the phone network they take advantage of to ensnare and spy on our phones, though we have some ideas.

We also know very little about how cell-site simulators are deployed in the US and around the world. There is no strong evidence either way about whether CSS are commonly being used in the US to spy on First Amendment protected activities such as protests, communication between journalists and sources, or religious gatherings. There is some evidence—much of it circumstantial—that CSS have been used in the US to spy on protests. There is also evidence that CSS are used somewhat extensively by US law enforcement, spyware operators, and scammers. We know even less about how CSS are being used in other countries, though it's a safe bet that in other countries CSS are also used by law enforcement.

Many of these gaps in our knowledge are due to a lack of solid, empirical evidence about the function and usage of these devices. Police departments are resistant to releasing logs of their use, even when such logs are kept. The companies that manufacture CSS are unwilling to divulge details of how they work.

Until now, to detect the presence of CSS, researchers and users have had to either rely on Android apps on rooted phones, or sophisticated and expensive software-defined radio rigs. Previous solutions have also focused on attacks on the legacy 2G cellular network, which is almost entirely shut down in the U.S. Seeking to learn from and improve on previous techniques for CSS detection we have developed a better, cheaper alternative that works natively on the modern 4G network.

Introducing Rayhunter

To fill these gaps in our knowledge, we have created an open source project called Rayhunter.1 It is developed to run on an Orbic mobile hotspot (Amazon, Ebay) which is available for $20 or less at the time of this writing. We have tried to make Rayhunter as easy as possible to install and use, regardless of your level of technical knowledge. We hope that activists, journalists, and others will run these devices all over the world and help us collect data about the usage and capabilities of cell-site simulators (please see our legal disclaimer.) 

Rayhunter works by intercepting, storing, and analyzing the control traffic (but not user traffic, such as web requests) between the mobile hotspot Rayhunter runs on and the cell tower to which it’s connected. Rayhunter analyzes the traffic in real-time and looks for suspicious events, which could include unusual requests like the base station (cell tower) trying to downgrade your connection to 2G which is vulnerable to further attacks, or the base station requesting your IMSI under suspicious circumstances. 
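
To make the idea concrete, here is a loosely illustrative Python sketch of this kind of heuristic. This is not Rayhunter’s actual code, and the message fields are invented for illustration; it only shows the shape of the analysis: parse each control-plane message, then flag known-suspicious patterns such as a 2G downgrade or an unexpected IMSI request.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ControlMessage:
    """Hypothetical stand-in for one parsed control-plane message."""
    msg_type: str                             # e.g. "connection_release", "identity_request"
    redirect_rat: Optional[str] = None        # radio tech a release redirects to
    identity_requested: Optional[str] = None  # e.g. "imsi", "tmsi"

def classify(msg: ControlMessage) -> Optional[str]:
    """Return a warning for a suspicious event, or None if it looks benign."""
    # Heuristic 1: the tower releases the connection and redirects the phone
    # to 2G (GERAN), a network with weak or absent authentication.
    if msg.msg_type == "connection_release" and msg.redirect_rat == "geran":
        return "Downgrade to 2G requested -- possible cell-site simulator"
    # Heuristic 2: the network asks for the permanent IMSI rather than a
    # temporary identifier, a classic IMSI-catcher behavior.
    if msg.msg_type == "identity_request" and msg.identity_requested == "imsi":
        return "Permanent IMSI requested under suspicious circumstances"
    return None

# Example: run the classifier over a stream of parsed messages
for m in [ControlMessage("connection_release", redirect_rat="geran"),
          ControlMessage("identity_request", identity_requested="tmsi")]:
    warning = classify(m)
    if warning:
        print(warning)
```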

Rayhunter notifies the user when something suspicious happens and makes it easy to access those logs for further review, allowing users to take appropriate action to protect themselves, such as turning off their phone and advising other people in the area to do the same. The user can also download the logs (in PCAP format) to send to an expert for further review. 
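
As a first pass before sending a capture to an expert, a general-purpose packet library can at least confirm what a downloaded log contains. A minimal sketch using scapy (the filename is hypothetical, and meaningful analysis of cellular control traffic will still require specialized dissectors):

```python
from scapy.all import rdpcap  # pip install scapy

packets = rdpcap("rayhunter-log.pcap")  # hypothetical filename
print(f"{len(packets)} packets in capture")
if len(packets) > 1:
    # Packet timestamps let you line the capture up with when the
    # warning appeared on the device.
    span = float(packets[-1].time - packets[0].time)
    print(f"Capture spans {span:.1f} seconds")
```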

The default Rayhunter user interface is very simple: a green (or blue in colorblind mode) line at the top of the screen lets the user know that Rayhunter is running and nothing suspicious has occurred. If that line turns red, it means that Rayhunter has logged a suspicious event. When that happens the user can connect to the device's WiFi access point and check a web interface to find out more information or download the logs. 


Rayhunter in action

Installing Rayhunter is relatively simple. After buying the necessary hardware, you’ll need to download the latest release package, unzip the file, plug the device into your computer, and then run an install script for either Mac or Linux (we do not support Windows as an installation platform at this time.)

We have a few different goals with this project. An overarching goal is to determine conclusively if CSS are used to surveil free expression such as protests or religious gatherings, and if so, how often it’s occurring. We’d like to collect empirical data (through network traffic captures, i.e. PCAPs) about what exploits CSS are actually using in the wild so the community of cellular security researchers can build better defenses. We also hope to get a clearer picture of the extent of CSS usage outside of the U.S., especially in countries that do not have legally enshrined free speech protections.

Once we have gathered this data, we hope we can help folks more accurately engage in threat modeling about the risks of cell-site simulators, and avoid the fear, uncertainty, and doubt that comes from a lack of knowledge. We hope that any data we do find will be useful to those who are fighting through legal process or legislative policy to rein in CSS use where they live. 

If you’re interested in running Rayhunter for yourself, pick up an Orbic hotspot (Amazon, Ebay), install Rayhunter, check out the project's Frequently Asked Questions, and help us collect data about how IMSI catchers operate! Together we can find out how cell site simulators are being used, and protect ourselves and our communities from this form of surveillance.

Legal disclaimer: Use Rayhunter at your own risk. We believe running this program does not currently violate any laws or regulations in the United States. However, we are not responsible for civil or criminal liability resulting from the use of this software. If you are located outside of the US, please consult with an attorney in your country to help you assess the legal risks of running this program.

  • 1. A note on the name: Rayhunter is named such because Stingray is a brand name for cell-site simulators which has become a common term for the technology. One of the only natural predators of the stingray in the wild is the orca, some of which hunt stingrays for pleasure using a technique called wavehunting. Because we like orcas, because we don’t like stingray technology (though the animals are great!), and because it was the only name not already trademarked, we chose Rayhunter.
Cooper Quintin

Ninth Circuit Correctly Rules That Dating App Isn’t Liable for Matching Users

1 month 2 weeks ago

The U.S. Court of Appeals for the Ninth Circuit correctly held that Grindr, a popular dating app, can’t be held responsible for matching users and enabling them to exchange messages that led to real-world harm. EFF and the Woodhull Freedom Foundation filed an amicus brief in the Ninth Circuit in support of Grindr.

Grindr and other dating apps are possible thanks to strong Section 230 immunity. Without this protection, dating apps—and other platforms that host user-generated content—would have more incentive to censor people online. While real-world harms do happen when people connect online, they can be directly redressed by holding the perpetrators accountable.

The case, Doe v. Grindr, was brought by a plaintiff who was 15 years old when he signed up for Grindr but claimed to be over 18 to use the app. He was matched with other users and exchanged messages with them. This led to four in-person meetings; three of the four adult men involved were later prosecuted and sentenced for rape.

The plaintiff brought various state law claims against Grindr centering around the idea that the app was defectively designed, enabling him to be matched with and to communicate with the adults. The plaintiff also brought a federal civil sex trafficking claim.

Grindr invoked Section 230, the federal statute that has ensured a free and open internet for nearly 30 years. Section 230(c)(1) specifically provides that online services are generally not responsible for “publishing” harmful user-generated content. Section 230 protects users’ online speech by protecting the intermediaries we all rely on to communicate via dating apps, social media, blogs, email, and other internet platforms.

The Ninth Circuit rightly affirmed the district court’s dismissal of all of the plaintiff’s claims. The court held that Section 230 bars nearly all of plaintiff’s claims (except the sex trafficking claim, which is exempted from Section 230). The court stated:

Each of Doe’s state law claims necessarily implicates Grindr’s role as a publisher of third-party content. The theory underpinning Doe’s claims for defective design, defective manufacturing, and negligence faults Grindr for facilitating communication among users for illegal activity….

The Ninth Circuit’s holding is important because many plaintiffs have tried in recent years to plead around Section 230 by framing their cases as seeking to hold internet platforms responsible for their own “defective designs,” rather than third-party content. Yet, a closer look at a plaintiff’s allegations often reveals that the plaintiff’s harm is indeed premised on third-party content—that’s true in this case, where the plaintiff exchanged messages with the adult men. As we argued in our brief:

Plaintiff’s claim here is based not on mere access to the app, but on the actions of a third party once John Doe logged in—messages exchanged between a third party and Doe, and ultimately, on unlawful acts occurring between them because of those communications.

Additionally, courts generally have concluded that an internet platform's features relating to how users can engage with the app, and how third-party content is displayed and organized, are also “publishing” activities protected by Section 230.

As for the federal civil sex trafficking claim, the Ninth Circuit held that the plaintiff’s allegations failed to meet the statutory requirements. The court stated:

Doe must plausibly allege that Grindr ‘knowingly’ sex trafficked a person by a list of specified means. But the [complaint] merely shows that Grindr provided a platform that facilitated sharing of messages between users.

While the facts of this case are no doubt difficult, the Ninth Circuit reached the correct conclusion. Our modern communications are mediated by private companies, and any weakening of Section 230 immunity for internet platforms would stifle everyone’s ability to communicate, as companies would be incentivized to engage in greater censorship of users to mitigate their legal exposure.

This does not leave victims without redress—they may seek to hold perpetrators responsible directly. Importantly in this case, three of the perpetrators were held criminally liable. And should facts show that an online service participated in criminal conduct, Section 230 would not block a federal prosecution. The court’s ruling demonstrates that Section 230 is working as Congress intended.

Sophia Cope

EFF In Conversation With Ron Deibert on Chasing Shadows

1 month 2 weeks ago

Join EFF's Cindy Cohn and Eva Galperin in conversation with Ron Deibert of the University of Toronto’s Citizen Lab, to discuss Ron’s latest book: Chasing Shadows: Cyber Espionage, Subversion and the Global Fight for Democracy. Chasing Shadows provides a front-row seat to a dark underworld of digital espionage, dark PR, and subversion, offering a gripping account of how the Citizen Lab, the world’s foremost digital watchdog, has uncovered dozens of cyber espionage cases and protects people in countries around the world. Called “essential reading” by Margaret Atwood, it’s a chilling reminder of the invisible invasions happening on smartphones and computers around the world.

LEARN MORE


When:

Monday, March 10, 2025 
7:00 pm - 9:00 pm (PT)

Where:

In-person:
City Lights Bookstore
261 Columbus Avenue
San Francisco, CA 94133

Virtual:
Zoom

About the Author:

Ronald J. Deibert is the founder and director of the Citizen Lab, a world-renowned digital security research center at the University of Toronto. The bestselling author of Reset: Reclaiming the Internet for Civil Society and Black Code: Surveillance, Privacy, and the Dark Side of the Internet, he has also written many landmark articles and reports on espionage operations that infiltrated government and NGO computer networks. His team’s exposés of the spyware that attacks journalists and anti-corruption advocates around the world have been featured in The New York Times, The Washington Post, Financial Times, and other media. Deibert has received multiple honors for his cutting-edge work, and in 2022 he was appointed an Officer of the Order of Canada—the country’s second-highest honor of merit.

Melissa Srago

Fresh Threats to Privacy Around the Globe | EFFector 37.2

1 month 2 weeks ago

EFF is here to keep you up-to-date with the latest news in the world of civil liberties and human rights online with our EFFector newsletter!

This edition of the newsletter covers Apple's recent decision to turn off Advanced Data Protection for users in the U.K., our how-to guide for limiting Meta's ability to collect and monetize your personal data, and our recent victory against the government's use of Section 702 to spy on Americans.

You can read the full newsletter here, and even get future editions directly to your inbox when you subscribe! Additionally, we've got an audio edition of EFFector on the Internet Archive, or you can view it by clicking the button below:

LISTEN ON YouTube

EFFECTOR 37.2 - Fresh Threats to Privacy Around the Globe

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Christian Romero

EFF to California's Supreme Court: Protect the Privacy of Internet Users' Communications

1 month 2 weeks ago

EFF asked the California Supreme Court not to weaken the Stored Communications Act, a 1986 federal law that restricts how providers can disclose the content of your communications to the government or private parties.

The law is built on the principle that you have a reasonable expectation of privacy that providers like Snap and Meta will not disclose your communications to third parties, even though the providers have access to those communications as they are stored on their systems. In an amicus brief, we urged the court to uphold these privacy protections, as courts have for nearly 40 years. EFF filed the brief along with the Center for Democracy & Technology and the Mozilla Corporation.

A lower court decision got it wrong. And we are urging the California Supreme Court to overrule that decision. If the lower court's ruling is affirmed, Meta, Snap, and other providers would be permitted to voluntarily disclose the content of their users' communications to any other corporation, the government, or any individual for any reason.

We previously helped successfully urge the California Supreme Court to hear this case. 

Mario Trujillo

Anti-Surveillance Mapmaker Refuses Flock Safety's Cease and Desist Demand

1 month 2 weeks ago

Flock Safety loves to crow about the thousands of local law enforcement agencies around the United States that have adopted its avian-themed automated license plate readers (ALPRs). But when a privacy activist launched a website to map out the exact locations of these pole-mounted devices, the company tried to clip his wings.  

The company sent DeFlock.me and its creator Will Freeman a cease-and-desist letter, claiming that the project dilutes its trademark. Suffice it to say, and to lean into ornithological wordplay, the letter is birdcage liner.  

Representing Freeman, EFF sent Flock Safety a letter rejecting the demand, pointing out that the grassroots project is well within its First Amendment rights.  

Flock Safety’s car-tracking cameras have been spreading across the United States like an invasive species, preying on public safety fears and gobbling up massive amounts of sensitive driver data. The technology not only tracks vehicles by their license plates, but also creates “fingerprints” of each vehicle, including the make, model, color and other distinguishing features. This is a mass surveillance technology that collects information on everyone, regardless of whether they are connected to a crime. It has been misused by police to spy on their ex-partners and could be used to target people engaged in First Amendment activities or seeking medical care.  

Through crowdsourcing and open-source research, DeFlock.me aims to “shine a light on the widespread use of ALPR technology, raise awareness about the threats it poses to personal privacy and civil liberties, and empower the public to take action.”  While EFF’s Atlas of Surveillance project has identified more than 1,700 agencies using ALPRs, DeFlock has mapped out more than 16,000 individual camera locations, more than a third of which are Flock Safety devices.  

Flock Safety is so integrated into law enforcement that it’s not uncommon to see law enforcement agencies actually promoting the company by name on their websites. The Sussex County Sheriff’s website in Virginia has only two items in its menu bar: Accident Reports and Flock Safety. The name “DeFlock,” EFF told the vendor, represents the project’s goal of “ending ALPR usage and Flock’s status as one of the most widely used ALPR providers.” It’s accurate, appropriate, effective, and most importantly, legally protected.

 We wrote:  

Your claims of dilution by blurring and/or tarnishment fail at the threshold, without even needing to address why dilution is unlikely. Federal anti-dilution law includes express carve-outs for any noncommercial use of a mark and for any use in connection with criticizing or commenting on the mark owner or its products. Mr. Freeman’s use of the name “DeFlock” is both.

Flock Safety’s cease-and-desist letter is just the latest in a long line of attempts to use bogus intellectual property claims to silence critics. Frequently, these claims have no legal basis and are designed to frighten under-resourced activists and advocacy groups with high-powered law firm letterheads. EFF is here to stand up against these trademark bullies, and in the case of Flock Safety, flip them the bird.

Dave Maass

EFF to UK PM Starmer: Call Sisi to Free Alaa and Save Laila

1 month 3 weeks ago

UK Prime Minister Keir Starmer made a public commitment on February 14 to Laila Soueif, the mother of Alaa Abd El Fattah, stating “I will do all that I can to secure the release of her son Alaa Abd el-Fattah and reunite him with his family.” While that commitment was welcomed by the family, it is imperative that it now be followed up with concrete action.

Laila has called on PM Starmer to speak directly to President Sisi of Egypt. Starmer has written to Sisi twice, in December and January, and his National Security Adviser, Jonathan Powell, discussed Alaa with Egyptian authorities in Cairo on January 2. UK authorities have not made public any further contact with Egypt since.

“all she wants is for [Alaa] to be free now that he served the full five year sentence, and after they stole 11 years of his and [his son] Khaled’s life.”

Laila, who has been on hunger strike since Alaa’s intended release date in September, was hospitalized on Monday night after her blood sugar dropped to worrying new levels. A letter published today from her NHS doctor states that there is now an immediate risk to her life, including further deterioration or death. Nevertheless, Laila remains steadfast in her commitment to refrain from eating until her son is freed.

In the words of Alaa’s sister Mona Seif: “all she wants is for [Alaa] to be free now that he served the full five year sentence, and after they stole 11 years of his and [his son] Khaled’s life.”

Alaa is a British citizen, and as such his government owes him more than mere lip service. The UK government can and must use every tactic available to them, including:

  • Changing travel advice on the Foreign Office’s website to reflect the fact that citizens arrested in Egypt cannot be guaranteed consular access
  • Convening a joint meeting of ministers and officials of the Foreign, Commonwealth and Development Office; Ministry of Defence; and Department of Business and Trade to discuss a unified strategy toward Alaa’s case
  • Summoning the Egyptian ambassador in London and restricting his access to Whitehall if Alaa is not released and returned to the UK
  • Announcing a moratorium on any governmental assistance or promotion of new Foreign Direct Investments into Egypt, as called for by 15 NGOs in November.

EFF once again calls on Prime Minister Starmer to pick up the phone and call Egyptian President Sisi to free Alaa and save Laila—before it’s too late.

Jillian C. York

The Senate Passed The TAKE IT DOWN Act, Threatening Free Expression and Due Process

1 month 3 weeks ago

Earlier this month, the Senate passed the TAKE IT DOWN Act (S. 146), by a voice vote. The bill is meant to speed up the removal of non-consensual intimate imagery, or NCII, including videos that imitate real people, a technology sometimes called “deepfakes.” 

Protecting victims of these heinous privacy invasions is a legitimate goal. But good intentions alone are not enough to make good policy. As currently drafted, the TAKE IT DOWN Act mandates a notice-and-takedown system that threatens free expression, user privacy, and due process, without addressing the problem it claims to solve. 

This misguided bill can still be stopped in the House of Representatives. Help us speak out against it now. 

take action

"Take It Down" Has No real Safeguards  

Before this vote, EFF, along with the Center for Democracy & Technology (CDT), Authors Guild, Demand Progress Action, Fight for the Future, Freedom of the Press Foundation, New America’s Open Technology Institute, Public Knowledge, Restore The Fourth, SIECUS: Sex Ed for Social Change, TechFreedom, and Woodhull Freedom Foundation, sent a letter to the Senate, asking them to change this legislation to protect legitimate speech that is not NCII. Changes are also needed to protect users who rely on encrypted services.

The letter explains that the bill’s “takedown” provision applies to a much broader category of content—potentially any images involving intimate or sexual content at all—than the narrower NCII definitions found elsewhere in the bill. The bill contains no protections against frivolous or bad-faith takedown requests. Lawful content—including satire, journalism, and political speech—could be wrongly censored. The legislation requires that apps and websites remove content within 48 hours, meaning that online service providers, particularly smaller ones, will have to comply so quickly to avoid legal risk that they won’t be able to verify claims.

This would likely lead to the use of often-inaccurate automated filters that are infamous for flagging legal content, from fair-use commentary to news reporting. Communications providers that offer users end-to-end encrypted messaging, meanwhile, may be served with notices they simply cannot comply with, given the fact that these providers cannot view the contents of messages on their platforms. Platforms may respond by abandoning encryption entirely in order to be able to monitor content—turning private conversations into surveilled spaces. 

Congress should focus on enforcing and improving the many existing civil and criminal laws that address NCII, rather than opting for a broad takedown regime that is bound to be abused. Tell your Member of Congress to oppose censorship and to oppose S. 146. 

take action

Tell the house to stop "Take it down" 




India McKinney

New Yorkers Deserve Stronger Health Data Protections Now—Governor Hochul Can Make It Happen

1 month 3 weeks ago

With the rise of digital surveillance, securing our health data is no longer just a privacy issue—it's a matter of personal safety. In the wake of the Supreme Court's reversal of Roe v. Wade and the growing restrictions on abortion and gender-affirming care, protecting our personal health data has never been more important. And in a world where nearly half of U.S. states have either banned or are on the brink of banning abortion, unfettered access to personal health data is an even more dangerous threat.

That’s why EFF joins the New York Civil Liberties Union (NYCLU) in urging Governor Hochul to sign the New York Health Information Privacy Act (A.2141/S.929). This legislation is a crucial step toward safeguarding the digital privacy of New Yorkers at a time when health data is increasingly vulnerable to misuse.

Why Health Data Privacy Matters

When individuals seek reproductive health care or gender-affirming care, they leave behind a digital trail. Whether through search histories, email exchanges, travel itineraries, or data from period-tracker apps and smartwatches, every click, every action, and every step is tracked, often with little or no consent. And this kind of data—however collected—has already been used to criminalize individuals who were simply seeking health care.

Unlike HIPAA, which regulates ‘covered entities’ (providers of treatment and payors/insurers that are part of the traditional health care system) and their ‘business associates,’ this bill would reach a broad range of ‘new’ entities, including data brokers, tech companies, and others in the digital ecosystem who can access and share this sensitive health information. The result is a growing web of entities collecting personal data, far beyond the scope of traditional health care providers.

For example, in some states, individuals have been investigated or even prosecuted based on their digital data, simply for obtaining abortion care. In a world where our health choices are increasingly monitored, the need for robust privacy protections is clearer than ever. The New York Health Information Privacy Act is the Empire State’s opportunity to lead the nation in protecting its residents.

What Does the Health Information Privacy Act Do?

At its core, the New York Health Information Privacy Act would provide vital protections for New Yorkers' electronic health data. Here’s what the bill does:

  • Prohibits the sale of health data: Health data is not a commodity to be bought and sold. This bill ensures that your most personal information is not used for profit by commercial entities without your consent.
  • Requires explicit consent: Before health data is processed, New Yorkers will need to provide clear, informed consent. The bill limits processing (storing, collecting, using) of personal data to “strictly necessary” purposes only, minimizing unnecessary collection.
  • Data deletion rights: Health data will be deleted by default after 60 days, unless the individual requests otherwise. This empowers individuals to control their data, ensuring that unnecessary information doesn’t linger. (See the sketch after this list.)
  • Non-discrimination protections: Individuals will not face discrimination or higher costs for exercising their privacy rights. No one should be penalized for wanting to protect their personal information.
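To make the 60-day deletion default concrete, here is a minimal sketch of the kind of retention job a covered company might run. The table and column names are our own hypothetical illustration; the bill mandates the outcome, not any particular implementation:

    # Hypothetical "delete by default after 60 days" retention job (Python stdlib).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE health_records (
            id INTEGER PRIMARY KEY,
            data TEXT,
            collected_at TIMESTAMP,
            retain_requested INTEGER DEFAULT 0  -- 1 only if the individual opted in
        )
    """)

    # Example record collected 90 days ago; it is past the retention window.
    conn.execute("INSERT INTO health_records (data, collected_at) "
                 "VALUES ('visit log', datetime('now', '-90 days'))")

    # Run periodically: purge records older than 60 days unless the individual
    # has affirmatively asked that they be kept.
    conn.execute("""
        DELETE FROM health_records
        WHERE retain_requested = 0
          AND collected_at < datetime('now', '-60 days')
    """)
    conn.commit()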

Why New York Needs This Bill Now

The need for these protections is urgent. As digital surveillance expands, so does the risk of personal health data being used against individuals. In a time when personal health decisions are under attack, it’s crucial that New Yorkers have control over their health information. By signing this bill, Governor Hochul would ensure that out-of-state actors cannot easily access New Yorkers’ health data without due process, protecting individuals from legal actions in states that criminalize reproductive and gender-affirming care.

However, this bill still faces a critical shortcoming—the absence of a private right of action (PRA). Without it, individuals cannot directly sue companies for privacy violations, leaving them vulnerable. Accountability would fall solely on the Attorney General, who would need the resources to quickly and consistently enforce the new law. Nonetheless, the Attorney General’s role will now be critical in ensuring this bill is upheld, and they must remain steadfast in implementing these protections effectively.

Governor Hochul: Sign A.2141/S.929

The importance of this legislation cannot be overstated—it is about protecting people from potential legal actions related to their health care decisions. By signing this bill, Governor Hochul would solidify New York’s position as a leader in health data privacy and take a firm stand against the misuse of personal information.

New York has the power to protect its residents and set a strong precedent for privacy protections across the nation. Let’s ensure that personal health data remains in the hands of those who own it—the individuals themselves.

Governor Hochul: This is your chance to make a difference. Let’s take action now to protect what matters most—our health, our data, and our rights. Sign A.2141/S.929 today.

Rindala Alajaji

The Judicial Conference Should Continue to Liberally Allow Amicus Briefs, a Critical Advocacy Tool

1 month 3 weeks ago

EFF does a lot of things, including impact litigation, legislative lobbying, and technology development, all to fight for your civil liberties in the digital age. With litigation, we directly represent clients and also file “amicus” briefs in court cases.

An amicus brief, also called a “friend-of-the-court” brief, is a brief we file when we don’t represent one of the parties on either side of the “v.” Instead, we provide the court with an outside perspective on the case, either on behalf of ourselves or other groups, that can help the court make its decision.

Amicus briefs are a core part of EFF’s legal work. Over the years, courts at all levels have extensively engaged with and cited our amicus briefs, showing that they value our thoughtful legal analysis, technical expertise, and public interest mission.

Unfortunately, the Judicial Conference—the body that oversees the federal court system—has proposed changes to the rule governing amicus briefs (Federal Rule of Appellate Procedure 29) that would make it harder to file such briefs in the circuit courts.

EFF filed comments with the Judicial Conference sharing our thoughts on the proposed rule changes (a total of 407 comments were filed). Two proposed changes are particularly concerning.

First, amicus briefs would be “disfavored” if they address issues “already mentioned” by the parties. This language is extremely broad and may significantly reduce the number and types of amicus briefs that are filed in the circuit courts. As we said in our comments:

We often file amicus briefs that expand upon issues only briefly addressed by the parties, either because of lack of space given other issues that party counsel must also address on appeal, or a lack of deep expertise by party counsel on a specific issue that EFF specializes in. We see this often in criminal appeals when we file in support of the defendant. We also file briefs that address issues mentioned by the parties but additionally explain how the relevant technology works or how the outcome of the case will impact certain other constituencies.

We then shared examples of EFF amicus briefs that may have been disfavored if the “already mentioned” standard had been in effect, even though our briefs provided help to the courts. Just two examples are:

  • In United States v. Cano, we filed an amicus brief that addressed the core issue of the case—whether the border search exception to the Fourth Amendment’s warrant requirement applies to cell phones. We provided a detailed explanation of the privacy interests in digital devices, and a thorough Fourth Amendment analysis regarding why a warrant should be required to search digital devices at the border. The Ninth Circuit extensively engaged with our brief to vacate the defendant’s conviction.
  • In NetChoice, LLC v. Attorney General of Florida, a First Amendment case about social media content moderation (later considered by the Supreme Court), we filed an amicus brief that elaborated on points only briefly made by the parties about the prevalence of specialized social media services reflecting a wide variety of subject matter focuses and political viewpoints. Several of the examples we provided were used by the 11th Circuit in its opinion.

Second, the proposed rules would require an amicus organization (or person) to file a motion with the court and get formal approval before filing an amicus brief. This would replace the current rule, which also allows an amicus brief to be filed if both parties in the case consent (which is commonly what happens).

As we stated in our comments: “Eliminating the consent provision will dramatically increase motion practice for circuit courts, putting administrative burdens on the courts as well as amicus brief filers.” We also argued that this proposed change “is not in the interests of justice.” We wrote:

Having to write and file a separate motion may disincentivize certain parties from filing amicus briefs, especially people or organizations with limited resources … The circuits should … facilitate the participation by diverse organizations at all stages of the appellate process—where appeals often do not just deal with discrete disputes between parties, but instead deal with matters of constitutional and statutory interpretation that will impact the rights of Americans for years to come.

Amicus briefs are a crucial part of EFF’s work in defending your digital rights, and our briefs provide valuable arguments and expertise that help the courts make informed decisions. That’s why we are calling on the Judicial Conference to reject these changes and preserve our ability to file amicus briefs in the federal appellate courts that make a difference.

Your support is essential in ensuring that we can continue to fight for your digital rights—in and out of court.

DONATE TO EFF

Sophia Cope

Cornered by the UK’s Demand for an Encryption Backdoor, Apple Turns Off Its Strongest Security Setting

1 month 3 weeks ago

Today, in response to the U.K.’s demands for a backdoor, Apple has stopped offering users in the U.K. Advanced Data Protection (ADP), an optional feature in iCloud that turns on end-to-end encryption for files, backups, and more.

Had Apple complied with the U.K.’s original demands, they would have been required to create a backdoor not just for users in the U.K., but for people around the world, regardless of where they were or what citizenship they had. As we’ve said time and time again, any backdoor built for the government puts everyone at greater risk of hacking, identity theft, and fraud.

This blanket, worldwide demand put Apple in an untenable position. Apple has long claimed it wouldn’t create a backdoor, and in filings to the U.K. government in 2023, the company specifically raised the possibility of disabling features like Advanced Data Protection as an alternative. Apple's decision to disable the feature for U.K. users could well be the only reasonable response at this point, but it leaves those people at the mercy of bad actors and deprives them of a key privacy-preserving technology. The U.K. has chosen to make its own citizens less safe and less free.

Although the U.K. Investigatory Powers Act purportedly authorizes orders to compromise security like the one issued to Apple, policymakers in the United States are not entirely powerless. As Senator Ron Wyden and Representative Andy Biggs noted in a letter to the Director of National Intelligence (DNI) last week, the U.S. and U.K. are close allies with numerous cybersecurity- and intelligence-sharing agreements, but “the U.S. government must not permit what is effectively a foreign cyberattack waged through political means.” They pose a number of key questions, including whether the CLOUD Act—an “encryption-neutral” law that gives the U.K. special status to request data directly from U.S. companies—actually allows the sort of demands at issue here. We urge Congress and others in the U.S. to pressure the U.K. to back down and to provide support for U.S. companies to resist backdoor demands, regardless of which government issues them.

Meanwhile, Apple is not the only company operating in the U.K. that offers end-to-end encrypted backups. For example, you can optionally enable end-to-end encryption for chat backups in WhatsApp or for backups from Samsung Galaxy phones. Many cloud backup services offer similar protections, as do countless chat apps, like Signal, that secure conversations. We do not know if other companies have been approached with similar requests, but we hope they stand their ground as well.

If you’re in the U.K. and have not enabled ADP, you can no longer do so. If you have already enabled it, Apple will provide guidance soon about what to do. This change will not affect the end-to-end encryption used in Apple Messages, nor does it alter other data that’s end-to-end encrypted by default, like passwords and health data. But iCloud backups have long been a loophole for law enforcement to gain access to data otherwise not available to them on iPhones with device encryption enabled, including the contents of messages they’ve stored in the backup. Advanced Data Protection is an optional feature to close that loophole. Without it, U.K. users’ files and device backups will be accessible to Apple, and thus shareable with law enforcement.
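Conceptually, what ADP changes is who holds the backup encryption key. The sketch below, our own illustration using Python's cryptography package (and not Apple's actual design, which relies on hardware-backed key hierarchies), shows why a provider holding only ciphertext has nothing readable to share:

    # Client-side (end-to-end) backup encryption sketch (pip install cryptography).
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # With ADP-style protection, the key is generated and kept on the user's device.
    device_key = AESGCM.generate_key(bit_length=256)

    backup = b"photos, notes, message history..."
    nonce = os.urandom(12)
    ciphertext = AESGCM(device_key).encrypt(nonce, backup, None)

    # Only nonce + ciphertext is uploaded. Without the device-held key, the
    # provider cannot decrypt the backup, and so cannot hand over its contents.
    upload = nonce + ciphertext

    # Restoring on a trusted device that holds the key:
    assert AESGCM(device_key).decrypt(upload[:12], upload[12:], None) == backup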

We appreciate Apple’s stance against the U.K. government’s request. Weakening encryption violates fundamental rights. We all have the right to private spaces, and any backdoor would annihilate that right. The U.K. must back down from these overreaching demands and allow Apple—and others—to provide the option for end-to-end encrypted cloud storage.

Thorin Klosowski

EFF at RightsCon 2025

1 month 3 weeks ago

EFF is delighted to be attending RightsCon again—this year hosted in Taipei, Taiwan, from 24 to 27 February.

RightsCon provides an opportunity for human rights experts, technologists, activists, and government representatives to discuss pressing human rights challenges and their potential solutions. 

Many EFFers are heading to Taipei and will be actively participating in this year's event. Several members will lead sessions, speak on panels, and be available for networking.

Our delegation includes:

  • Alexis Hancock, Director of Engineering, Certbot
  • Babette Ngene, Public Interest Technology Director
  • Christoph Schmon, International Policy Director
  • Cindy Cohn, Executive Director
  • Daly Barnett, Senior Staff Technologist
  • David Greene, Senior Staff Attorney and Civil Liberties Director
  • Jillian York, Director of International Freedom of Expression
  • Karen Gullo, Senior Writer for Free Speech and Privacy
  • Paige Collings, Senior Speech and Privacy Activist
  • Svea Windwehr, Assistant Director of EU Policy
  • Veridiana Alimonti, Associate Director For Latin American Policy

We hope you’ll have the opportunity to connect with us during the conference, especially at the following sessions: 

Day 0 (Monday 24 February)

Mutual Support: Amplifying the Voices of Digital Rights Defenders in Taiwan and East Asia

09:00 - 12:30, Room 101C
Alexis Hancock, Director of Engineering, Certbot
Host institutions: Open Culture Foundation, Odditysay Labs, Citizen Congress Watch and FLAME

This event aims to present Taiwan and East Asia’s digital rights landscape, highlighting current challenges faced by digital rights defenders and fostering resonance with participants' experiences. Join to engage in insightful discussions, learn from Taiwan’s tech community and civil society, and contribute to the global dialogue on these pressing issues. The form to register is here.

Platform accountability in crisis? Global perspective on platform accountability frameworks

09:00 - 13:00, Room 202A
Christoph Schmon, International Policy Director; Babette Ngene, Public Interest Technology Director
Host institutions: Electronic Frontier Foundation (EFF), Access Now

This high-level panel will reflect on alarming developments in platforms' content policies and their enforcement, and discuss whether existing frameworks offer meaningful tools to counter the current platform accountability crisis. The starting point for the discussion will be Access Now's recently launched report, Platform accountability: a rule-of-law checklist for policymakers. The panel will be followed by a workshop dedicated to the “Draft Viennese Principles for Embedding Global Considerations into Human-Rights-Centred DSA enforcement”. Facilitated by the DSA Human Rights Alliance, the workshop will provide a safe space for civil society organisations to strategize and discuss necessary elements of a human rights based approach to platform governance.

Day 1 (Tuesday 25 February) 

Criminalization of Tor in Ola Bini’s case? Lessons for digital experts in the Global South

09:00 - 10:00 (online)
Veridiana Alimonti, Associate Director For Latin American Policy
Host institutions: Access Now, Centro de Autonomía Digital (CAD), Observation Mission of the Ola Bini Case, Tor Project

This session will analyze how the use of Tor is criminalized in Ola Bini's case and its implications for digital experts in other contexts of criminalization in the Global South, especially when they defend human rights online. Participants will work through various exercises to: (1) analyze, from a technical perspective, the judicial criminalization of Tor in Ola Bini's case, and (2) collectively analyze how its criminalization can (judicially) affect the work of digital experts from the Global South and discuss possible support alternatives.

The counter-surveillance supply chain

11:30 - 12:30, Room 201F
Babette Ngene, Public Interest Technology Director
Host institution: Meta

The fight against surveillance and other malicious cyber adversaries is a whole-of-society problem, requiring international norms and policies, in-depth research, platform-level defenses, investigation, and detection. This dialogue focuses on the critical first link in this counter-surveillance supply chain: the on-the-ground organizations around the world that are the first contact for local activists and organizations dealing with targeted malware. It will include an open discussion on how to improve the global response to surveillance and surveillance-for-hire actors through a lens of local contextual knowledge and information sharing.

Day 2 (Wednesday 26 February) 

Derecho a no ser objeto de decisiones automatizadas: desafíos y regulaciones en el sector judicial (The right not to be subject to automated decisions: challenges and regulations in the judicial sector)

16:30 - 17:30, Room 101C
Veridiana Alimonti, Associate Director For Latin American Policy
Host institutions: Hiperderecho, Red en Defensa de los Derechos Digitales, Instituto Panamericano de Derecho y Tecnología

This panel will analyze specific cases from Mexico, Peru, and Colombia to understand the ethical and legal implications of using artificial intelligence in the drafting and reasoning of judicial rulings. The dialogue seeks to address the right not to be subject to automated decisions, along with the ethical and legal implications of automating judicial rulings. Some tools can reproduce or amplify discriminatory stereotypes, in addition to posing possible violations of privacy and personal data protection rights, among others.

Prying Open the Age-Gate: Crafting a Human Rights Statement Against Age Verification Mandates

16:30 - 17:30, Room 401 
David Greene, Senior Staff Attorney and Civil Liberties Director
Host institutions: Electronic Frontier Foundation (EFF), Open Net, Software Freedom Law Centre, EDRi

The session will engage participants in considering the issues and seeding the drafting of a global human rights statement on online age verification mandates. After a background presentation on various global legal models to challenge such mandates (with the facilitators representing Asia, Africa, Europe, US), participants will be encouraged to submit written inputs (that will be read during the session) and contribute to a discussion. This will be the start of an ongoing effort that will extend beyond RightsCon with the goal of producing a human rights statement that will be shared and endorsed broadly. 

Day 3 (Thursday 27 February) 

Let's talk about the elephant in the room: transnational policing and human rights

10:15 - 11:15, Room 201B
Veridiana Alimonti, Associate Director For Latin American Policy
Host institutions: Citizen Lab, Munk School of Global Affairs & Public Policy, University of Toronto

This dialogue focuses on growing trends surrounding transnational policing, which pose new and evolving challenges to international human rights. The session will distill emergent themes, with focal points including expanding informal and formal transnational cooperation and data-sharing frameworks at regional and international levels, the evolving role of borders in the development of investigative methods, and the proliferation of new surveillance technologies including mercenary spyware and AI-driven systems. 

Queer over fear: cross-regional strategies and community resistance for LGBTQ+ activists fighting against digital authoritarianism

11:30 - 12:30, Room 101D
Paige Collings, Senior Speech and Privacy Activist
Host institutions: Access Now, Electronic Frontier Foundation (EFF), De|Center, Fight for the Future

The rise of the international anti-gender movement has seen authorities pass anti-LGBTQ+ legislation that has made the stakes of survival even higher for sexual and gender minorities. This workshop will bring together LGBTQ+ activists from Africa, the Middle East, Eastern Europe, Central Asia, and the United States to exchange ideas for advocacy and liberation from the policies, practices, and directives deployed by states to restrict LGBTQ+ rights, as well as how these actions impact LGBTQ+ people—online and offline—particularly with regard to online organizing, protest, and movement building.

Paige Collings

Utah Bill Aims to Make Officers Disclose AI-Written Police Reports

1 month 3 weeks ago

A bill headed to the Senate floor in Utah would require officers to disclose if a police report was written by generative AI. The bill, S.B. 180, requires a department to have a policy governing the use of AI. That policy must mandate that police reports created in whole or in part by generative AI carry a disclaimer noting that the report contains AI-generated content, and it requires officers to legally certify that the report was checked for accuracy.

S.B. 180 is, unfortunately, a necessary step in the right direction when it comes to regulating the rapid spread of police using generative AI to write their narrative reports for them. EFF will continue to monitor this bill in hopes that it will be part of a larger conversation about more robust regulations. In particular, Axon, the maker of Tasers and the seller of a shocking amount of police and surveillance tech, has recently rolled out a new product, Draft One, which uses body-worn camera audio to generate police reports. This product is spreading quickly, in part because it is integrated with other Axon products that are already omnipresent in U.S. society.

But it’s going to take more than a disclaimer to curb the potential harms of AI-generated police reports.

As we’ve previously cautioned, the public should be skeptical of AI’s ability to accurately process and distinguish between the wide range of languages, dialects, vernacular, idioms, and slang people use. As online content moderation has shown, software may have a passable ability to capture words, but it often struggles with content and meaning. In a tense setting such as a traffic stop, AI mistaking a metaphorical statement for a literal claim could fundamentally change the content of a police report.

Moreover, so-called artificial intelligence taking over consequential tasks and decision-making has the power to obscure human agency. Police officers who deliberately exaggerate or lie to shape the narrative available in body camera footage now have even more of a veneer of plausible deniability with AI-generated police reports. If police were to be caught in a lie concerning what’s in the report, an officer might be able to say that they did not lie: the AI simply did not capture what was happening in the chaotic video.

As this technology spreads without much transparency, oversight, or guardrails, we are likely to see more cities, counties, and states push back against its use. Out of fear that AI-generated reports would complicate and compromise cases in the criminal justice system, prosecutors in King County, Washington (which includes Seattle) have instructed officers not to use the technology for now.

The use of AI to write police reports is troubling in familiar ways, but also in new ones. Not only do we not yet know how widespread use of this technology will affect the criminal justice system, but because of how the product is designed, we may not know whether AI was used at all, even when staring directly at the police report in question. For that reason, it’s no surprise that lawmakers in Utah have introduced this bill to require some semblance of transparency. We will likely see similar regulations and restrictions in other states and local jurisdictions, and possibly even stronger ones.

Matthew Guariglia