No Postal Service Data Sharing to Deport Immigrants

3 months 1 week ago

The law enforcement arm of the U.S. Postal Service (USPS) recently joined a U.S. Department of Homeland Security (DHS) task force geared towards finding and deporting immigrants, according to a report from the Washington Post. Now, immigration officials want two sets of data from the U.S. Postal Inspection Service (USPIS). First, they want access to what the Post describes as the agency’s “broad surveillance systems, including Postal Service online account data, package- and mail-tracking information, credit card data and financial material and IP addresses.” Second, they want “mail covers,” meaning “photographs of the outside of envelopes and packages.”

Both proposals are alarming. The U.S. mail is a vital, constitutionally established system of communication and commerce that should not be distorted into infrastructure for dragnet surveillance. Immigrants have a human right to data privacy. And new systems of surveilling immigrants will inevitably expand to cover all people living in our country.

USPS Surveillance Systems

Mail is a necessary service in our society. Every day, the USPS delivers 318 million letters, hosts 7 million visitors to its website, issues 209,000 money orders, and processes 93,000 address changes.

To obtain these necessary services, we often must provide some of our personal data to the USPS. According to the USPS’s Privacy Policy: “The Postal Service collects personal information from you and from your transactions with us.” It states that this can include “your name, email, mailing and/or business address, phone numbers, or other information that identifies you personally.” If you visit the USPS’s website, they “automatically collect and store” your IP address, the date and time of your visit, the pages you visited, and more. Also: “We occasionally collect data about you from financial entities to perform verification services and from commercial sources.”

The USPS should not collect, store, disclose, or use our data except as strictly necessary to provide us the services we request. This is often called “data minimization.” Among other things, in the words of a seminal 1973 report from the U.S. government: “There must be a way for an individual to prevent information about him that was obtained for one purpose from being used or made available for other purposes without [their] consent.” Here, the USPS should not divert customer data, collected for the purpose of customer service, to the new purpose of surveilling immigrants.

The USPS is subject to the federal Privacy Act of 1974, a watershed anti-surveillance statute. As the USPS acknowledges: “the Privacy Act applies when we use your personal information to know who you are and to interact with you.” Among other things, the Act limits how an agency may disclose a person’s records. (Sound familiar? EFF has a Privacy Act lawsuit against DOGE and the Office of Personnel Management.) While the Act only applies to citizens and lawful permanent residents, that will include many people who send mail to or receive mail from other immigrants. If USPS were to assert the “law enforcement” exemption from the Privacy Act’s non-disclosure rule, the agency would need to show (among other things) a written request for “the particular portion desired” of “the record.” It is unclear how dragnet surveillance like that reported by the Washington Post could satisfy this standard.

USPS Mail Covers

From 2015 to 2023, according to another report from the Washington Post, the USPS received more than 60,000 requests for “mail cover” information from federal, state, and local law enforcement. Each request could include days or weeks of information about the cover of mail sent to or from a person or address. The USPS approved 97% of these requests, leading to postal inspectors recording the covers of more than 312,000 letters and packages.

In 2023, a bipartisan group of eight U.S. Senators (led by Sen. Wyden and Sen. Paul) raised the alarm about this mass surveillance program:

While mail covers do not reveal the contents of correspondence, they can reveal deeply personal information about Americans’ political leanings, religious beliefs, or causes they support. Consequently, surveillance of this information does not just threaten Americans’ privacy, but their First Amendment rights to freely associate with political or religious organizations or peacefully assemble without the government watching.

The Senators called on the USPIS to “only conduct mail covers when a federal judge has approved this surveillance,” except in emergencies. We agree that, at minimum, a warrant based on probable cause should be required.

The USPS operates other dragnet surveillance programs. Its Mail Isolation Control and Tracking Program photographs the exterior of all mail, and it has been used for criminal investigations. The USPIS’s Internet Covert Operations Program (iCOP) conducts social media surveillance to identify protest activity. (Sound familiar? EFF has a FOIA lawsuit about iCOP.)

This is just the latest of many recent attacks on the data privacy of immigrants. Now is the time to restrain USPIS’s dragnet surveillance programs—not to massively expand them to snoop on immigrants. If this scheme goes into effect, it is only a matter of time before such USPIS spying is expanded against other vulnerable groups, such as protesters or people crossing state lines for reproductive or gender affirming health care. And then against everyone.

Adam Schwartz

Nominations Open for 2025 EFF Awards!

3 months 1 week ago

2025 nominations have closed. Be sure to join us at the EFF Awards this year; more info can be found here: www.eff.org/EFFAwards

Nominations are now open for the 2025 EFF Awards! The nomination window will be open until Friday, May 23rd at 2:00 PM Pacific time. You could nominate the next winner today!

For over thirty years, the Electronic Frontier Foundation has presented awards to key leaders and organizations in the fight for freedom and innovation online. The EFF Awards celebrate the longtime stalwarts working on behalf of technology users, both in the public eye and behind the scenes. Past honorees include visionary activist Aaron Swartz, human rights and security researchers The Citizen Lab, media activist Malkia Devich-Cyril, media group 404 Media, and whistle-blower Chelsea Manning.

The internet is a necessity in modern life and a continually evolving tool for communication, creativity, and human potential. Together we carry—and must always steward—the movement to protect civil liberties and human rights online. Will you help us spotlight some of the latest and most impactful work towards a better digital future?

Remember, nominations close on May 23rd at 2:00 PM Pacific time!

GO TO NOMINATION PAGE

Nominate your favorite digital rights Heroes now!

After you nominate your favorite contenders, we hope you will consider joining us on September 10 to celebrate the work of the 2025 winners. If you have any questions or if you'd like to receive updates about the event, please email events@eff.org.

The EFF Awards depend on the generous support of individuals and companies with passion for digital civil liberties. To learn about how you can sponsor the EFF Awards, please visit eff.org/thanks or contact tierney@eff.org for more information.


Melissa Srago

Beware the Bundle: Companies Are Banking on Becoming Your Police Department’s Favorite "Public Safety Technology” Vendor

3 months 1 week ago

When your local police department buys one piece of surveillance equipment, you can easily expect that the company that sold it will try to upsell them on additional tools and upgrades. 

At the end of the day, public safety vendors are tech companies, and their representatives are salespeople using all the tricks from the marketing playbook. But these companies aren't just after public money—they also want data. 

And each new bit of data that police collect contributes to a pool of information to which the company can attach other services: storage, data processing, cross-referencing tools, inter-agency networking, and AI analysis. The companies may even want the data to train their own AI model. The landscape of the police tech industry is changing, and companies that once specialized in a single technology (such as hardware products like automated license plate readers (ALPRs) or gunshot detection sensors) have developed new capabilities or bought up other tech companies and law enforcement data brokers—all in service of becoming the corporate giant that serves as a one-stop shop for police surveillance needs.

One of the most alarming trends in policing is that companies regularly push police to buy more than they need. Vendors pressure police departments to lock in a price now for a whole bundle of features and tools in the name of “cost savings,” often claiming that buying any of these tools à la carte will cost more than the package, which they warn will also be priced higher in the future. Market analysts have touted the benefits of creating “moats” between these surveillance ecosystems and any possible competitors. By making it harder to switch service providers due to integrated features, these companies can lock their cop customers into multi-year subscriptions and long-term dependence. 

Think your local police are just getting body-worn cameras (BWCs) to help with public trust or ALPRs to aid their hunt for stolen vehicles? Don’t assume that’s the end of it. If there’s already a relationship between a company and a department, that department is much more likely to get access to a free trial of whatever other device or software that company hopes the department will put on its shopping list. 

These vendors also regularly help police departments apply for grants and waivers, and provide other assistance to find funding, so that as soon as there’s money available for a public safety initiative, those funds can make their way directly to their business.

Companies like Axon have been particularly successful at using their relationships and leveraging their ability to combine equipment to receive “sole source” designations. Typically, government agencies must conduct a bidding process when buying a new product, be it toilet paper, computers, or vehicles. For a company to be designated a sole-source provider, it is supposed to provide a product that no other vendor can provide. If a company can get this designation, it can essentially eliminate any possible competition for particular government contracts. When Axon is under consideration as a vendor for equipment like BWCs, for which there are multiple possible other providers, it’s not uncommon to see a police department arguing for a sole-source procurement for Axon BWCs based on the company’s ability to directly connect their cameras to the Fusus system, another Axon product. 

Here are a few of the big players positioning themselves to collect your movements, analyze your actions, and make you—the taxpayer—bear the cost for the whole bundle of privacy invasions. 

Axon Enterprise's ‘Suite’

Axon expects to have yet another year of $2 billion-plus in revenue in 2025. The company first got its hooks into police departments through the Taser, the electric stun gun. Axon then plunged into the BWC market amidst Obama-era outrage at police brutality and the flood of grant money flowing from the federal government to local police departments for BWCs, which were widely promoted as a police accountability tool. Axon parlayed its relationships with hundreds of police departments and capture and storage of growing terabytes of police footage into a menu of new technological offerings. 

In its annual year-end securities filing, Axon told investors it was “building the public safety operating system of the future” through its suite of “cloud-hosted digital evidence management solutions, productivity and real-time operations software, body cameras, in-car cameras, TASER energy devices, robotic security and training solutions” to cater to agencies in the federal, corrections, justice, and security sectors.

Axon controls an estimated 85 percent of the police body-worn camera market. Its Evidence.com platform, once a trial add-on for BWC customers, is now also one of the biggest records management systems used by police. Its other tools and services include record management, video storage in the cloud, drones, connected private cameras, analysis tools, virtual reality training, and real-time crime centers. 

An image from the Quarter 4 2024 slide deck for investors, which describes different levels of the “Officer Safety Plan” (OSP) product package and highlights how 95% of Axon customers are tied to a subscription plan.

Axon has been adding AI to its repertoire, and it now features a whole “AI Era” bundle plan. One recent offering is Draft One, which connects to Axon’s body-worn cameras (BWCs) and uses AI to generate police reports based on the audio captured in the BWC footage. While use of the tool may start off as a free trial, Axon sees Draft One as another key product for capturing new customers, despite widespread skepticism of the accuracy of the reports, the inability to determine which reports have been drafted using the system, and the liability they could bring to prosecutions.

In 2024, Axon acquired a company called Fusus, a platform that combines the growing stores of data that police departments collect—notifications from gunshot detection and automated license plate reader (ALPR) systems; footage from BWCs, drones, public cameras, and sometimes private cameras; and dispatch information—to create “real-time crime centers.” The company now claims that Fusus is being used by more than 250 different policing agencies.

Fusus claims to bring the power of the real-time crime center to police departments of all sizes, which includes the ability to help police access and use live footage from both public and private cameras through an add-on service that requires a recurring subscription. It also claims to integrate nicely with surveillance tools from other providers. Recently, it has been cutting ties, most notably with Flock Safety, as it starts to envelop some of the options its frenemies had offered.

In the middle of April, Axon announced that it would begin offering fixed ALPR, a key feature of the Flock Safety catalogue, and an AI Assistant, which has been a core offering of Truleo, another Axon competitor.

Flock Safety's Bundles and FlockOS

Flock Safety is another major police technology company that has expanded its focus from one primary technology to a whole package of equipment and software services. 

Flock Safety started with ALPRs. These tools use a camera to read vehicle license plates, collecting the make, model, location, and other details which can be used for what Flock calls “Vehicle Fingerprinting.” The details are stored in a database that sometimes finds a match among a “hot list” provided by police officers, but otherwise just stores and shares data on how, where, and when everyone is driving and parking their vehicles. 

Founded in 2017, Flock Safety has been working to expand its camera-based offerings, and it now claims to have a presence in more than 5,000 jurisdictions around the country, including through law enforcement and neighborhood association customers. 

A list of FlockOS features proposed to the Brookhaven Police Department in Georgia.

Among its tools are now the drone-as-first-responder system, gunshot detection, and a software platform meant to combine all of them. Flock also sells an option for businesses to use ALPRs to "optimize" marketing efforts and for analyzing traffic patterns to segment their patrons. Flock Safety offers the ability to integrate private camera systems as well.

A price proposal for the FlockOS platform made to Palatine, IL.

Much of what Flock Safety does now comes together in its FlockOS system, which claims to bring together various surveillance feeds and facilitate real-time “situational awareness.”

Flock is optimistic about its future, recently opening a massive new manufacturing facility in Georgia.

Motorola Solutions' "Ecosystem"

When you think of Motorola, you may think of phones—but there’s a good chance that you missed the moment in 2011 when the phone side of the company, Motorola Mobility, split off from Motorola Solutions, which is now a big player in police surveillance.

On its website, Motorola Solutions claims that departments are better off using a whole list of equipment from the same ecosystem, boasting the tagline, “Technology that’s exponentially more powerful, together.” Motorola describes this as an "ecosystem of safety and security technologies" in its securities filings. In 2024, the company also reported $2 billion in sales, but unlike Axon, its customer base is not exclusively law enforcement and includes private entities like sports stadiums, schools, and hospitals.

Motorola’s technology includes 911 services, radio, BWCs, in-car cameras, ALPRs, drones, face recognition, crime mapping, and software that supposedly unifies it all. Notably, video can also come with artificial intelligence analysis, in some cases allowing law enforcement to search video and track individuals across cameras.

A screenshot from Motorola Solutions’ webpage on law enforcement technology.

In January 2019, Motorola Solutions acquired Vigilant Solutions, one of the big players in the ALPR market, as part of its takeover of Vaas International Holdings. Now the company (under the subsidiary DRN Data) claims to have billions of scans saved from police departments and private ALPR cameras around the country. Marketing language for its Vehicle Manager system highlights that “data is overwhelming,” because the amount of data being collected is “a lot.” It’s a similar claim made by other companies: Now that you’ve bought so many surveillance tools to collect so much data, you’re finding that it is too much data, so you now need more surveillance tools to organize and make sense of it.

SoundThinking's ‘SafetySmart Platform’

SoundThinking began as ShotSpotter, a so-called gunshot detection tool that uses microphones placed around a city to identify and locate sounds of gunshots. As news reports of the tool’s inaccuracy and criticisms have grown, the company has rebranded as SoundThinking, adding to its offerings ALPRs, case management, and weapons detection. The company is now marketing its SafetySmart platform, which claims to integrate different stores of data and apply AI analytics.

In 2024, SoundThinking laid out its whole scheme in its annual report, referring to it as the "cross-sell" component of their sales strategy. 

The "cross-sell" component of our strategy is designed to leverage our established relationships and understanding of the customer environs by introducing other capabilities on the SafetySmart platform that can solve other customer challenges. We are in the early stages of the upsell/cross-sell strategy, but it is promising - particularly around bundled sales such as ShotSpotter + ResourceRouter and CaseBuilder + CrimeTracer. Newport News, VA, Rocky Mount, NC, Reno, NV and others have embraced this strategy and recognized the value of utilizing multiple SafetySmart products to manage the entire life cycle of gun crime…. We will seek to drive more of this sales activity as it not only enhances our system's effectiveness but also deepens our penetration within existing customer relationships and is a proof point that our solutions are essential for creating comprehensive public safety outcomes. Importantly, this strategy also increases the average revenue per customer and makes our customer relationships even stickier.

Many of SoundThinking’s new tools rely on a push toward “data integration” and artificial intelligence. ALPRs can be integrated with ShotSpotter. ShotSpotter can be integrated with the CaseBuilder records management system, and CaseBuilder can be integrated with CrimeTracer. CrimeTracer, once known as COPLINK X, is a platform that SoundThinking describes as a “powerful law enforcement search engine and information platform that enables law enforcement to search data from agencies across the U.S.” EFF tracks this type of tool in the Atlas of Surveillance as a third-party investigative platform: software tools that combine open-source intelligence data, police records, and other data sources, including even those found on the dark web, to generate leads or conduct analyses. 

SoundThinking, like a lot of surveillance tech, can be costly for departments, but the company seems to see the value in fostering its existing police department relationships even if it’s not getting paid right now. In Baton Rouge, budget cuts recently resulted in the elimination of the $400,000 annual contract for ShotSpotter, but the city continues to use it.

"They have agreed to continue that service without accepting any money from us for now, while we look for possible other funding sources. It was a decision that it's extremely expensive and kind of cost-prohibitive to move the sensors to other parts of the city," Baton Rouge Police Department Chief Thomas Morse told a local news outlet, WBRZ.

Beware the Bundle

Government surveillance is big business. The companies that provide surveillance and police data tools know that it’s lucrative to cultivate police departments as loyal customers. They’re jockeying for monopolization of the state surveillance market that they’re helping to build. While they may be marketing public safety in their pitches for products, from ALPRs to records management to investigatory analysis to AI everything, these companies are mostly beholden to their shareholders and bottom lines. 

The next time you come across BWCs or another piece of tech on your city council’s agenda or police department’s budget, take a closer look to see what other strings and surveillance tools might be attached. You are not just looking at one line item on the sheet—it’s probably an ongoing subscription to a whole package of equipment designed to challenge your privacy, and no sort of discount makes that a price worth paying.

To learn more about what surveillance tools your local agencies are using, take a look at EFF’s Atlas of Surveillance and our Street-Level Surveillance Hub.

Beryl Lipton

Washington’s Right to Repair Bill Heads to the Governor

3 months 1 week ago

The right to repair just keeps on winning. Last week, thanks in part to messages from EFF supporters, the Washington legislature passed strong consumer electronics right-to-repair legislation through both the House and Senate. The bill affirms our right to repair by banning restrictions that keep people and local businesses from accessing the parts, manuals, and tools they need for cheaper, easier repairs. It joined another strong right-to-repair bill for wheelchairs, ensuring folks can access the parts and manuals they need to fix their mobility devices. Both measures now head to Gov. Bob Ferguson. If you’re in Washington State, please urge the governor to sign these important bills.

TAKE ACTION

Washington State has come close to passing strong right-to-repair legislation before, only to falter at the last moment. This year, thanks to the work of our friends at the U.S. Public Interest Research Group (USPIRG) and their affiliate Washington PIRG, a coalition of groups got the bill through the legislature by emphasizing that the right to repair is good for people, good for small business, and good for the environment. Given that the cost of new electronic devices is likely to increase, it’s also a pocketbook issue that more lawmakers should get behind.

This spring marked the first time that all 50 states have considered right-to-repair legislation. Seven states—California, Colorado, Massachusetts, Minnesota, Maine, New York, and Oregon—have right-to-repair laws to date. If you’re in Washington, urge Gov. Ferguson to sign both bills and make your state the eighth to join this elite club. Let’s keep this momentum going!

TAKE ACTION

Hayley Tsukayama

Ninth Circuit Hands Users A Big Win: Californians Can Sue Out-of-State Corporations That Violate State Privacy Laws

3 months 1 week ago

Simple common sense tells us that a corporation’s decision to operate in every state shouldn’t mean it can’t be sued in most of them. Sadly, U.S. law doesn’t always follow common sense. That’s why we were so pleased with a recent holding from the Ninth Circuit Court of Appeals. Setting a crucial precedent, the court held that consumers can sue national or multinational companies in the consumers’ home courts if those companies violate state data privacy laws.

The case, Briskin v. Shopify, stems from a California resident’s allegations that Shopify, a company that offers back-end support to e-commerce companies around the U.S. and the globe, installed tracking software on his devices without his knowledge or consent, and used it to secretly collect data about him. Shopify also allegedly tracked users’ browsing activities across multiple sites and compiled that information into comprehensive user profiles, complete with financial “risk scores” that companies could use to block users’ future purchases. The Ninth Circuit initially dismissed the lawsuit for lack of personal jurisdiction, ruling that Shopify did not have a close enough connection to California to be fairly sued there. Collecting data on Californians along with millions of other users was not enough; to be sued in California, Shopify had to do something to target Californians in particular.  

Represented by nonprofit Public Citizen, Briskin asked the court to rehear the case en banc (meaning, review by the full court rather than just a three-judge panel). The court agreed and invited further briefing. After that review, the court vacated the earlier holding, agreeing with the plaintiff (and EFF’s argument in a supporting amicus brief) that Shopify’s extensive collection of information from users in other states should not prevent California plaintiffs from having their day in court in their home state.   

The key issue was whether Shopify’s actions were “expressly aimed” at California. Shopify argued that it was “mere happenstance” that its conduct affected a consumer in California, arising from the consumer’s own choices. The Ninth Circuit rejected that theory, noting:

Pre-internet, there would be no doubt that the California courts would have specific personal jurisdiction over a third party who physically entered a Californian’s home by deceptive means to take personal information from the Californian’s files for its own commercial gain. Here, though Shopify’s entry into the state of California is by electronic means, its surreptitious interception of Briskin’s personal identifying information certainly is a relevant contact with the forum state.

The court further noted that the harm in California was not “mere happenstance” because, among other things, Shopify allegedly knew plaintiff's location either prior to or shortly after installing its initial tracking software on his device as well as those of other Californians.

Importantly, the court overruled earlier cases that had suggested that “express aiming” required the plaintiff to show that a company “targeted” California in particular. As the court recognized, such a requirement would have the

perverse effect of allowing a corporation to direct its activities toward all 50 states yet to escape specific personal jurisdiction in each of those states for claims arising from or relating to the relevant contacts in the forum state that injure that state’s residents.

Instead, the question is whether Shopify’s own conduct connected it to California in a meaningful way. The answer was a resounding yes, for multiple reasons:

Shopify knows about its California consumer base, conducts its regular business in California, contacts California residents, interacts with them as an intermediary for its merchants, installs its software onto their devices in California, and continues to track their activities.

In other words, a company can’t deliberately collect a bunch of information about a person in a given state, including where they are located, use that information for its own commercial purposes, and then claim it has little or no relationship with that state.

As states around the country seek to fill the gaps left by Congress’ failure to pass comprehensive data privacy legislation, this ruling helps ensure that those state laws will have real teeth. In an era of ever-increasing corporate surveillance, that’s a crucial win.

Corynne McSherry

Age Verification in the European Union: The Commission's Age Verification App

3 months 2 weeks ago

This is the second part of a three-part series about age verification in the European Union. In this blog post, we take a deep dive into the age verification app solicited by the European Commission, based on digital identities. Part one gives an overview of the political debate around age verification in the EU and part three explores measures to keep all users safe that do not require age checks. 

In part one of this series on age verification in the European Union, we gave an overview of the state of the debate in the EU and introduced an age verification app, or mini-wallet, that the European Commission has commissioned. In this post, we will take a more detailed look at the app, how it will work and what some of its shortcomings are.

According to the original tender and the app’s recently published specifications, the Commission is soliciting the creation of a mobile application that will act as a digital wallet by storing a proof of age to enable users to verify their ages and access age-restricted content.

After downloading the app, a user would request proof of their age. For this crucial step, the Commission foresees users relying on a variety of age verification methods, including national eID schemes, physical ID cards, linking the app to another app that contains information about a user’s age, like a banking app, or age assessment through third parties like banks or notaries. 

In the next step, the age verification app would generate a proof of age. Once the user accessed a website restricting content for certain age cohorts, the platform would request proof of the user’s age through the app. The app would then present the proof of age, allowing the online service to verify the age attestation, and the user would then access the age-restricted website or content in question. The goal is to build an app that is aligned and allows for integration with the architecture of the upcoming EU Digital Identity Wallet.

The user journey of the European Commission's age verification app
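The issue-present-verify flow described above can be sketched end to end. The following is a purely illustrative Python sketch, not the Commission's actual protocol: the function names are hypothetical, and a symmetric HMAC stands in for the real digital signatures an eID scheme would use (in practice the verifier would hold only a public verification key, never the issuer's signing key).

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the trusted attestation provider
# (e.g. a national eID scheme). Illustrative only: a real deployment
# would use asymmetric signatures, not a shared secret.
ISSUER_KEY = b"issuer-secret-demo-key"

def issue_age_proof(over_18: bool) -> dict:
    """Steps 1-2: the user requests a proof of age; the issuer returns a
    signed yes/no attestation rather than the full date of birth
    (data minimization)."""
    claim = {"over_18": over_18}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify_age_proof(proof: dict) -> bool:
    """Steps 3-4: the website receives the proof from the app and checks
    the issuer's signature before granting access to restricted content."""
    payload = json.dumps(proof["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, proof["tag"]) and proof["claim"]["over_18"]

proof = issue_age_proof(over_18=True)
print(verify_age_proof(proof))  # True
```

Even this minimal flow shows the data-minimization idea at stake: the website learns only a signed yes/no bit, not the user's date of birth or identity.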

Review of the Commission’s Specifications for an Age Verification Mini-ID Wallet 

According to the specifications for the app, interoperability, privacy, and security are key concerns for the Commission in designing the main requirements of the app. The Commission acknowledges that development of the app is far from finished but an iterative process, and that key areas require feedback from stakeholders across industry and civil society. 

The specifications consider important principles to ensure the security and privacy of users verifying their age through the app, including data minimization, unlinkability (to ensure that only the identifiers required for specific linkable transactions are disclosed), storage limitations, transparency and measures to secure user data and prevent the unauthorized interception of personal data. 

However, taking a closer look at the specifications, many of the mechanisms envisioned to protect users' privacy are not mandatory requirements, but optional. For example, the app should implement salted hashes and Zero Knowledge Proofs (ZKPs), but is not required to do so. Indeed, the app's specifications seem to rely heavily on ZKPs, while simultaneously acknowledging that no compatible ZKP solution is currently available. This warrants a closer inspection of what ZKPs are and why they may not be the final answer to protecting users' privacy in the context of age verification.
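As an aside on one of those optional mechanisms, here is a minimal Python sketch (all names illustrative) of why per-transaction salting matters for unlinkability. An unsalted hash of a stable identifier is identical everywhere it is computed, so two verifiers could link their records simply by comparing hashes; a fresh random salt per transaction breaks that correlation.

```python
import hashlib
import secrets

def unsalted_id(user_id: str) -> str:
    # The same user always hashes to the same value, so two verifiers
    # can link their records just by comparing hashes.
    return hashlib.sha256(user_id.encode()).hexdigest()

def salted_id(user_id: str, salt: bytes) -> str:
    # A fresh per-transaction salt yields a different value each time,
    # so verifiers cannot correlate transactions by hash alone.
    return hashlib.sha256(salt + user_id.encode()).hexdigest()

alice = "alice@example.com"

# Unsalted: identical across verifiers, hence linkable.
print(unsalted_id(alice) == unsalted_id(alice))  # True

# Salted with fresh randomness: different per transaction, hence unlinkable.
s1, s2 = secrets.token_bytes(16), secrets.token_bytes(16)
print(salted_id(alice, s1) == salted_id(alice, s2))  # False
```

This is only the textbook idea behind the term the specifications use; whether the final app applies salting this way, or at all, remains optional under the current draft.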

A Closer Look at Zero Knowledge Proofs

Zero Knowledge Proofs provide a cryptographic way to prove something about a piece of information, like your exact date of birth and age, without giving the information itself away. They can offer a "yes-or-no" claim (like above or below 18) to a verifier requiring a legal age threshold. Two properties of ZKPs are "soundness" and "zero knowledge." Soundness appeals to verifiers and to governments because it makes it hard for a prover to present forged information. Zero knowledge benefits the holder, because they don't have to share the explicit information, just the proof that said information exists. This is objectively more secure than uploading a picture of your ID to multiple sites or applications, but it still requires an initial ID upload process, as mentioned above, for activation.
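To make the "yes-or-no" disclosure pattern concrete, here is a toy Python sketch. It is not a real zero-knowledge proof: a hypothetical issuer, who does see the birthdate, emits an authenticated attestation carrying only the over-18 bit, and a shared HMAC key stands in for a real digital signature. All names are illustrative; an actual deployment would use a proper ZKP scheme and asymmetric cryptography.

```python
import hashlib
import hmac
import json
from datetime import date

ISSUER_KEY = b"demo-issuer-key"  # stand-in for a real signing key

def issue_age_attestation(birthdate: date, today: date) -> dict:
    # The issuer sees the birthdate, but the attestation it emits
    # carries only the yes/no claim, not the date itself.
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day))
    claim = {"over_18": age >= 18}
    tag = hmac.new(ISSUER_KEY, json.dumps(claim).encode(),
                   hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify_attestation(att: dict) -> bool:
    # The verifier checks the claim's authenticity; it never learns
    # the birthdate or the exact age, only the threshold answer.
    expected = hmac.new(ISSUER_KEY, json.dumps(att["claim"]).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["tag"]) and att["claim"]["over_18"]

att = issue_age_attestation(date(2000, 5, 1), date(2025, 5, 1))
print(verify_attestation(att))  # prints: True (verifier learns only the yes/no claim)
```

Note what this toy version shares with a real ZKP deployment and what it lacks: the verifier learns only a threshold bit (the "zero knowledge" side), but forgery resistance here depends on a key the verifier also holds, whereas a genuine scheme provides soundness without any shared secret.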

This scheme makes several questionable assumptions. First, that frequent use of ZKPs will avoid privacy concerns, and second, that verifiers won't combine this data with existing information, such as account data, profiles, or interests, for other purposes, such as advertising. The European Commission plans to test these assumptions with extremely sensitive data: government-issued IDs. Though ZKPs are a better approach, this is a brand new system affecting millions of people, who will be asked to provide an age proof with potentially higher frequency than ever before. This rolls the dice on the resiliency of these privacy measures over time. Furthermore, not all ZKP systems are the same, and while there is research about their use on mobile devices, this rush to implementation before the research matures puts users at risk.

Who Can Ask for Proof of Your Age?

Regulation of verifiers (the service providers asking for age attestations), and of what they can ask for, is just as important, to limit a potential flood of verifiers that didn't previously need age verification. This is especially true for non-Know-Your-Customer (KYC) cases, in which service providers are not required to perform due diligence on their users. Equally important are rules that determine the consequences when verifiers violate those regulations. Until recently, the eIDAS framework, whose technical implementation is still being negotiated, required registration certificates for verifiers across all EU member states. By forcing verifiers to register the data categories they intend to ask for, issues like illegal data requests were supposed to be mitigated. But now this requirement has been rolled back again, and the Commission's planned mini-AV wallet will not require it in the beginning. Users will be asked to prove how old they are without the restraint on verifiers that protects against request abuse.

Without verifier accountability, or at least industry-level data categories with a determined scope, users are being asked to enter into an imbalanced relationship. An earlier mock-up gave some hope for empowered selective disclosure, where a user could toggle the disclosure of discrete pieces of information on and off at the time of the verifier's request. It would be more proactive to provide that setting to the holder in their wallet settings, before a request is made by a relying party.

Privacy tech is offered in this system as a concession to users forced to share information even more frequently, rather than as an additional way to bring equity to existing interactions with those who hold power by mediating access to information, loans, jobs, and public benefits. Words mean things, and ZKPs are not the solution, but a part of one. Most ZKP systems are more focused on making proof and verification time more efficient than they are concerned with privacy itself. The results of the latest research on digital credentials are more privacy-oriented ways to share information. But at this scale, we will need regulation and added measures against aggressive verification to complete the promise of better privacy for eID use.

Who Will Have Access to the Mini-ID Wallet, and Who Will Be Left Out?

Beyond its technical specifications, the proposed app raises a number of accessibility and participation issues. At its heart, the mini-ID wallet will rely on the verification of a user's age through a proof of age. According to the tender, the wallet should support four methods for issuing and proving a user's age.

Different age verification methods foreseen by the app

The first option is national eID schemes, an obvious choice: many Member States are currently working on (or have already notified) national eID schemes in the context of eIDAS, Europe's eID framework. The goal is to allow the mini-ID wallet to integrate with the eIDAS node operated by the European Commission to verify a user's age. But although many Member States are working on national eID schemes, uptake of eIDs has so far been sluggish, and it's questionable whether an EU-wide rollout of eIDs will be successful.

But even if an EU-wide rollout were achievable, many will not be able to participate. Those who are not in possession of ID cards, passports, residence permits, or documents like birth certificates will not be able to obtain an eID and will be at risk of losing access to knowledge, information, and services. This is especially relevant for already marginalized groups, like refugees or unhoused people, who may lose access to critical resources. Many children and teenagers will also be unable to participate in eID schemes. There are no EU-wide rules on when children need to have government-issued IDs, and while some countries, like Germany, mandate that every citizen above the age of 16 possess an ID, others, like Sweden, don't require their citizens to have an ID or passport. In most EU Member States, the minimum age at which children can apply for an ID without parental consent is 18. So even where children and teenagers may have a legal option to get an ID, their parents might withhold consent, making it impossible for a child to verify their age in order to access information or services online.

The second option is so-called smartcards: physical eID cards such as national ID cards, e-passports, or other trustworthy physical eID cards. The same limitations as for eIDs apply. Additionally, the Commission's tender suggests the mini-ID wallet will rely on biometric recognition software to compare a user to the physical ID card they are using to verify their age. This raises a host of questions regarding the processing and storing of sensitive biometric data. A recent study by the National Institute of Standards and Technology compared different age estimation algorithms based on biometric data and found that certain ethnicities are still underrepresented in training data sets, exacerbating the risk of age estimation systems discriminating against people of color. The study also reports higher error rates for female faces compared to male faces, and that overall accuracy is strongly influenced by factors people have no control over, including "sex, image quality, region-of-birth, age itself, and interactions between those factors." Other studies on the accuracy of biometric recognition software have reported higher error rates for people with disabilities as well as trans and non-binary people.

The third option foresees a procedure to allow for the verification of a user’s identity through institutions like a bank, a notary, or a citizen service center. It is encouraging that the Commission’s tender foresees an option for different, non-state institutions to verify a user’s age. But neither banks nor notary offices are especially accessible for people who are undocumented, unhoused, don’t speak a Member State’s official language, or are otherwise marginalized or discriminated against. Banks and notaries also often require a physical ID in order to verify a client’s identity, so the fundamental access issues outlined above persist.

Finally, the specification suggests that third-party apps that have already verified a user's identity, like banking apps or mobile network operators, could provide age verification signals. In many European countries, however, showing an ID is a necessary prerequisite for opening a bank account, setting up a phone contract, or even buying a SIM card, so the same access barriers apply.

In summary, none of the options the Commission is considering for proving someone's age accounts for the obstacles faced by different marginalized groups, leaving potentially millions of people across the EU unable to access crucial services and information, and thereby undermining their fundamental rights.

The question of which institutions will be able to verify ages is only one dimension of the ramifications of approaches like the mini-ID wallet for accessibility and participation. Although often forgotten in policy discussions, not everyone has access to a personal device. Age verification methods like the mini-ID wallet, which are device-dependent, can be a real obstacle for people who share devices, or for users who access the internet through libraries, schools, or internet cafés, which do not accommodate the use of personal age verification apps. The average number of devices per household has been found to correlate strongly with income and education levels, further underscoring the point that it is often those who are already on the margins of society who are at risk of being left behind by age verification mandates based on digital identities.

This is why we need to push back against age verification mandates. Not because child safety is not a concern – it is. But because age verification mandates risk undermining crucial access to digital services, eroding privacy and data protection, and limiting the freedom of expression. Instead, we must ensure that the internet remains a space where all voices can be heard, free from discrimination, and where we do not have to share sensitive personal data to access information and connect with each other.

Svea Windwehr

Congress Passes TAKE IT DOWN Act Despite Major Flaws

3 months 2 weeks ago

Today the U.S. House of Representatives passed the TAKE IT DOWN Act, giving the powerful a dangerous new route to manipulate platforms into removing lawful speech that they simply don't like. President Trump himself has said that he would use the law to censor his critics. The bill passed the Senate in February, and it now heads to the president's desk. 

The takedown provision in TAKE IT DOWN applies to a much broader category of content—potentially any images involving intimate or sexual content—than the narrower NCII definitions found elsewhere in the bill. The takedown provision also lacks critical safeguards against frivolous or bad-faith takedown requests. Services will rely on automated filters, which are infamously blunt tools. They frequently flag legal content, from fair-use commentary to news reporting. The law’s tight time frame requires that apps and websites remove speech within 48 hours, rarely enough time to verify whether the speech is actually illegal. As a result, online service providers, particularly smaller ones, will likely choose to avoid the onerous legal risk by simply removing the speech rather than even attempting to verify it.

Congress is using the wrong approach to helping people whose intimate images are shared without their consent. TAKE IT DOWN pressures platforms to actively monitor speech, including speech that is presently encrypted. The law thus presents a huge threat to security and privacy online. While the bill is meant to address a serious problem, good intentions alone are not enough to make good policy. Lawmakers should be strengthening and enforcing existing legal protections for victims, rather than inventing new takedown regimes that are ripe for abuse. 

Jason Kelley

EFF Leads Prominent Security Experts in Urging Trump Administration to Leave Chris Krebs Alone

3 months 2 weeks ago
Political Retribution for Telling the Truth Weakens the Entire Infosec Community and Threatens Our Democracy; Letter Remains Open for Further Sign-Ons

SAN FRANCISCO – The Trump Administration must cease its politically motivated investigation of former U.S. Cybersecurity and Infrastructure Security Agency Director Christopher Krebs, the Electronic Frontier Foundation (EFF) and hundreds (see update below) of prominent cybersecurity and election security experts urged in an open letter. 

The letter – signed by preeminent names from academia, civil society, and the private sector – notes that security researchers play a vital role in protecting our democracy, securing our elections, and building, testing, and safeguarding government infrastructure. 

“By placing Krebs and SentinelOne in the crosshairs, the President is signaling that cybersecurity professionals whose findings do not align with his narrative risk having their businesses and livelihoods subjected to spurious and retaliatory targeting, the same bullying tactic he has recently used against law firms,” EFF’s letter said. “As members of the cybersecurity profession and information security community, we counter with a strong stand in defense of our professional obligation to report truthful findings, even – and especially – when they do not fit the playbook of the powerful. And we stand with Chris Krebs for doing just that.” 

President Trump appointed Krebs as Director of the Cybersecurity and Infrastructure Security Agency in the U.S. Department of Homeland Security in November 2018, and then fired him in November 2020 after Krebs publicly contradicted Trump's false claims of widespread fraud in the 2020 presidential election. 

Trump issued a presidential memorandum on April 9 directing Attorney General Pam Bondi and Homeland Security Secretary Kristi Noem to investigate Krebs, and directing Bondi and Director of National Intelligence Tulsi Gabbard to revoke security clearances held by Krebs and the cybersecurity company for which he worked, SentinelOne.  EFF’s letter urges that both of these actions be reversed immediately. 

“An independent infosec community is fundamental to protecting our democracy, and to the profession itself,” EFF’s letter said. “It is only by allowing us to do our jobs and report truthfully on systems in an impartial and factual way without fear of political retribution that we can hope to secure those systems. We take this responsibility upon ourselves with the collective knowledge that if any one of us is targeted for our work hardening these systems, then we all can be. We must not let that happen. And united, we will not let that happen.” 

EFF also has filed friend-of-the-court briefs supporting four law firms targeted for retribution in Trump’s unconstitutional executive orders. 

For the letter in support of Krebs: https://www.eff.org/document/chris-krebs-support-letter-april-28-2025

To sign onto the letter: https://eff.org/r.uq1r 

Update 04/29/2025: The letter now has over 400 signatures. You can view it here: https://www.eff.org/ChrisKrebsLetter

Contact: William Budington, Senior Staff Technologist, bill@eff.org
Josh Richman

Texas’s War on Abortion Is Now a War on Free Speech

3 months 2 weeks ago

UPDATE May 8, 2025: A committee substitute of SB 2880 passed the Texas Senate on April 30, 2025, with the provisions related to interactive computer services and providing information on how to obtain an abortion-inducing drug removed. These provisions, however, currently remain in the House version of the bill, HB 5510.

Once again, the Texas legislature is coming after the most common method of safe and effective abortion today—medication abortion.

Senate Bill (S.B.) 2880* seeks to prevent the sale and distribution of abortion pills—but it doesn’t stop there. By restricting access to certain information online, the bill tries to keep people from learning about abortion drugs, or even knowing that they exist.

If passed, S.B. 2880 would make it illegal to “provide information” on how to obtain an abortion-inducing drug. If you exchange e-mails or have an online chat about seeking an abortion, you could violate the bill. If you create a website that shares information about legal abortion services in other states, you could violate the bill. Even your social media posts could put you at risk.

On top of going after online speakers who create and post content themselves, the bill also targets social media platforms, websites, email services, messaging apps, and any other “interactive computer service” simply for hosting or making that content available.

In other words, Texas legislators not only want to make sure no one can start a discussion on these topics, they also want to make sure no one can find one. The goal is to wipe this information from the internet altogether. That creates glaring free-speech issues with this bill and, if passed, the consequences would be dire.

The bill is carefully designed to scare people into silence.

First, S.B. 2880 empowers average citizens to sue anyone that violates the law. An “interactive computer service” can also be sued if it “allows residents of [Texas] to access information or material that aids, abets, assists or facilitates efforts to obtain elective abortions or abortion-inducing drugs.”

So, similar to Texas Senate Bill 8, the bill encourages anyone to file lawsuits against those who merely speak about or provide access to certain information. This is intended to, and will, chill free speech. The looming threat of litigation can be used to silence those who seek to give women truthful information about their reproductive options—potentially putting their health or lives in danger.

Second, S.B. 2880 encourages online intermediaries to take down abortion-related content. For example, if sued under the law, a defendant platform can escape liability by showing that, once discovered, they promptly “block[ed] access to any information . . . that assists or facilitates efforts to obtain elective abortions or abortion-inducing drugs.”

The bill also grants them “absolute and nonwaivable immunity” against claims arising from takedowns, denials of service, or any other “action taken to restrict access to or availability of [this] information.” In other words, if someone sues a social media platform or internet service provider for censorship, they are well-shielded from facing consequences. This further tips the scales in favor of blocking more websites, posts, and users.

In three different provisions of the 43-page bill, the drafters go out of their way to assure us that S.B. 2880 should not be construed to prohibit speech or conduct that’s protected by the First Amendment. But simply stating that the law does not restrict free speech does not make it so. The obvious goal of this bill is to restrict access to information about abortion medications online. It’s hard to imagine what claims could be brought under such a bill that don’t implicate our free speech rights.

The bill’s imposition of civil and criminal liability also conflicts with a federal law that protects online intermediaries’ ability to host user-generated speech, 47 U.S.C. § 230 (“Section 230”), including speech about abortion medication. Although the bill explicitly states that it does not conflict with Section 230, that assurance remains meaningful only so long as Section 230’s protections remain robust. But Congress is currently considering revisions—or even a full repeal of Section 230. Any weakening of Section 230 will create more space for those empowered by this bill to use the courts to pressure intermediaries/platforms to remove information about abortion medication.

Whenever the government tries to restrict our ability to access information, our First Amendment rights are threatened. This is exactly what Texas lawmakers are trying to do with S.B. 2880. Anyone who cares about free speech—regardless of how they feel about reproductive care—should urge lawmakers to oppose this bill and others like it.

*H.B. 5510 is the identical House version of S.B. 2880.

Jennifer Pinsof

Trump Administration’s Targeting of International Students Jeopardizes Free Speech and Privacy Online

3 months 2 weeks ago

The federal government is using social media surveillance to target student visa holders living in the United States for online speech the Trump administration disfavors. The administration has initiated this new program, called “Catch and Revoke,” in an effort to revoke visas, and it appears to be a cross-agency collaboration between the State Department, the Department of Homeland Security (DHS), and the Department of Justice. It includes a dedicated task force and the use of AI and other data analytic tools to review the public social media accounts of tens of thousands of student visa holders. Though the full scope remains unclear, current reports indicate that the administration is surveilling for “pro-Hamas” sentiment, “antisemitic activity,” or even just “conduct that bears a hostile attitude toward U.S. citizens or U.S. culture.” At the time of publication of this blog post, the federal government had already revoked over 1,600 student visas for a variety of reasons.

This social media surveillance program is an alarming attack on freedom of speech and privacy—for both visa holders here in the United States and their American associates.

A Dangerous Erosion of Free Speech

While there is some nuance in the interplay between freedom of speech and immigration law, one principle is evident: foreign nationals who currently reside in the U.S.—including student visa holders—are protected by the First Amendment. The Supreme Court stated in Bridges v. Wixon (1945) that “[f]reedom of speech and of press is accorded aliens residing in this country.”

First Amendment-Protected Political Speech

Revoking student visas based, in part, on what students have said publicly on social media is especially constitutionally problematic given that the Trump administration is targeting core First Amendment-protected political speech. As the Supreme Court stated in Mills v. Alabama (1966), a central purpose of the First Amendment is to “protect the free discussion of governmental affairs,” whether on political issues, public officials, or how the government should operate.

The administration is targeting non-citizen students for “pro-Hamas,” antisemitic, and even just pro-Palestinian speech. Yet what falls under these categories is vague and not clearly defined. For example, the administration detained a Georgetown University researcher due to social media posts that are critical of Israel, but do not express support for Hamas.

More importantly, even controversial or offensive speech falls within the protections of the First Amendment. There are several categories of speech that do not enjoy First Amendment protection, including true threats of violence, inciting imminent violence, and providing material support for terrorism. However, short of rising to that level, the student speech targeted by the administration is protected by the First Amendment. Worse still, the administration is broadly going after students who simply appear to be “social activists” or are engaged in speech that is generically “anti-American.”

Such an overbroad social media surveillance and visa revocation program—one that sweeps in wholly lawful speech—strikes at the heart of what the First Amendment was intended to protect against.

Chilling Effect

Social media surveillance motivated by the government’s desire to punish political speech will chill (and certainly has already chilled) student visa holders from speaking out online.

The Supreme Court stated in Lamont v. Postmaster General (1965) that a government policy that causes individuals “to feel some inhibition” in freely expressing themselves “is at war with the ‘uninhibited, robust, and wide-open’ debate and discussion that are contemplated by the First Amendment.” More recently, Supreme Court Justice Sotomayor expressed in a concurring opinion that “[a]wareness that the Government may be watching chills associational and expressive freedoms” guaranteed by the First Amendment.

In other words, student visa holders are more likely to engage in self-censorship and refrain from expressing dissenting or controversial political views when they know they're being surveilled. Or they may choose to disengage from social media entirely, to avoid the risk that even seemingly harmless posts will affect their visa status and their ability to continue their education in the United States.

Student visa holders may also limit whom they connect with on social media, particularly if they fear those connections will have political views the current administration doesn’t like. The administration has not expressly stated that it will limit its surveillance only to the social media posts of student visa holders, which means it may also look at posts made by those in the students’ networks. This, too, undermines the First Amendment. The freedom to associate and express political views as a group—“particularly controversial ones”—is a fundamental aspect of freedom of speech, as the Supreme Court stated in its landmark NAACP v. Alabama (1958) decision.

American Citizens Impacted

Because student visa holders’ social networks undoubtedly include U.S. citizens, those citizens may also be subject to social media scrutiny, and therefore will also be chilled from freely speaking or associating online. Government agents have previously held visa holders responsible for the activity of their social media connections. Knowing this, a U.S. citizen who has a non-citizen friend or family member in the U.S. on a student visa might hesitate to post criticisms of the government—even if fully protected by the First Amendment—fearing the posts could negatively impact their loved one. A general climate of government surveillance may also lead U.S. citizens to self-censor on social media, even without any foreign national friends or family.

A Threat to Digital Privacy

Social media surveillance, even of publicly available profiles and especially with automated tools, can invade personal privacy. The Supreme Court has repeatedly held that the government’s collection and aggregation of publicly available personal information—particularly when enhanced by technology—can implicate privacy interests. The government can obtain personal information it otherwise would not have access to or that would usually be difficult to find across disparate locations.

Social media aggregates personal information in one place, including some of the most intimate details of our lives, such as our health information, likes and dislikes, political views and religious beliefs, and people with whom we associate. And automated tools can easily search for and help find this information. Even people who choose not to post much personal information on social media might still be exposed by comments and tags made by other users.

Constitutional Harms are Exacerbated by Automated Tools

The Trump administration is reportedly deploying artificial intelligence and other automated tools to assist in its review of student visa holders’ social media posts. While facts are still coming to light, any form of automation is likely to amplify speech and privacy harms to student visa holders.

By the government’s own assessment in another context—evaluating the admissibility of visa applicants (discussed below)—social media surveillance has not proven effective at assessing security threats.

Human review of public social media posts is itself prone to problems. Social media posts are highly context-specific, and government officials often have trouble differentiating between sarcasm, parody, and exaggeration from unlawful support for controversial causes. This leads to mistakes and misinterpretations. For example, in 2012 an Irish citizen was turned back at the border because DHS agents misinterpreted two of his Twitter posts: one, that he was going to “destroy America” – slang for partying – and two, that he was going to “dig up Marilyn Monroe’s grave” – a joke. These mistakes are even more likely when the posts are not in English or when they contain cultural references.

Human review augmented by automated tools is just as bad. Automated tools also have difficulty understanding the nuances of language, as well as the broader context in which a statement was made. These algorithms are also designed to replicate patterns in existing datasets, but if the data is biased, the technology simply reinforces those biases. As such, automated tools are similarly prone to mistakes and misinterpretations. Yet people often defer to automated outputs thinking they are correct or fair simply because a computer was used to produce them. And in some cases, decision-makers may even use these tools to justify or cover their own biases.

Most concerning would be if automated systems were permitted to make final visa revocation decisions without any human review. As EFF has repeatedly stated, automated tools should never get the final say on whether a person should be policed, arrested, denied freedom, or, in this case, stripped of a student visa and forcibly barred from completing their education.

Government Social Media Surveillance is Not New—and is Expanding

That the Trump administration is using social media surveillance on student visa holders residing in the United States is a disturbing apparent escalation of a longstanding trend.

EFF has long sounded the alarm on the civil liberty harms of government social media surveillance. In particular, since 2019, visa applicants have been required to disclose all social media accounts they have used in the last five years to the U.S. government. That policy is the subject of an ongoing lawsuit, Doc Society v. Pompeo, in which EFF filed an amicus brief.

Secretary of State Marco Rubio recently upped the ante by ordering officials to deny visas to new or returning student applicants if their social media broadly demonstrates “a hostile attitude toward U.S. citizens or U.S. culture (including government, institutions, or founding principles).” Notably, Rubio indicated this standard could also apply to current student visa holders. The State Department also announced it will review the social media of any visa applicant who has been to Gaza since 2007.

The Trump administration has also proposed dramatically expanding social media scrutiny by requiring non-citizens already legally residing in the U.S. to disclose social media accounts on a variety of forms related to immigration benefits, such as people seeking lawful permanent residency or naturalization. U.S. Citizenship and Immigration Services (USCIS), a component of DHS, also announced it would look for “antisemitic activity” on social media to deny immigration benefits to individuals currently in the country.

Protecting Your Accounts

There are general steps you can take to better protect your social media accounts from surveillance. Understand, however, that the landscape is shifting rapidly and not all protections are foolproof. Law enforcement may be able to get a warrant for your private information and messages if a judge is convinced there is preliminary evidence supporting probable cause of criminal activity. And non-governmental individuals and groups have recently used other forms of technology like face recognition to identify and report student activists for potential deportation. You should conduct your own individualized risk assessment to determine what online activity is safe for you.

Still, it never hurts to better secure your online privacy. For your current social media accounts, consider locking them down:

  • Make public accounts private and ensure only approved connections can see your content. Note that if your past public posts have already been copied and saved by an outside party, making your account private will not undo this. It will, however, better protect your future posts.
  • Some platforms make certain information publicly viewable, even if you’ve made your account private. Other information may be public by default, but can be made private. Review each platform’s privacy settings to limit what information is shared publicly, including friend lists, contact information, and location information.
  • You should also review your friends or followers list to ensure you know every person you’ve approved, especially when making a once-public account private.

If you create a new social media account:

  • Consider whether you want to attach your legal name to it. Many platforms allow you to have a pseudonymous account.
  • When setting up the account, don’t provide more personal information than is necessary.

EFF’s Surveillance Self-Defense guide provides additional information on protecting your social media accounts from a variety of actors. If you're not sure what information is publicly available about you on social networks or other sites, consider doing some research to see what, if anything, others would find.

By targeting international students for broad categories of online speech, this administration is fostering a climate of fear, making students anxious that a single post or errant “like” could cost them their U.S. visa or even lead to detention and deportation. This will, ultimately, stifle political debate and silence dissent, for non-citizens and citizens alike, undermining the open dialogue crucial to democracy.

Lisa Femia

IRS-ICE Immigrant Data Sharing Agreement Betrays Data Privacy and Taxpayers’ Trust

3 months 2 weeks ago

In an unprecedented move, the U.S. Department of Treasury and the U.S. Department of Homeland Security (DHS) recently reached an agreement allowing the IRS to share with Immigration and Customs Enforcement (ICE) taxpayer information of certain immigrants. The redacted 15-page memorandum of understanding (MOU) was exposed in a court case, Centro de Trabajadores Unidos v. Bessent, which seeks to prevent the IRS from unauthorized disclosure of taxpayer information for immigration enforcement purposes. Repurposing government data vital to the functioning and funding of public goods and services as a tool for law enforcement and surveillance is an affront to a democratic society. In addition to the human rights abuses this data-sharing agreement empowers, this move threatens to erode trust in public institutions in ways that could bear consequences for decades.

Specifically, the government justifies the MOU by citing Executive Order 14161, which was issued on January 20, 2025. The Executive Order directs the heads of several agencies, including DHS, to identify and remove individuals unlawfully present in the country. Making several leaps, the MOU states that DHS has identified “numerous” individuals who are unlawfully present and have final orders of removal, and that each of these individuals is “under criminal investigation” for violation of federal law—namely, “failure to depart” the country under 8 U.S.C. § 1253(a)(1). On this basis, the MOU authorizes the IRS to disclose to ICE taxpayer information that is otherwise confidential under the tax code.

In practice, this new data-sharing process works like this: ICE submits a request containing an individual’s name and address, the taxable periods to which the return information pertains, the federal criminal statute being investigated, and the reasons why disclosure of this information is relevant to the criminal investigation. Once the IRS receives this request from ICE, the agency reviews it to determine whether it falls under an exception to the statutory confidentiality requirement, and provides an explanation if the request cannot be processed.

But there are two big reasons why this MOU fails to pass muster. 

First, as the NYU Tax Law Center identified:

“While the MOU references criminal investigations, DHS recently reportedly told IRS officials that ‘they would hope to use tax information to help deport as many as seven million people.’ That is far more people than the government could plausibly investigate, or who are plausibly subject to criminal immigration penalties, and suggests DHS’s actual reason for pursuing the tax data is to locate people for civil deportation, making any ‘criminal investigation’ a false pretext to get around the law.” 

Second, it’s unclear how the IRS would verify the accuracy of ICE’s requests. Recent events have demonstrated that ICE’s deportation mandate trumps all else—with ICE obfuscating, ignoring, or outright lying about how they conduct their operations and who they target. While ICE has fueled narratives about deporting “criminals” to a notorious El Salvador prison, reports have repeatedly shown that most of those deported had no criminal histories. ICE has even arrested U.S. citizens based on erroneous information and blatant racial profiling. But ICE’s lack of accuracy isn’t new—in fact, a recent settlement in the case Gonzalez v. ICE bars ICE from relying on its network of erroneous databases to issue detainer requests. In that case, EFF filed an amicus brief identifying the dizzying array of ICE’s interconnected databases, many of which were out of date and incomplete and yet were still relied upon to deprive people of their liberty. 

In the wake of the MOU’s signing, several top IRS officials have resigned. For decades, the agency expressed interest in only collecting tax revenue and promised to keep that information confidential. Undocumented immigrants were encouraged to file taxes, despite being unable to reap benefits like Social Security because of their status. Many did, often because any promise of a future pathway to legalizing their immigration status hinged on having fulfilled their tax obligations. Others did because as part of mixed-status families, they were able to claim certain tax benefits for their U.S. citizen children. The MOU weaponizes that trust and puts immigrants in an impossible situation—either fail to comply with tax law or risk facing deportation if their tax data ends up in ICE’s clutches. 

This MOU is also sure to have a financial impact. In 2023, it was estimated that undocumented immigrants contributed $66 billion in federal and payroll taxes alone. Experts anticipate that due to the data-sharing agreement, fewer undocumented immigrants will file taxes, resulting in over $313 billion in lost tax revenue over 10 years. 

This move by the federal government not only betrays taxpayers and erodes vital trust in necessary civic institutions—it also reminds us of how little we have learned from U.S. history. After all, it was a piece of legislation passed in a time of emergency, the Second War Powers Act, that included the provision that allowed once-protected census data to assist in the incarceration of Japanese Americans during World War II. As the White House wrote in a report on big data in 2014, “At its core, public-sector use of big data heightens concerns about the balance of power between government and the individual. Once information about citizens is compiled for a defined purpose, the temptation to use it for other purposes can be considerable.” Rather than heeding this caution, this data-sharing agreement seeks to exploit it. This is yet another attempt by the current administration to sweep up and disclose large amounts of sensitive and confidential data. Courts must put a stop to these efforts to destroy data privacy, especially for vulnerable groups.

Matthew Guariglia

Leaders Must Do All They Can to Bring Alaa Home

3 months 2 weeks ago

It has now been nearly two months since UK Prime Minister Starmer spoke with Egyptian President Abdel Fattah el-Sisi, yet there has been no tangible progress in the case of Alaa Abd El Fattah, the British-Egyptian writer, activist, and technologist who remains imprisoned in Egypt.

In yet another blow to his family and supporters, who have been tirelessly advocating for his release, we’ve now learned that Alaa has fallen ill while on a sustained hunger strike protesting his incarceration. Alaa’s sentence was due to end last September.

Alaa’s mother, Laila Soueif, initiated a hunger strike beginning on his intended release date to amplify demands for her son’s release. Soueif, too, is facing deteriorating health: after being hospitalized in London, and following Starmer’s subsequent call with el-Sisi, she shifted from a full hunger strike to a partial strike allowing 300 liquid calories a day. Today marks the 208th day of her hunger strike in protest of her son’s continued imprisonment in Egypt, and she risks serious complications. Calling for her son’s freedom, Soueif has warned that she will resume a full hunger strike if progress is not made soon on Alaa’s case.

As of April 24, Alaa is on Day 55 of a hunger strike that he began on March 1. He is surviving on a strict ration of herbal tea, black coffee, and rehydration salts, and is now being treated in Wadi El-Natrun prison for severe stomach pains. In a letter to his family on April 20, Alaa described worsening conditions and side effects from medications administered by prison doctors: “the truth is the inflammation is getting worse … all these medicines are making me dizzy and yesterday my vision was hazy and I saw distant objects double.”

Responding to Alaa’s illness in prison, Alaa’s sister Sanaa Seif stated in a press release: “We are all so exhausted. My mum and my brother are literally putting their bodies on the line, just to give Alaa the freedom he deserves. Their health is so precarious, I’m always afraid that we are on the verge of a tragedy. We need Keir Starmer to do all he can to bring Alaa home to us.”

Alaa’s case has galvanized support from across the UK political spectrum, with more than 50 parliamentarians urging immediate action. Prime Minister Starmer has publicly committed to pressing for Alaa’s release, but these words must now be matched by action. As Alaa’s health deteriorates, and his family’s ordeal drags on, the need for decisive intervention has never been more urgent. The time to secure Alaa’s freedom—and prevent further tragedy—is now.

EFF continues to work with the campaign to free Alaa: his case is a critical test of digital rights, free expression, and international justice. 

Jillian C. York