San Diegans Push Back on Flock ALPR Surveillance

3 hours 57 minutes ago

As San Diego approaches the first annual review of the city’s controversial Flock Safety contract, a local coalition is calling on the city council to roll back this dangerous and costly automated license plate reader (ALPR) program.

The TRUST Coalition—a grassroots alliance including Electronic Frontier Alliance members Tech Workers Coalition San Diego and techLEAD—has rallied to stop the unchecked spread of ALPRs in San Diego. We’ve previously covered the coalition’s fight for surveillance oversight, a local effort kicked off by a “smart streetlight” surveillance program five years ago. 

In 2024, San Diego installed hundreds of AI-assisted ALPR cameras throughout the city to document which cars are driving where and when, then makes that data accessible for 30 days.

ALPRs like Flock’s don’t prevent crime—they just vacuum up data on everyone who drives past. The resulting error-prone dragnet can then chill speech and be weaponized against marginalized groups, like immigrants and those seeking trans or reproductive healthcare.

Despite local and state restrictions barring the sharing of ALPR data with federal and out-of-state agencies, San Diego Police have reportedly disclosed license plate data to federal agencies—including Homeland Security Investigations and Customs and Border Protection.

Also, despite a local ordinance requiring city council approval before deployment of surveillance technology, San Diego police have reportedly deployed ALPRs and smart streetlights at Comic-Con and Pride without the required approval.

The local coalition is not alone in these concerns. The San Diego Privacy Board recently recommended the city reject the Surveillance Use Policy for this technology. All of this cost the community over $3.5 million last year alone. That is why the TRUST Coalition is calling on the city to reject this oppressive surveillance system and instead invest in other essential services that improve day-to-day life for residents.

San Diegans who want to push back can get involved by signing the TRUST Coalition’s petition, following the campaign online, and contacting their council members to demand the city end its contract with Flock and start respecting the privacy rights of everyone who lives in, works in, or travels through their community.

Rory Mir

Hell No: The ODNI Wants to Make it Easier for the Government to Buy Your Data Without a Warrant

4 hours 19 minutes ago

New reporting has revealed that the Office of the Director of National Intelligence (ODNI) is attempting to create the Intelligence Community’s Data Consortium, a centralized online marketplace where law enforcement and spy agencies can peruse and buy very personal digital data about you collected by data brokers. Not only is this a massive escalation of the deeply unjust data broker loophole; it’s also another repulsive signal that your privacy means nothing to the intelligence community.

Imagine a mall where every store is run by data brokers whose goods include your information that has been collected by smartphone applications. Depending on your permissions and what applications are on your phone, this could include contacts, behavioral data, financial information, and even your constant geolocation. Now imagine that the only customers in this mall are federal law enforcement officers and intelligence agents who should be going to a judge, presenting their evidence, and hoping the judge grants a warrant for this information. But now they don’t need evidence, or to justify why they need your data. They just need taxpayer money, and this newly centralized digital marketplace provides the buying opportunities.

This is what the Office of the Director of National Intelligence wants to build, according to recently released contract documents.

Across the country, states are trying desperately to close the loophole that allows the government to buy private data it would otherwise need a warrant to get. Montana just became the first state to make it illegal for police to purchase data, like geolocation data harvested by apps on smartphones. At the federal level, EFF has endorsed Senator Ron Wyden’s Fourth Amendment Is Not for Sale Act, which closes this data broker loophole. The bill passed the House last year, but stalled in the Senate.

And yet, the federal government is doubling down on this very obviously unjust and unpopular policy.

An ODNI that wants to minimize harms to civil liberties would be pursuing the opposite tack. It should not be looking for ways to formalize and institutionalize surveillance loopholes. That is why we not only call on the ODNI to reverse course and scrap the Intelligence Community’s Data Consortium, but also call on lawmakers to finish what they started: pass the Fourth Amendment Is Not for Sale Act and close the data broker loophole at the federal level once and for all. We urge all of our supporters to do the same and help us keep the government accountable.

Matthew Guariglia

The Right to Repair Is Law in Washington State

7 hours 21 minutes ago

Thanks in part to your support, the right to repair is now law in Washington.

Gov. Bob Ferguson signed two bills guaranteeing Washingtonians’ right to access tools, parts, and information so they can fix personal electronics, appliances, and wheelchairs. This is the epitome of common-sense legislation. When you own something, you should have the final say about who fixes, adapts, or modifies it—and how.


Advocates in Washington have worked for years to pass a strong right-to-repair law in the state. The consumer electronics bill moved forward with support from Washington’s Public Interest Research Group and a growing group of organizations, including environmental advocates, consumer advocates, and manufacturers such as Google and Microsoft. Meanwhile, advocacy from groups including Disability Rights Washington and the Here and Now Project made the case for including wheelchairs in the right-to-repair bill, with members bringing their personal stories to Olympia to show why this bill was so important.

And it’s not just states that recognize the need for people to be able to fix their own stuff. Earlier this month, U.S. Army Secretary Dan Driscoll issued a memo stating that the Army should “[identify] and propose contract modifications for right to repair provisions where intellectual property constraints limit the Army's ability to conduct maintenance and access the appropriate maintenance tools, software, and technical data – while preserving the intellectual capital of American industry.” The memo said that the Army should seek these provisions in future procurement contracts and also amend existing contracts to include the right to repair.

This is a bedrock of sound procurement with a long history in America. President Lincoln only bought rifles with standardized tooling to outfit the Union Army, for the obvious reason that it would be a little embarrassing for the Commander in Chief to have to pull his troops off the field because the Army’s sole supplier had decided not to ship this week’s delivery of ammo and parts. Somehow, the Department of Defense forgot this lesson over the ensuing centuries, so that today, billions of dollars in public money are spent on materiel and systems that the US military can only maintain by buying service from a “beltway bandit.”

This recognizes what millions of people have said repeatedly: limiting people’s ability to fix their own stuff stands in the way of needed repairs and maintenance. That’s true whether you’re a farmer with a broken tractor during harvest, a homeowner with a misbehaving washing machine or a cracked smartphone screen, a hospital med-tech trying to fix a ventilator, or a soldier struggling with a broken generator.

The right to repair is gaining serious momentum. All 50 states have now considered some form of right-to-repair legislation. Washington is the eighth state to pass one of these bills into law—let’s keep it up.

Hayley Tsukayama

The Federal Government Demands Data from SNAP—But Says Nothing About Protecting It

7 hours 27 minutes ago

Last month, the U.S. Department of Agriculture issued a troubling order to all state agency directors of Supplemental Nutrition Assistance Programs (SNAP): hand over your data.

This is part of a larger effort by the Trump administration to gain “unfettered access to comprehensive data from all state programs that receive federal funding,” through Executive Order 14243. While the order says this data sharing is intended to cut down on fraud, it is written so broadly that it could authorize almost any data sharing. Such an effort flies in the face of well-established data privacy practices and places people at considerable risk. 

A group of SNAP recipients and organizations has thankfully sued to try to block the data sharing granted through the Executive Order. And the state of New Mexico has even refused to comply with the order, “due to questions and concerns regarding the legality of USDA’s demand for the information,” according to Source NM.

The federal government has said very little about how they will use this information. Several populations targeted by the Trump Administration are eligible to be on the SNAP program, including asylum seekers, refugees, and victims of trafficking. Additionally, although undocumented immigrants are not eligible for SNAP benefits, their household members who are U.S. citizens or have other eligible immigration statuses may be—raising the distinct concern that SNAP information could be shared with immigration or other enforcement authorities.


EFF has long advocated for privacy policies that ensure that information provided in one context is not used for other reasons. People who hand over their personal information should do so freely and with full information about how their information will be used. Whether you're seeking services from the government or a company, we all deserve privacy rights. Accessing public benefits to feed yourself shouldn't require you to give those up.

It’s particularly important to respect privacy in government programs that provide essential support services to vulnerable populations, such as SNAP. SNAP supports people who need assistance buying food—arguably the most basic need. Often, fear of reprisal and of inappropriate government data sharing, such as sharing the immigration status of household members not receiving benefits, prevents eligible people from enrolling in food assistance despite need. Discouraging eligible people from enrolling in SNAP runs counter to the goals of the program, which aims to reduce food insecurity, improve health outcomes, and benefit local economies.

This is just the latest government data-sharing effort that raises alarm bells for digital rights. No one should worry that asking their government for help with hunger will get them in trouble. The USDA must promise it will not weaponize programs that put food on the table during times of need. 

Hayley Tsukayama

The PERA and PREVAIL Acts Would Make Bad Patents Easier to Get—and Harder to Fight

8 hours 47 minutes ago

Two dangerous bills have been reintroduced in Congress that would reverse over a decade of progress in fighting patent trolls and making the patent system more balanced. The Patent Eligibility Restoration Act (PERA) and the PREVAIL Act would each cause significant harm on their own. Together, they form a one-two punch—making it easier to obtain vague and overly broad patents, while making it harder for the public to challenge them.

These bills don’t just share bad ideas—they share sponsors, a coordinated rollout, and backing from many of the same lobbying groups. Congress should reject both.

TAKE ACTION

Tell Congress: Don't Bring Back The Worst Patents

PERA Would Legalize Patents on Basic Software—and Human Genes

PERA would overturn long-standing court decisions that have helped keep some of the worst patents out of the system. This includes the Supreme Court’s Alice v. CLS Bank decision, which bars patents on abstract ideas, and Myriad v. AMP, which correctly ruled that naturally occurring human genes cannot be patented.

Thanks to the Alice decision, courts have invalidated a rogue’s gallery of terrible software patents—such as patents on online photo contests, online bingo, upselling, matchmaking, and scavenger hunts. These patents didn’t describe real inventions—they merely applied old ideas to general-purpose computers.

PERA would wipe out the Alice framework and replace it with vague, hollow exceptions. For example, it would ban patents on “dance moves” and “marriage proposals,” but would allow nearly anything involving a computer or machine—even if it only mentions the use of a computer. This is the same language used in many bad software patents that patent trolls have wielded for years. If PERA passes, patent claims that are currently seen as weak will become much harder to challenge.

Adding to that, PERA would bring back patents on human genes—exactly what was at stake in the Myriad case. EFF joined that fight, alongside scientists and patients, to prevent patents that interfered with essential diagnostic testing. Congress should not undo that victory. Some things just shouldn’t be patented. 

PERA’s provision that human genes can constitute an invention if they are “isolated” is meaningless; every gene used in science is “isolated” from the human body. This legal wordplay was used to justify human gene patents for decades, and it’s deeply troubling that some U.S. Senators are on board with bringing them back.

PREVAIL Weakens the Public’s Best Defense Against Patent Abuse

While PERA makes it easier to obtain a bad patent, the PREVAIL Act makes it harder to get rid of one.

PREVAIL would severely limit inter partes review (IPR), the most effective process for challenging wrongly granted patents. This faster, more affordable process—administered by the U.S. Patent and Trademark Office—has knocked out thousands of invalid patents that should never have been issued.

EFF has used IPR to protect the public. In 2013, we challenged and invalidated a patent on podcasting, which was being used to threaten creators across the internet. Thousands of our supporters chipped in to help us bring that case. Under PREVAIL, that challenge wouldn’t have been allowed. The bill would significantly limit IPR petitions unless you’ve been directly sued or threatened—a major blow to nonprofits, open source advocates, and membership-based defense groups that act in the public interest. 

PREVAIL doesn’t stop at limiting who can file an IPR. It also undermines the fairness of the IPR process itself. It raises the burden of proof, requiring challengers to overcome a presumption that the patent is valid—even when the Patent Office is the one reviewing it. The bill forces an unfair choice: anyone who challenges a patent at the Patent Office would have to give up the right to fight the same patent in court, even though key legal arguments (such as those involving abstract subject matter) can only be made in court.

It gets worse. PREVAIL makes it easier for patent owners to rewrite their claims during review, taking advantage of hindsight about what’s likely to hold up. And if multiple parties want to challenge the same patent, only the first to file may get heard. This means that patents used to threaten dozens or even hundreds of targets could get extra protection, just because one early challenger didn’t bring the best arguments.

These changes aren’t about improving the system. They’re about making it easier for a small number of patent owners to extract settlements, and harder for the public to push back.

A Step Backward, Not Forward

Supporters of these bills claim they’re trying to restore balance to the patent system. But that’s not what PERA and PREVAIL do. They don’t fix what’s broken—they break what’s working.

Patent trolling is still a severe problem. In 2024, patent trolls filed a stunning 88% of all patent lawsuits in the tech sector.

At the same time, patent law has come a long way over the past decade. Courts can now reject abstract software patents earlier and more easily. The IPR process has become a vital tool for holding the Patent Office accountable and protecting real innovators. And the Myriad decision has helped keep essential parts of human biology in the public domain.

PERA and PREVAIL would undo all of that.

These bills have support from a variety of industry groups, including those representing biotech firms, university tech transfer offices, and some tech companies that rely on aggressive patent licensing. While those voices deserve to be heard, the public deserves better than legislation that makes it easier to secure a 20-year monopoly on an idea, and harder for anyone else to challenge it.

Instead of PERA and PREVAIL, Congress should focus on helping developers, creators, and small businesses that rely on technology—not those who exploit it through bad patents.

Some of that legislation is already written. Congress should consider making end-users immune from patent threats, closing loopholes that allow certain patent-holders to avoid having their patents reviewed, and adding transparency requirements so that people accused of patent infringement can at least figure out who’s making the allegations. 

But right now, EFF is fighting back, and we need your help. These bills may be dressed up as reform, but we’ve seen them before—and we know the damage they’d do.

TAKE ACTION

Tell Congress: Reject PERA and PREVAIL

Joe Mullin

The Defense Attorney’s Arsenal In Challenging Electronic Monitoring

1 day 3 hours ago

In criminal prosecutions, electronic monitoring (EM) is pitched as a “humane alternative” to incarceration – but it is not. The latest generation of “e-carceration” tools is burdensome, harsh, and often just as punitive as imprisonment. Fortunately, criminal defense attorneys have options when shielding their clients from this overused and harmful tech.

Framed as a tool that enhances public safety while reducing jail populations, EM is increasingly used as a condition of pretrial release, probation, parole, or even civil detention. However, this technology imposes serious infringements on liberty, privacy, and due process for not only those placed on it but also for people they come into contact with. It can transform homes into digital jails, inadvertently surveil others, impose financial burdens, and punish every misstep—no matter how minor or understandable.

Even though EM may appear less severe than incarceration, research and litigation reveal that these devices often function as a form of detention in all but name. Monitored individuals must often remain at home for long periods, request permission to leave for basic needs, and comply with curfews or “exclusion zones.” Violations, even technical ones—such as a battery running low or a dropped GPS signal—can result in arrest and incarceration. Being able to take care of oneself and reintegrate into the world becomes a minefield of compliance and red tape. The psychological burden, social stigma, and physical discomfort associated with EM are significant, particularly for vulnerable populations.   

For many, EM still evokes bulky wrist or ankle “shackles” that can monitor a subject’s location, and sometimes even their blood alcohol level. These devices have matured alongside digital technology, however, and EM is increasingly imposed through more sophisticated tools like smartwatches or mobile phone applications. Newer iterations of EM have also followed a trajectory of collecting much more data, including biometrics and more precise location information.

This issue is more pressing than ever, as the 2020 COVID pandemic led to an explosion in EM adoption. As incarceration and detention facilities became superspreader zones, judges kept some offenders out of these facilities by expanding the use of EM; so much so that some jurisdictions ran out of classic EM devices like ankle bracelets.

Today the number of people placed on EM in the criminal system continues to skyrocket. Fighting the spread of EM requires many tactics, but on the front lines are the criminal defense attorneys challenging EM impositions. This post will focus on the main issues for defense attorneys to consider while arguing against the imposition of this technology.

PRETRIAL ELECTRONIC MONITORING

We’ve seen challenges to EM programs in a variety of ways, including attacking the constitutionality of the program as a whole and arguing against pretrial and/or post-conviction imposition. However, it is likely that the most successful challenges will come from individualized challenges to pretrial EM.

First, courts have not been receptive to arguments that entire EM programs are unconstitutional. For example, in Simon v. San Francisco et al., 135 F.4th 784 (9th Cir. 2025), the Ninth Circuit held that although San Francisco’s EM program constituted a Fourth Amendment search, a warrant was not required. The court explained its decision by noting that the program was a condition of pretrial release, included the sharing of location data, and was consented to by the individual (with counsel present) by signing a form that essentially operated as a contract. This decision exemplifies the court’s failure to grasp the coercive nature of this type of “consent,” which is pervasive in the criminal legal system.

Second, pretrial defendants have more robust rights than they do after conviction. While a person’s expectation of privacy may be slightly diminished following arrest but before trial, the Fourth Amendment is not entirely out of the picture. Their “privacy and liberty interests” are, for instance, “far greater” than a person who has been convicted and is on probation or parole. United States v. Scott, 450 F.3d 863, 873 (9th Cir. 2006). Although individuals continue to retain Fourth Amendment rights after conviction, the reasonableness analysis will be heavily weighted towards the state as the defendant is no longer presumed innocent. However, even people on probation have a “substantial” privacy interest. United States v. Lara, 815 F.3d 605, 610 (9th Cir. 2016). 

THE FOURTH AMENDMENT

Among the foundational constitutional rights threatened by the sheer invasiveness of EM are those protected by the Fourth Amendment. This concern is only heightened as the technology improves and collects increasingly detailed information. Unlike traditional probation or parole supervision, EM often tracks individuals with no geographic limitations or oversight, and can automatically record more than just approximate location information.

Courts have increasingly recognized that this new technology poses greater and more novel threats to our privacy than earlier generations. In Grady v. North Carolina, 575 U.S. 306 (2015), the Supreme Court, relying on United States v. Jones, 565 U.S. 400 (2012), held that attaching a GPS tracking device to a person—even a convicted sex offender—constitutes a Fourth Amendment search and is thus subject to the inquiry of reasonableness. A few years later, the monumental decision in Carpenter v. United States, 138 S. Ct. 2206 (2018), firmly established that Fourth Amendment analysis is affected by the advancement of technology, holding that long-term cell-site location tracking by law enforcement constituted a search requiring a warrant.

As criminal defense attorneys are well aware, the Fourth Amendment’s ostensibly powerful protections are often less effective in practice. Nevertheless, this line of cases still forms a strong foundation for arguing that EM should be subjected to exacting Fourth Amendment scrutiny.

DUE PROCESS

Three key procedural due process challenges that defense attorneys can raise under the Fifth and Fourteenth Amendments are: inadequate hearing, lack of individualized assessment, and failure to consider ability to pay.

First, many courts impose EM without adequate consideration of individual circumstances or less restrictive alternatives. Defense attorneys should demand evidentiary hearings where the government must prove that monitoring is necessary and narrowly tailored. If the defendant is not given notice, a hearing, or the opportunity to object, that could arguably constitute a violation of due process. For example, in the previously mentioned Simon v. San Francisco, the Ninth Circuit found that individuals who were not informed of the details of the city’s pretrial EM program in the presence of counsel had their rights violated.

Second, imposition of EM should be based on an individualized assessment rather than a blanket rule. For pretrial defendants, EM is frequently used as a condition of bail. Although courts under both federal and state bail frameworks are generally required to impose the least restrictive conditions necessary to ensure the defendant’s court appearance and protect the community, many jurisdictions have made EM a default condition rather than individually assessing whether it is appropriate. The Bail Reform Act of 1984, for instance, mandates that release conditions be tailored to the individual’s circumstances. Yet in practice, many jurisdictions impose EM categorically, without specific findings or consideration of alternatives. Defense counsel should challenge this practice by insisting that judges articulate on the record why EM is necessary, supported by evidence related to flight risk or danger. Where clients have stable housing, employment, and no history of noncompliance, EM may be more restrictive than justified.

Lastly, financial burdens associated with EM may also implicate due process where a failure to pay can result in violations and incarceration. In Bearden v. Georgia, 461 U.S. 660 (1983), the Supreme Court held that courts cannot revoke probation for failure to pay fines or restitution without first determining whether the failure was willful. Relying on Bearden, defense attorneys can argue that EM fees imposed on indigent clients amount to unconstitutional punishment for poverty. A growing number of lower courts have agreed, particularly where clients were not given the opportunity to contest their ability to pay. Defense attorneys should request fee waivers, present evidence of indigence, and challenge any EM orders that functionally condition liberty on wealth.

STATE LAW PROTECTIONS

State constitutions and statutes often provide stronger protections than federal constitutional minimums. In addition to state corollaries to the Fourth and Fifth Amendment, some states have also enacted statutes to govern pretrial release and conditions. A number of states have established a presumption in favor of release on recognizance or personal recognizance bonds. In those jurisdictions, the state has to overcome this presumption before the court can impose restrictive conditions like EM. Some states require courts to impose the least restrictive conditions necessary to achieve legitimate purposes, making EM appropriate only when less restrictive alternatives are inadequate.

Most pretrial statutes list specific factors courts must consider, such as community ties, employment history, family responsibilities, nature of the offense, criminal history, and risk of flight or danger to community. Courts that fail to adequately consider these factors or impose generic monitoring conditions may violate statutory requirements.

For example, Illinois's SAFE-T Act includes specific protections against overly restrictive EM conditions, but implementation has been inconsistent. Defense attorneys in Illinois and states with similar laws should challenge monitoring conditions that violate specific statutory requirements.

TECHNOLOGICAL ISSUES

Attorneys should also consider the reliability of EM technology. Devices frequently produce false violations and alerts, particularly in urban areas or buildings where GPS signals are weak. Misleading data can lead to violation hearings and even incarceration. Attorneys should demand access to raw location data, vendor records, and maintenance logs. Expert testimony can help demonstrate technological flaws, human error, or system limitations that cast doubt on the validity of alleged violations.

In some jurisdictions, EM programs are operated by private companies under contracts with probation departments, courts, or sheriffs. These companies profit from fees paid by clients and have minimal oversight. Attorneys should request copies of contracts, training manuals, and policies governing EM use. Discovery may reveal financial incentives, lack of accountability, or systemic issues such as racial or geographic disparities in monitoring. These findings can support broader litigation or class actions, particularly where indigent individuals are jailed for failing to pay private vendors.

Recent research provides compelling evidence that EM fails to achieve its stated purposes while creating significant harms. Studies have not found significant relationships between EM of individuals on pretrial release and their court appearance rates or likelihood of arrest. Nor do they show that law enforcement is employing EM on individuals they would otherwise put in jail.

To the contrary, studies indicate that law enforcement is using EM to surveil and constrain the liberty of those who wouldn't otherwise be detained, as the rise in the number of people placed on EM has not coincided with a decrease in detention. This research demonstrates that EM represents an expansion of government control rather than a true alternative to detention.

Additionally, as described above, EM devices may be rife with technical issues, including communication system failures that prevent proper monitoring and device malfunctions that cause electric shocks. Cutting off ankle bracelets is a common occurrence among users, especially when the technology is malfunctioning or hurting them. Defense attorneys should document all technical issues and argue that unreliable technology cannot form the basis for liberty restrictions or additional criminal charges.

CREATING A RECORD FOR APPEAL

Attorneys should always make sure they are creating a record on which the EM imposition can be appealed, should the initial hearing be unsuccessful. This requires lawyers to include the factual basis for the challenge and preserve the appropriate legal arguments. The modern generation of EM has yet to undergo the extensive judicial review that ankle shackles have been subjected to, making it essential to build an extensive record of the ways in which it is more invasive and harmful, so that it can be properly argued to an appellate court that the nature of the newest EM requires more than perfunctory application of decades-old precedent. As we saw with Carpenter, the rapid advancement of technology may push the courts to reconsider older paradigms for constitutional analysis and find them wanting. Thus, a comprehensive record is critical to show EM as it is—an extension of incarceration—rather than a benevolent alternative to detention.

Defeating electronic monitoring will require a multidimensional approach that includes litigating constitutional claims, contesting factual assumptions, exposing technological failures, and advocating for systemic reforms. As the carceral state evolves, attorneys must remain vigilant and proactive in defending the rights of their clients.

Hannah Zhao

The EU’s “Encryption Roadmap” Makes Everyone Less Safe

1 day 3 hours ago

EFF has joined more than 80 civil society organizations, companies, and cybersecurity experts in signing a letter urging the European Commission to change course on its recently announced “Technology Roadmap on Encryption.” The roadmap, part of the EU’s ProtectEU strategy, discusses new ways for law enforcement to access encrypted data. That framing is dangerously flawed. 

Let’s be clear: there is no technical “lawful access” to end-to-end encrypted messages that preserves security and privacy. Any attempt to circumvent encryption—like client-side scanning—creates new vulnerabilities, threatening the very people governments claim to protect.

This letter is significant not just for its content, but for who signed it. The breadth of the coalition makes one thing clear: civil society and the global technical community overwhelmingly reject the idea that weakening encryption can coexist with respect for fundamental rights.

Strong encryption is a pillar of cybersecurity, protecting everyone: activists, journalists, everyday web users, and critical infrastructure. Undermining it doesn’t just hurt privacy. It makes everyone’s data more vulnerable and weakens the EU’s ability to defend against cybersecurity threats.

EU officials should scrap any roadmap focused on circumvention and instead invest in stronger, more widespread use of end-to-end encryption. Security and human rights aren’t in conflict. They depend on each other.

You can read the full letter here.

Joe Mullin

245 Days Without Justice: Laila Soueif’s Hunger Strike and the Fight to Free Alaa Abd el-Fattah

1 day 4 hours ago

Laila Soueif has now been on hunger strike for 245 days. On Thursday night, she was taken to the hospital once again. Soueif’s hunger strike is a powerful act of protest against the failures of two governments. The Egyptian government continues to deny basic justice by keeping her son, Alaa Abd el-Fattah, behind bars—his only “crime” was sharing a Facebook post about the torture of a fellow detainee. Meanwhile, the British government, despite Alaa’s citizenship, has failed to secure even a single consular visit. Its muted response reflects an unacceptable unwillingness to stand up for the rights of its own citizens.

This is the second time this year that Soueif’s health has collapsed due to her hunger strike. Now, her condition is dire. Her blood sugar is dangerously low, and every day, her family fears it could be her last. Doctors say it’s a miracle she’s still alive.

Her protest is a call for accountability—a demand that both governments uphold the rule of law and protect human rights, not only in rhetoric, but through action.

Late last week, after an 18-month investigation, the United Nations Working Group on Arbitrary Detention (UNWGAD) issued its Opinion on Abd el-Fattah’s case, stating that he is being held unlawfully by the Egyptian government. Egypt’s refusal to provide the United Kingdom with consular access to its citizen further violates the country’s obligations under international law.

As stated in a letter to British Prime Minister Keir Starmer by 21 organizations, including EFF, the UK must now use every tool it has at its disposal to ensure that Alaa Abd el-Fattah is released immediately.

Jillian C. York

CCTV Cambridge: Digital Equity in 2025

4 days 3 hours ago

EFF has long advocated for affordable, accessible, and future-proof internet access for all. Digital equity, the condition in which everyone has access to technology that allows them to participate in society, is an issue that I’ve been proud to organize around. So, it’s awesome to connect with a group that's doing something to address it in their community.

Recently I got the chance to catch up with Maritza Grooms, Director of Community Relations at EFA member CCTV Cambridge, who told me about the results of their work and the impact it's having on their local community.

How’s your digital inclusion work going and what's been the results within the community?

CCTV has had a year of transition and change. One of the biggest was the establishment of the Digital Navigator Pilot Program, in collaboration with multiple partners, funded in part by the MassHire Metro North Workforce Investment Board through the Mass Broadband Institute. This program has already had a great impact in Cambridge since its official launch in August 2024, serving 492 community members! This program demonstrates the clear need for digital navigator services in Cambridge and beyond. Our community has used this service to get devices that have allowed them to restart their career journey or go back to school, and to take digital literacy classes to gain new skills to help them along the way.

The Electronic Frontier Alliance works to uphold the principles of free expression, information security, privacy, creativity, and access to knowledge. What guides your organization and how does digital equity tie into it?

CCTV's mission is to nurture a strong, equitable, and diverse community by providing tools and training to foster free speech, civic engagement, access to knowledge, and creative expression. The Digital Navigator program fulfills this mission not only for the community we serve, but in the ripple effects that generate from our community members having the tools to participate in our society. The Digital Navigator Pilot Program aims to bridge the digital divide in Cambridge, specifically supporting BIPOC, immigrant, and low-income communities to enhance economic mobility.

How can people support and plug-in to what you’re doing?

We cannot do this alone. It takes a village, from partners in the work like our friends at EFF to supporters like you. We encourage anyone to reach out to maritza@cctvcambridge.org to find out how you can support this program, or visit cctvcambridge.org/support to donate today. Follow us on social media @cctvcambridge!

Thanks again to Maritza for speaking with us. If you're inspired by CCTV Cambridge's work, consider joining a local EFA ally, or bringing your own group into the alliance today!

Christopher Vines

She Got an Abortion. So A Texas Cop Used 83,000 Cameras to Track Her Down.

4 days 5 hours ago

In a chilling sign of how far law enforcement surveillance has encroached on personal liberties, 404 Media recently revealed that a sheriff’s office in Texas searched data from more than 83,000 automated license plate reader (ALPR) cameras to track down a woman suspected of self-managing an abortion. The officer searched 6,809 different camera networks maintained by surveillance tech company Flock Safety, including states where abortion access is protected by law, such as Washington and Illinois. The search record listed the reason plainly: “had an abortion, search for female.”


Screenshot of data

After the U.S. Supreme Court’s 2022 Dobbs v. Jackson Women’s Health Organization decision overturned Roe v. Wade, states were given sweeping authority to ban and even criminalize abortion. In Texas—where the officer who conducted this search is based—abortion is now almost entirely banned. But in Washington and Illinois, where many of the searched Flock cameras are located, abortion remains legal and protected as a fundamental right up to fetal viability.

The post-Dobbs legal landscape has also opened the door for law enforcement to exploit virtually any form of data—license plates, phone records, geolocation data—to pursue individuals across state lines. EFF’s Atlas of Surveillance has documented that more than 1,800 agencies have deployed ALPRs, but at least 4,000 agencies are able to run searches through other agencies in Flock’s network. Many agencies share the data freely with other agencies across the country, with little oversight, restriction, or even standards for accessing data.

While this particular data point explicitly mentioned an abortion, scores of others in the audit logs released through public records requests simply list "investigation" as the reason for the plate search, with no indication of the alleged offense. That means other searches targeting someone for abortion, or another protected right in that jurisdiction, could be effectively invisible.

This case underscores our growing concern: that the mass surveillance infrastructure—originally sold as a tool to find stolen cars or missing persons—is now being used to target people seeking reproductive healthcare. This unchecked, warrantless access that allows law enforcement to surveil across state lines blurs the line between “protection” and persecution.

From Missing Cars to Monitoring Bodies

EFF has long warned about the dangers of ALPRs, which scan license plates, log time and location data, and build a detailed picture of people's movements. Companies like Flock Safety and Motorola Solutions offer law enforcement agencies access to nationwide databases of these readers, and in some cases, allow them to stake out locations like abortion clinics, or create “hot lists” of license plates to track in real time. Flock's technology also allows officers to search for a vehicle based on attributes like color, make and model, even without a plate number.

The threat is compounded by how investigations often begin. A report published by If/When/How on the criminalization of self-managed abortion found that about a quarter of adult cases (26%) were reported to law enforcement by acquaintances entrusted with information, such as “friends, parents, or intimate partners” and another 18% through “other” means. This means that with ALPR tech, a tip from anyone can instantly escalate into a nationwide manhunt. And as Kate Bertash of the Digital Defense Fund explained to 404 Media, anti-abortion activists have long been documenting the plates of patients and providers who visit reproductive health facilities—data that can now be easily cross-referenced with ALPR databases.

The 404 Media report proves that this isn’t a hypothetical concern. In 2023, a months-long EFF investigation involving hundreds of public records requests uncovered that many California police departments were sharing records containing detailed driving profiles of local residents with out-of-state agencies, despite state laws explicitly prohibiting this. This means that even in so-called “safe” states, your data might end up helping law enforcement in Texas or Idaho prosecute you—or your doctor. 

That’s why we demanded that 75 California police departments stop sharing ALPR data with anti-abortion states, an effort that has largely been successful.

Surveillance and Reproductive Freedom Cannot Coexist

We’ve said it before, and we’ll say it again: Lawmakers who support reproductive rights must recognize that abortion access and mass surveillance are incompatible. 

The systems built to track stolen cars and issue parking tickets have become tools to enforce the most personal and politically charged laws in the country. What began as a local concern over privacy has escalated into a national civil liberties crisis.

Yesterday’s license plate readers have morphed into today’s reproductive dragnet. Now it’s time for decisive action. Our leaders must roll back the dangerous surveillance systems they’ve enabled. We must enact strong, enforceable state laws to limit data sharing, ensure proper oversight, and dismantle these surveillance pipelines before they become the new normal, or simply eliminate these systems altogether.

Donate

Help protect privacy online & offline for everyone

Rindala Alajaji

California’s Cities and Counties Must Step Up Their Privacy Game. A.B. 1337 Can Do That.

5 days 7 hours ago

“The right to privacy is being threatened by the indiscriminate collection, maintenance, and dissemination of personal information and the lack of effective laws and legal remedies,” some astute California lawmakers once wrote. “The increasing use of computers and other sophisticated information technology has greatly magnified the potential risk to individual privacy that can occur from the maintenance of personal information.”

Sound familiar? These words may sound like a recent pushback on programs that want to slurp up the information sitting in ever-swelling government databases. But they’re not. They come from a nearly 50-year-old California law.

The “Information Practices Act of 1977”—or the IPA for short—is a foundational state privacy law and one of several privacy laws directly responding to the Watergate scandal, such as the federal Privacy Act of 1974 and California’s own state constitutional right to privacy.

Now, as we confront a new era of digital surveillance and face our own wave of concern about government demands for data, it's time to revisit and update the IPA.

TAKE ACTION

The IPA puts a check on government use of personal information by establishing guardrails for how state agencies maintain, collect, and disseminate data. It also gives people the right to access and correct their information.

While the need for the law has not changed, the rest of the world has. In particular, since the IPA passed in 1977, far more data collection has shifted to the county and city level. Yet local and county government entities have no standard privacy protections in the state of California. And those entities have troves of data, whether it’s the health data collected from vaccine programs or the information held by county-administered food programs.

As demand for this type of local data grows, we need to tap back into the energy of the ‘70s. It’s time to update the IPA so it can respond to the world we live in today. That’s why EFF is proud to co-sponsor A.B. 1337, authored by Assemblymember Chris Ward (D-San Diego), with our close friends at Oakland Privacy.

Specifically, A.B. 1337, also known as the IPA Reform Act:

  • Expands the definition of covered entities in the IPA to include local agencies, offices, departments and divisions.
  • Prevents information collected from being used for unintended or secondary purposes without consent.
  • Makes harmful, negligent, and improper release of personal information punishable as a misdemeanor.
  • Requires that IPA disclosure records be kept for three years and cannot be destroyed prior to that period.
  • Aligns the definition of personal information and sensitive personal information with the California Privacy Rights Act to include location data, online browsing records, IP addresses, citizenship status, and genetic information.

Privacy is foundational to trust in government. That’s part of the lesson we learned from the 1970s. (And trust in government is lower today than it was then.)

We need to be confident that the government is respecting our personal information and our privacy. More than ever, California residents face imminent danger of being targeted, persecuted, or prosecuted for seeking reproductive healthcare, for their immigration status, for practicing a particular religion, for their race, gender identity, or sexual orientation—or simply for exercising their First Amendment rights.

California is a national leader on consumer privacy protections, having passed a landmark comprehensive privacy law and established the nation’s first state privacy agency. Now, its local governments must catch up.

We cannot afford to wait for these protections any longer. Passing A.B. 1337 is good governance, good policy, and just good sense. If you’re a California resident, tell your Assemblymember to support the bill today.

TAKE ACTION

Hayley Tsukayama

The Insidious Effort to Privatize Public Airwaves | EFFector 37.5

6 days 6 hours ago

School is almost out for summer! You know what that means? Plenty of time to catch up on the latest digital rights news! Don't worry, though—EFF has you covered with our EFFector newsletter.

This edition of EFFector explains why efforts to privatize public airwaves would harm American TV viewers; goes over how KOSA is still a very bad censorship bill, especially for young people; and covers how Signal, WhatsApp, and other encrypted chat apps back up your conversations.

You can read the full newsletter here, and even get future editions directly to your inbox when you subscribe! Additionally, we've got an audio edition of EFFector on the Internet Archive, or you can view it by clicking the button below:

LISTEN ON YouTube

EFFECTOR 37.5 - The Insidious Effort to Privatize Public Airwaves

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Podcast Episode: Love the Internet Before You Hate On It

1 week 6 days ago

There’s a weird belief out there that tech critics hate technology. But do movie critics hate movies? Do food critics hate food? No! The most effective, insightful critics do what they do because they love something so deeply that they want to see it made even better. The most effective tech critics have had transformative, positive online experiences, and now unflinchingly call out the surveilled, commodified, enshittified landscape that exists today because they know there has been – and still can be – something better.


(You can also find this episode on the Internet Archive and on YouTube.)

That’s what drives Molly White’s work. Her criticism of the cryptocurrency and technology industries stems from her conviction that technology should serve human needs rather than mere profits. Whether it’s blockchain or artificial intelligence, she’s interested in making sure the “next big thing” lives up to its hype, and more importantly, to the ideals of participation and democratization that she experienced. She joins EFF’s Cindy Cohn and Jason Kelley to discuss working toward a human-centered internet that gives everyone a sense of control and interaction – open to all in the way that Wikipedia was (and still is) for her and so many others: not just as a static knowledge resource, but as something in which we can all participate. 

In this episode you’ll learn about:

  • Why blockchain technology has built-in incentives for grift and speculation that overwhelm most of its positive uses
  • How protecting open-source developers from legal overreach, including in the blockchain world, remains critical
  • The vast difference between decentralization of power and decentralization of compute
  • How Neopets and Wikipedia represent core internet values of community, collaboration, and creativity
  • Why Wikipedia has been resilient against some of the rhetorical attacks that have bogged down media outlets, but remains vulnerable to certain economic and political pressures
  • How the Fediverse and other decentralization and interoperability mechanisms provide hope for the kind of creative independence, self-expression, and social interactivity that everyone deserves  

Molly White is a researcher, software engineer, and writer who focuses on the cryptocurrency industry, blockchains, web3, and other tech in her independent publication, Citation Needed. She also runs the websites Web3 is Going Just Great, where she highlights examples of how cryptocurrencies, web3 projects, and the industry surrounding them are failing to live up to their promises, and Follow the Crypto, where she tracks cryptocurrency industry spending in U.S. elections. She has volunteered for more than 15 years with Wikipedia, where she serves as an administrator (under the name GorillaWarfare) and functionary, and previously served three terms on the Arbitration Committee. She is regularly quoted or bylined in news media; speaks at major conferences, including South by Southwest and Web Summit; guest lectures at universities, including Harvard, MIT, and Stanford; and advises policymakers and regulators around the world.

Resources:

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

MOLLY WHITE: I was very young when I started editing Wikipedia. I was like 12 years old, and when it said the encyclopedia that anyone can edit, “anyone” means me, and so I just sort of started contributing to articles and quickly discovered that there was this whole world behind Wikipedia that a lot of us really don't see, where very passionate people are contributing to creating a repository of knowledge that anyone can access.
And I thought, I immediately was like, that's brilliant, that's amazing. And you know that motivation has really stuck with me since then, just sort of the belief in open knowledge and open access I think has, you know, it was very early for me to be introduced to those things and I, I sort of stuck with it, because it became, I think, such a formative part of my life.

CINDY COHN: That’s Molly White talking about a moment that is hopefully relatable to lots of folks who think critically about technology – that moment when you first experienced how, sometimes, the internet can feel like magic.
I'm Cindy Cohn, the Executive Director of the Electronic Frontier Foundation.

JASON KELLEY: And I'm Jason Kelley, EFF’s Activism Director. This is our podcast, How to Fix the Internet.

CINDY COHN: The idea behind this show is that we're trying to make our digital lives BETTER. A big part of our job at EFF is to envision the ways things can go wrong online-- and jumping into action to help when things then DO go wrong.
But this show is about optimism, hope and solutions – we want to share visions of what it looks like when we get it right.

JASON KELLEY: Our guest today is Molly White. She’s a journalist and web engineer, and one of the strongest voices thinking and speaking critically about technology. Specifically, she’s been an essential voice on cryptocurrency and what people often call Web3, usually a reference to blockchain technologies. She runs an independent online newsletter called Citation Needed, and at her somewhat sarcastically named website “Web3 is Going Just Great” she chronicles all the latest, often alarming news, often involving scams and schemes that make those of us working to improve the internet pull our hair out.

CINDY COHN: But she’s not a pessimist. She comes from a deep love of the internet, but is also someone who holds the people that are building our digital world to account, with clear-eyed explanations of where things are going wrong, but also potential that exists if we can do it right. Welcome, Molly. Thanks for being here.

MOLLY WHITE: Thank you for having me.

CINDY COHN: So the theme of our show is what does it look like if we start to get things right in the digital world? Now you recognize, I believe, the value of blockchain technologies, what they could be.
But you bemoan how far we are from that right now. So let's start there. What does the world look like if we start to use the blockchain in a way that really lives up to its potential for making things better online?

MOLLY WHITE: I think that a lot of the early discussions about the potential of the blockchain were very starry-eyed and sort of utopian. Much in the way that early discussions of the internet were that way. You know, they promised that blockchains would somehow democratize everything we do on the internet, you know, make it more available to anyone who wanted to participate.
It would provide financial rails that were more equitable and had fewer rent seekers and intermediaries taking fees along the way. A lot of it was very compelling.
But I think as we've seen the blockchain industry, such as it is now, develop, we've seen that this technology brings with it a set of incentives that are incredibly challenging to grapple with. And it's made me wonder, honestly, whether blockchains can ever live up to the potential that they originally claimed, because those incentives have seemed to be so destructive that much of the time any promises of the technology are completely overshadowed by the negatives.

CINDY COHN: Yeah. So let's talk a little bit about those incentives, 'cause I think about that a lot as well. Where do you see those incentives popping up and what are they?

MOLLY WHITE: Well, any public blockchain has a token associated with it, which is the cryptocurrency that people are trading around, speculating on, you know, purchasing in hopes that the number will go up and they will make a profit. And in order to maintain the blockchain, you know, the actual system of records that is storing information or providing the foundation for some web platform, you need that cryptocurrency token.
But it means that whatever you're trying to do with the blockchain also has this auxiliary factor to it, which is the speculation on the cryptocurrency token.
And so time and time again, watching this industry and following projects, claiming that they will do wonderful, amazing things and use blockchains to accomplish those things, I've seen the goals of the projects get completely warped by the speculation on the token. And often the project's goals become overshadowed by attempts to just pump the price of the token, in often very inauthentic ways or in ways that are completely misaligned with the goals of the project. And that happens over and over and over again in the blockchain world.

JASON KELLEY: Have you seen that not happen with any project? Is there any project that you've said, wow, this is actually going well. It's like a perfect use of this technology, or because you focus on sort of the problems, is that just not something you've come across?

MOLLY WHITE: I think where things work well is when those incentives are perfectly aligned, which is to say that if there are projects that are solely focused on speculation, then the blockchain speculation works very well. Um, you know, and so we see people speculating on Bitcoin, for example, and, and they're not hoping necessarily that the Bitcoin ledger itself will do anything.
The same is true with meme coins. People are speculating on these tokens that have no purpose behind them besides, you know, hoping that the price will go up. And in that case, you know, people sort of know what they're getting into, and it can be lucrative for some people. And for the majority of people it's not, but, you know, they sort of understand that going into it, or at least you would hope that they do.

CINDY COHN: I think of the blockchain as, you know, when they say this'll go down on your permanent record, this is the permanent record.

MOLLY WHITE: That’s usually a threat.

CINDY COHN: Yeah.

MOLLY WHITE: I try to point that out as well.

CINDY COHN: Now, you know, look, to be clear, we work with people who do international human rights work. Saving the records before a population gets destroyed, in a way that can't be destroyed by the people in power, is one of the kind of classic reasons you want a secure, permanent place to store stuff. And so there's, you know, there's that piece. So where do you point people to when you're thinking about, like, okay, what if you want a real permanent record, but you don't want all the dreck of the cryptocurrency blockchain world?

MOLLY WHITE: Well, it really depends on the project. And I really try to emphasize that because I think a lot of people in the tech world come at things somewhat backwards, especially when there is a lot of hype around a technology in the way that there was with blockchains and especially Web3.
And we saw a lot of people essentially saying, I wanna do something with a blockchain. Let me go find some problem I can solve using a blockchain, which is completely backwards to how most technologists are used to addressing problems, right? They're faced with a problem. They consider possible ways to solve it, and then they try to identify a technology that is best suited to solving that problem.
And so, you know, I try to encourage people to reverse the thinking back to the normal way of doing things where, sure, you might not get the marketing boosts that Web3 once brought in. And, you know, it certainly it was useful to attract investors for a while to have that attached to your project, but you will likely end up with a more sustainable product at the end of the day because you'll have something that works and is using technology that is well suited to the problem. And so, you know, when it comes to where would I direct people other than blockchains, it very much depends on their problem and, and the problem that they're trying to solve.
For example, if you don't need to worry about having a, a system that is maintained by a group of people who don't trust each other, which is the blockchain’s sort of stated purpose, then there are any number of databases that you can use that work in the more traditional manner where you rely on perhaps a group of trusted participants or a system like that.
If you're looking for a more distributed or decentralized solution, there are peer-to-peer technologies that are not blockchain based that allow this type of content sharing. And so, you know, like I said, it really just depends on the use case more than anything.

JASON KELLEY: Since you brought up decentralization, this is something we talk about a lot at EFF in different contexts, and I think a lot of people saw blockchain and heard decentralized and said, that sounds good.
We want less centralized power. But where do you see things like decentralization actually helping if this kind of Web3 tech isn't a place where it's necessarily useful or where the technology itself doesn't really solve a lot of the problems that people have said it would.

MOLLY WHITE: I think one of the biggest challenges with blockchains and decentralization is that when a lot of people talk about decentralization, they're talking about the decentralization of power, as you've just mentioned, and in the blockchain world, they're often talking about the decentralization of compute, which is not necessarily the same thing, and in some cases is very much different.

JASON KELLEY: If you can do a rug pull, it's not necessarily decentralized. Right?

MOLLY WHITE: Right. Or if you're running a blockchain and you're saying it's decentralized, but you run all of the validators or the miners for that blockchain, then you, you know, the computers may be physically located all over the world, or, you know, decentralized in that sort of sense. But you control all the power and so you do not have a truly decentralized system in that manner of speaking.
And I think a lot of marketing in the crypto world sort of relied on people not considering the difference between those two things, because there are a lot of crypto projects that, you know, use all of the buzzwords around decentralization and democratization and, you know, that type of thing that are very, very centralized, very similar to the traditional tech companies where, you know, all of Facebook servers may be located physically all around the world, but no one's under the. The impression that Facebook is a decentralized company. Right? And so I think that's really important to remember is that there's nothing about blockchain technology specifically that requires a blockchain project to be decentralized in terms of power.
It still requires very intentional decision making on the parts of the people who are running that project to decentralize the power and reduce the degree to which any one entity can control the network. And so I think that there is this issue where people sort of see blockchains and they think decentralized, and in reality you have to dig a lot deeper.

CINDY COHN: Yeah, EFF has participated in a couple of the sanctions cases and the prosecutions of people who have developed pieces of the blockchain world, especially around mixers. TornadoCash is one that we participated in, and I think this is an area where we have a kind of similar view about the role of the open source community and kind of the average coder, and when their responsibility should create liability and when they should be protected from liability.
And we've tried to continue to weigh in on these cases to make sure the courts don't overstep, right? Because the prosecution gets so mad. You're talking about a lot of money laundering and, and things like that, that the, you know, the prosecution just wants to throw the book at everybody who was ever involved in these kinds of things and trying to create this space where, you know, a coder who just participates in a GitHub developing some piece of code doesn't have a liability risk.
And I think you've thought about this as well, and I'm wondering, do you see the government overstepping and do you think it's right to continue to think about that, that overstepping?

MOLLY WHITE: Yeah, I mean, I think it's that those are the types of questions that are really important when it comes to tackling problems around blockchains and cryptocurrencies and the financial systems that are developing around these products.
You have to be really cautious that, you know, just because a bad thing is happening, you don't come in with a hammer that is, you know, much too big and start swinging it around and hitting sort of anyone in the vicinity, because, you know, I think there are some things that should absolutely be protected, like, you know, writing software, for example.
A person who writes software should not necessarily be liable for everything that another person then goes and does with that software. And I think that's something that's been fairly well established through, you know, cryptography cases, for example, where people writing encryption algorithms and software to do strong encryption should not be considered liable for whatever anyone encrypts with that technology. We've seen it with virus writers, you know, it would be incredibly challenging for computer scientists to research and sort of think about new viruses and protect against vulnerabilities if they were not allowed to write that software.
But, you know, if they're not going and deploying this virus on the world or using it to, you know, do a ransomware attack, then they probably shouldn't be held liable for it. And so similar questions are coming up in these cryptocurrency cases or these cases around cryptocurrency mixers that are allowing people to anonymize their transactions in the crypto world more adequately.
And certainly that is heavily used in money laundering and in other criminal activities that are using cryptocurrencies. But simply writing the software to perform that anonymization is not itself, I think, a crime. Especially when there are many reasons you might want to anonymize your financial transactions that are otherwise publicly visible to anyone who wishes to see them, and, you know, can be linked to you if you are not cautious about your cryptocurrency addresses or if you publish them yourself.
And so, you know, I've tried to speak out about that a little bit because I think a lot of people see me as, you know, a critic of the cryptocurrency world and the blockchain world, and I think it should be banned or that anyone trading crypto should be put in jail or something like that, which is a very extreme interpretation of my beliefs and is, you know, absolutely not what I believe. I think that, you know, software engineers should be free to write software and then if someone takes that software and commits a crime with it, you know, that is where law enforcement should begin to investigate. Not at the, you know, the software developer's computer.

CINDY COHN: Yeah, I just think that's a really important point. I think it's important, especially because there's so much fraud and scamming and abuse in this space, to make sure that we're paying attention to where we're setting the liability rules, because even if you don't like cryptocurrency or any of those kinds of things, protecting anonymity is really important.
It's kind of a function of our times right now where people are either all one or all the other. And I really have appreciated, as you've kind of gone through this, thinking about a position that protects the things that we need to protect, even if we don't care about 'em in this context, because we might in another, and law of course, is kind of made up of things that get set in one context and then applied in another, while at the same time being, you know, kind of no holds barred, critical of the awful stuff that's going on in this world.

JASON KELLEY: Let’s take a quick moment to say thank you to our sponsor.
“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
And now back to our conversation with Molly White.

JASON KELLEY: Some of the technologies you're talking about, when sort of separated out from the hype or the negatives that have, like, overtaken the story (things like peer-to-peer file sharing, cryptography, even, let's say, being able to send money to someone with your phone) are pretty incredible at some level, you know?
And you gave a talk in October that was about a time that you felt like the web was magic, and you brought up a website that I'm gonna pretend I've never heard of, so you can explain it to me, called Neopets. For the listeners, could you explain a little bit about what Neopets was, and sort of how it helped inform you about the way you want the web to work, and things like that?

MOLLY WHITE: Yeah, so Neopets was a kids game. When I was a kid, you could adopt these little cartoon pets and you could like feed them and change their colors and do things, you know, play little games with them.

JASON KELLEY: Like Tamagotchis, a little bit?

MOLLY WHITE: A little bit, yeah. There was also this aspect to the website where you could edit your user page and create little webpages in your account. It was pretty freewheeling, you know, you could edit the CSS and the HTML and you could make your own little website, essentially. And as a kid, that was really my first exposure to the idea that the internet and these websites that I was seeing, you know, sort of for the first time were not necessarily a read-only operation. You know, I was used to playing maybe little games on the internet, or whatever kids were doing on the internet at the time.
And Neopets was really my first realization that I could add things to the internet or change the way they looked or interact with it in a way that was, you know, very participatory. And that later sort of turned into editing Wikipedia and then writing software and then publishing my writing on the web.
And that was really magical for me because it sort of informed me about the platform that was in front of me and how powerful it was to be able to, you know, edit something, create something, and then the whole world could see it.

JASON KELLEY: There's a really common critique right now that young people are sort of learning only bad things online, or just overusing the internet. And I mean, first of all, I think that's obviously not true, you know, every circumstance is different. But do you see places where the way you experienced the internet growing up is still happening for young people?

MOLLY WHITE: Yeah, I mean, I think a lot of those, as you mentioned, I think a lot of those critiques are very misguided and they miss a lot of the incredibly powerful and positive aspects of the internet. I mean, the fact that you can go look something up and learn something new in half a second, is revolutionary. But then I think there are participatory examples, much like what I was experiencing when I was younger. You know, people can still edit Wikipedia the way that I was doing as a kid. That is a very powerful thing to do when you're young, to realize that knowledge is not this thing that is handed down from on high from some faceless expert who wrote history, but it's actually something that people are contributing to and improving constantly. And it can always be updated and improved and edited and shared, you know, in this sort of free and open way. I think that is incredibly powerful and is still open to people of any age who are, you know, able to access the site.

JASON KELLEY: I think it's really important to bring up some of these examples because something I've been thinking about a lot lately as these critiques and attacks on young people using the internet have sort of grown and even, you know, entered the state and congressional level in terms of bills, is that a lot of the people making these critiques, I feel like never liked the internet to begin with. They don't see it as magic in the way that I think you do and that, you know, a lot of our listeners do.
And it's a view that is a problem, specifically because I feel like you have to have loved the internet before you can hate it. You know, you need to really be able to critique the specific things rather than just sort of throwing out the whole thing. And one of the things, you know, I like about the work that you do is that you clearly have this love for technology and for the internet, and that lets you, I think, find the problems. Other people can't see through to those specific individual issues, and so they just wanna toss the whole thing.

MOLLY WHITE: I think that's really true. You know, I think there is this weird belief, especially around tech critics, that tech critics hate technology. It's so divorced from reality, because you don't see that in other worlds, where, you know, art critics are never told that they just hate all art. I think most people understand that art critics love art, and that's why they are critics.
But with technology critics, there's sort of this weird, you know, this perception of us as people who just hate technology and wanna tear it all down, when in reality, you know, I know a lot of tech critics, and most of us, if not all of us that I can think of, come from a, you know, a background of loving technology, often from a very young age. And it is because of that love, and the desire to see technology continue to allow people to have those transformative experiences, that we criticize it.
And that's, for some reason, just a hard thing, I think for some people to wrap their minds around.

JASON KELLEY: I want to talk a little bit more about Wikipedia. The whole organization, and what it stands for and what it does, has been under attack a lot lately as well. Again, I think a lot of that is people misunderstanding how it works, and, um, you know, maybe finding some realistic critiques of the fact that, you know, it's individually edited, so there's going to be some bias in some places and things like that, and extrapolating out from a good faith argument to this other place. So I'm wondering if you've thought about how to protect Wikipedia, how to talk about it. How, you know, your experience with it has made you understand how it works better than most people.
And also just generally, you know, how it can be used as a model for the way that the internet should be, or the way we can build a better internet.

MOLLY WHITE: I think this ties back a little bit to the decentralization topic where Wikipedia is not decentralized in the sense that, you know, there is one company or one nonprofit organization that controls all the servers. And so there is this sort of centralization of power in that sense. But it is very decentralized in the editing world where there is no editorial board that is vetting every edit to the site.
There are, you know, numerous editors that contribute to any one article and no one person has the final say. There are different language versions of Wikipedia that all operate somewhat independently. And because of that, I think it has been challenging for people to attack it successfully.
Certainly there has been no shortage of those attacks. Um, but you know, it's not a company that someone could buy out and take over, in ways that we've seen, for example, Elon Musk do with Twitter. There is no editorial board that can be targeted and pressured to change the language on the site. And, you know, I think that has helped make Wikipedia somewhat resilient in ways that we've seen news organizations or other media publications struggle with recently, where, you know, they have faced pressure from their buyers, the people who own those organizations, to be sure.
They've faced, you know, threats from the government in some cases. And Wikipedia is structured somewhat differently, which I think helps it remain more protected from those types of attacks. But, you know, I am cautious to note that there are still vulnerabilities.
The attacks on Wikipedia need to be vociferously opposed. And so we have to be very cautious about this, because the incredible resource that Wikipedia is, is something that doesn't just sort of happen in a vacuum, you know, outside of any individual's actions.
It requires constant support, constant participation, constant editing. And so, you know, it's certainly a model to look to in terms of how communities can organize and contribute to, um, you know, projects on the internet. But it's also something that has to be very carefully maintained.

CINDY COHN: Yeah, I mean, this is just a lesson for our times, right? You know, there isn't a magical tech that can protect against all attacks. And there isn't a magical, you know, nonprofit 501(c)(3) that can be resistant against all the attacks. And we're in a time where they're coming fast and furious against our friends at Wikimedia, along with a lot of other things.
And I think the onus is on communities to show up and, you know, not just let these things slide, or think that, you know, the internet might be magic in some ways, but it's not magic in these ways. Like, we have to show up and fight for them. Um, I wanted to ask you a little bit about, um, kind of big tech's embrace of AI.
Um, you've been critical of it. We've been critical of it as well in many ways. And I wanna hear kind of your concerns about it, and kind of how you see AI's, you know, role in a better world. But, you know, also think about the ways in which it's not working all that well right now.

MOLLY WHITE: I generally don't have this sort of hard and fast view that AI is good or AI is bad; it really comes down to how that technology is being used. And I think the widespread use of AI in ways that exploit workers and creatives, and those who have decided to publish something online, for example, and did not expect for that publication to be used by big tech companies that are then profiting off of it, that is incredibly concerning. Um, as well as the ways that AI is marketed to people. Again, this sort of mirrors my criticism of the crypto industry, but a lot of the marketing around AI is incredibly misleading. Um, you know, they're making promises that are not borne out in reality.
They are selling people a product that will lie to you, you know, that will tell you things that are inaccurate. So I have a lot of concerns around AI, especially as we've seen it being used in the broadest, and sort of by the largest companies. But you know, I also acknowledge that there are ways in which some of this technology has been incredibly useful. And so, you know, it is one of these things where it has to be viewed with nuance, I think, around the ways it's being developed, the ways it's being deployed, the ways it's being marketed.

CINDY COHN: Yeah, there is a kind of eerie familiarity between the hype around AI and the hype around crypto. It's just kind of weird. It feels like we're in a, you know, Groundhog Day, like we're living through another hype cycle that feels like the last. I think, you know, for us at EFF, we've tried to focus a lot on governmental use of AI systems, and AI systems that are trying to predict human behavior, right?
The digital equivalent of phrenology, right? You know, let us do sentiment analysis on the things that you said, and that'll tell us whether you're about to be a criminal or, you know, the right person for the job. I think those are the places that we've really identified as, you know, problematic on a number of levels. You know, number one, it doesn't work nearly as well as,

MOLLY WHITE: That is a major problem!

CINDY COHN: It seems like that ought to be number one. Right. And this, you know, especially spending your time in Wikipedia where you're really working hard to get it right. Right. And you see the kind of back and forth of the conversation. But the, the central thing about Wikipedia is it's trying to actually give you truthful information and then watching the world get washed over with these AI assistants that are really not at all focused on getting it right, you know, or really focused on predicting the next word or, or however that works, right. Like, um, it's gotta be kind of strange from where you sit, I suspect, to see this.

MOLLY WHITE: Yeah, it's very frustrating. And, you know, I like to think we lived in a world at one time where people wanted to produce technology that helped people, technology that was accurate, technology that worked in the ways that they said it did. And it's been very weird to watch those goals degrade, especially over the last few years: well, maybe it's okay if it gets things wrong a lot, you know, or maybe it's okay if it doesn't work the way that we've said it does, or maybe never possibly can.
That's really frustrating to watch as someone who, again, loves technology and loves the possibilities of technology, to then see people just sort of using technology to deliver things that are, you know, making things worse for people in many ways.

CINDY COHN: Yeah, so I wanna flip it around a little bit. You, like EFF, kind of spend a lot of time on all the ways that things are broken. How do you think about how to get to a place where things are not broken, or how do you even just keep focusing on a better place that we could get to?

MOLLY WHITE: Well, like I said, you know, a lot of my criticism really comes down to the industries and the sort of exploitative practices of a lot of these companies in the tech world. And so, to the extent possible, separating myself from those companies and from their control has been really powerful for regaining some of that independence that I once remembered the web enabling, where, you know, if you had your own website, you could kind of do anything you wanted. You didn't have to stay within the 280 characters if you had an idea, you know, and you could publish, uh, you know, a video that was longer than 10 minutes long, or whatever it might be.
So sort of returning to some of those ideals around creating my own spaces on the web where I have that level of creative freedom, and certainly freedom in other ways, has been very powerful. And then finding communities of people who believe in those same things. I've taken a lot of hope in the Fediverse and the communities that have emerged around those types of technologies and projects where, you know, they're saying maybe there is an alternative out there to, you know, highly centralized big tech, social media being what everyone thinks of as the web. Maybe we could create different spaces outside of that walled garden where we all have control over what we do and say, and who we interact with. And we set the terms on which we interact with people.
And sort of push away the, the belief that, you know, a tech company needs to control an algorithm to show you what it is that you want to see, when in reality, maybe you could make those decisions for yourself or choose the algorithm or, you know, design a system for yourself using the technologies that are available to everyone, but have been sort of walled in by a large or many of the large players in the web these days.

CINDY COHN: Thank you, Molly. Thank you very much for coming on and, and spending your time with us. We really appreciate the work that you're doing, um, and, and the way that you're able to boil down some pretty complicated situations into, you know, kind of smart and thoughtful ways of reflecting on them. So thank you.

MOLLY WHITE: Yeah. Thank you.

JASON KELLEY: It was really nice to talk to someone who has that enthusiasm for the internet. You know, I think all of our guests probably do, but when we brought up Neopets, that excitement was palpable, and I hope we can find a way to get more of that enthusiasm back.
That's one of the things I'm taking away from that conversation: more people need to be enthusiastic about using the internet, and whatever that takes. What did you take away from chatting with Molly that we need to do differently, Cindy?

CINDY COHN: Well, I think that the thing that made the enthusiasm pop in her voice was the idea that she could control things. That she was participating and, and participating not only in Neopets, but the participation on Wikipedia as well, right?
That she could be part of trying to make truth available to people, and recognizing that truth in some ways isn't an endpoint; it's an evolving conversation among people trying to keep getting it right.
That doesn't mean there isn't any truth, but it does mean that there is an open community of people who are working towards that end. And, you know, you hear that enthusiasm as well. It's, you know, the more you put in, the more you get out of the internet. Trying to make that a more common experience of the internet, where things aren't just handed to you or taught to you, but really it's a two-way street: that's where the enthusiasm came from for her, and I suspect for a lot of other people.

JASON KELLEY: Yeah, and what you're saying about truth, I think she sort of applies this in a lot of different ways. Even specific technologies, I think most people realize this, but you have to say it again and again, aren't necessarily right or wrong for everything. You know, AI isn't right or wrong for every scenario. It's sort of, things are always evolving. How we use them is evolving. Whether or not something is correct today doesn't mean it will be correct tomorrow. And there's just a sort of nuance and awareness that she had to how these different things work and when they make sense that I hope we can continue to see in more people instead of just a sort of, uh, flat across the board dislike of, you know, quote unquote the internet or quote unquote social media and things like that.

CINDY COHN: Yeah, or the other way around, like whatever it is, there's a hype cycle and it's just hyped over and over again. And that she's really charting a middle ground in the way she writes and talks about these things that I think is really important. I think the other thing I really liked was her framing of decentralization as thinking about decentralizing power, not decentralizing compute, and that difference being something that is often elided or not made clear.
But that can really help us see where, you know, where decentralization is happening in a way that's empowering people, making things better. You have to look for decentralized power, not just decentralized compute. I thought that was a really wise observation.

JASON KELLEY: And I think could be applied to so many other things where a term like decentralized may be used because it's accessible from everywhere or something like that. Right? And it's just, these terms have to be examined. And, and it sort of goes to her point about marketing, you know, you can't necessarily trust the way the newest fad is being described by its purveyors.
You have to really understand what it's doing at the deeper level, and that's the only way you can really determine if it's really decentralized, if it's really interoperable, if it's really, you know, whatever the new thing is.

CINDY COHN: Mm-hmm. Yeah, I think that's right. And you know, luckily for us, we have Molly, who digs deep into the details of this for so many technologies, and I think we need to, you know, support and defend all the people who are doing that kind of careful work for us, because we can't do all of it, you know, we're humans.
But having people who will do that for us in different places, who are trusted and, you know, whose agenda is clear and understandable, that's kind of the best we can hope for. And the more of that we build and support and create spaces for on the, you know, uncontrolled open web, as opposed to the controlled tech giants and walled gardens, as she said, I think the better off we'll be.

JASON KELLEY: Thanks for joining us for this episode of How to Fix the Internet.
If you have feedback or suggestions, we'd love to hear from you. Visit EFF dot org slash podcast and click on listener feedback. While you're there, you can become a member, donate, maybe even pick up some merch and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of BeatMower with Reed Mathis
And How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.
We’ll see you next time.
I’m Jason Kelley…

CINDY COHN: And I’m Cindy Cohn.

MUSIC CREDITS: This podcast is licensed Creative Commons attribution 4.0 international and includes the following music licensed Creative Commons 3.0 unported by its creators: Drops of H2O, the filtered water treatment, by J. Lang. Additional beds by Gaetan Harris.

Josh Richman

Please Drone Responsibly: C-UAS Legislation Needs Civil Liberties Safeguards

2 weeks 1 day ago

Today, the Senate Judiciary Committee is holding a hearing titled “Defending Against Drones: Setting Safeguards for Counter Unmanned Aircraft Systems Authorities.” While the government has a legitimate interest in monitoring and mitigating drone threats, it is critical that those powers are narrowly tailored. Robust checks and oversight mechanisms must exist to prevent misuse and to allow ordinary, law-abiding individuals to exercise their rights. 

Unfortunately, as we and many other civil society advocates have highlighted, past proposals have not addressed those needs. Congress should produce well-balanced rules that address all these priorities, not grant de facto authority to law enforcement to take down drone flights whenever they want. Ultimately, Congress must decide whether drones will be a technology that mainly serves government agencies and big companies, or whether it might also empower individuals. 

To make meaningful progress in stabilizing counter unmanned aerial system (“C-UAS”) authorities and addressing emerging issues, Congress should adopt a more comprehensive approach that considers the full range of risks and implements proper safeguards. Future C-UAS legislation should include the following priorities, which are essential to protecting civil liberties and ensuring accountability:

  • Include strong and explicit safeguards for First Amendment-protected activities
  • Ensure transparency and require detailed reporting
  • Provide due process and recourse for improper counter-drone activities
  • Require C-UAS mitigation to use the least-invasive methods
  • Maintain reasonable retention limits on collected data
  • Maintain a sunset for C-UAS powers as drone uses continue to evolve

Congress can—and should—address public safety concerns without compromising privacy and civil liberties. C-UAS authorities should only be granted with the clear limits outlined above to help ensure that counter-drone authorities are wielded responsibly.

The American Civil Liberties Union (ACLU), Center for Democracy & Technology (CDT), Electronic Frontier Foundation (EFF), and Electronic Privacy Information Center (EPIC) shared these concerns with the Committee in a joint Statement For The Record.

India McKinney

Security Theater REALized and Flying without REAL ID

2 weeks 4 days ago

After multiple delays of the REAL ID Act of 2005 and its updated counterpart, the REAL ID Modernization Act, the May 7th deadline for REAL ID enforcement in the United States has finally arrived. Does this move our security forward in the skies? The last 20 years suggest we got along fine without it. Meanwhile, REAL ID does impose burdens on everyday people, such as potential additional costs and rigid documentation requirements, even if you already have a state-issued ID. While the TSA states this is not a national ID or a federal database, but a set of minimum standards required for federal use, we remain watchful of the potential privacy issues raised by the expansion of digital IDs.

But you don’t need a REAL ID just to fly domestically. There are alternatives.

The most common alternatives are passports or passport cards. You can use either instead of a REAL ID, which might save you an immediate trip to the DMV. And the additional money for a passport at least provides you the extra benefit of international travel.

Passports and passport cards are not the only alternatives to REAL ID. The TSA also accepts the following documents (this list is subject to change):

  • REAL ID-compliant driver's licenses or other state photo identity cards issued by Department of Motor Vehicles (or equivalent and this excludes a temporary driver’s license)
  • State-issued Enhanced Driver's License (EDL) or Enhanced ID (EID)
  • U.S. passport
  • U.S. passport card
  • DHS trusted traveler cards (Global Entry, NEXUS, SENTRI, FAST)
  • U.S. Department of Defense ID, including IDs issued to dependents
  • Permanent resident card
  • Border crossing card
  • An acceptable photo ID issued by a federally recognized Tribal Nation/Indian Tribe, including Enhanced Tribal Cards (ETCs)
  • HSPD-12 PIV card
  • Foreign government-issued passport
  • Canadian provincial driver's license or Indian and Northern Affairs Canada card
  • Transportation Worker Identification Credential (TWIC)
  • U.S. Citizenship and Immigration Services Employment Authorization Card (I-766)
  • U.S. Merchant Mariner Credential
  • Veteran Health Identification Card (VHIC)

Foreign government-issued passports are on this list. However, using a foreign government-issued passport may increase your chances of closer scrutiny at the security gate. REAL ID and other federally accepted documents are supposed to be about verifying your identity, not your citizenship status. Realistically, though, secondary screening and interactions with law enforcement are not out of the realm of possibility for non-citizens. The power dynamics of the border have now been brought to domestic flying thanks to REAL ID, and the question of who can and can't fly has become more fraught.


Mobile Driver’s Licenses (mDLs)

Many states have rolled out the option of a Mobile Driver's License, which acts as a form of your state-issued ID on your phone and is supposed to come with an exception for REAL ID compliance. This is something we asked for, since mDLs appear to satisfy the TSA's fears of forgery and cloning. But the catch is that states had to apply for this waiver:

“The final rule, effective November 25, 2024, allows states to apply to TSA for a temporary waiver of certain REAL ID requirements written in the REAL ID regulations.”

TSA stated it would publish the list of states with this waiver, but we do not see it on the website where it was supposed to appear. This bureaucratic hurdle appears to have rendered the exception useless, which is disappointing considering that TSA pushed for airports to be the first context where mDLs are used.

Google ID Pass

Another exception, Google Wallet's "ID Pass," appears to bypass state-issued waivers: if your state partnered with Google to issue mDLs, or if you have a passport, then an ID Pass is acceptable to TSA. This is a large leap for the mDL ecosystem, expanding past state scrutiny to a direct partnership with a private company to produce TSA-acceptable forms of ID. There's much to be said about our worries with digital IDs and their rapid expansion outside the airport context. This is another gateway that highlights how ID is being shaped and accepted in the digital sense.

Both with ID Pass and mDLs, the presentation flow lets you tap your phone without unlocking it, which is a bonus. But it is not clear whether TSA has the technology to read these IDs at all airports nationwide, and travelers are still encouraged to bring a physical ID for additional verification.

Many of the privilege dynamics of flying show up in the types of ID you can obtain, whether your shoes stay on, how long you wait in line, and so on. These mostly track how much you can spend on traveling and how much preliminary information you establish with TSA ahead of time. The end result is that less wealthy people are subjected to the most security mechanisms at the security gate. For now, you can technically still fly without a REAL ID, but that means being subject to additional screening to verify who you are.

REAL ID enforcement leaves some leg room for those who do not want, or cannot get, a REAL ID. But we are keeping watch on the progression of digital ID, which continues to be presented as the solution to worries about fraud and forgery. Governments and private corporations alike are pushing major efforts for rapid digital ID deployments and more frequent presentation of one's ID attributes. Your government ID is one of the narrowest, most static verifications of who you are as a person. Making sure that information is not used to create a centralized system of information was as important yesterday with REAL ID as it is today with digital IDs.

Alexis Hancock

Standing Up for LGBTQ+ Digital Safety this International Day Against Homophobia

2 weeks 4 days ago

Lawmakers and regulators around the world have been prolific in passing legislation that restricts freedom of expression and privacy for LGBTQ+ individuals and fuels offline intolerance. Online platforms are also complicit in this pervasive ecosystem, censoring pro-LGBTQ+ speech and forcing LGBTQ+ individuals to self-censor or turn to VPNs to avoid being profiled, harassed, doxxed, or criminally prosecuted.

The fight for the safety and rights of LGBTQ+ people is not just a fight for visibility online (and offline)—it’s a fight for survival. This International Day Against Homophobia, Biphobia, and Transphobia, we’re sharing four essential tips for LGBTQ+ people to stay safe online.

Using Secure Messaging Services For Every Communication 

All of us, at least occasionally, need to send a message that’s safe from prying eyes. This is especially true for people who face consequences should their gender or sexual identity be revealed without their consent.

To protect your communications from being seen by others, install an encrypted messenger app such as Signal (for iOS or Android). Turn on disappearing messages, and consider shortening the amount of time messages are kept in the app if you are actually attending an event. If you have a burner device with you, be sure to save the numbers for emergency contacts.

Don’t wait until something sensitive arises: make these apps your default for all communications. As a side benefit, the messages and images sent to family and friends in group chats will be safe from being viewed by automated and human scans on services like Telegram and Facebook Messenger. 

Consider The Content You Post On Social Media 

Our decision to send messages, take pictures, and interact with online content has a real offline impact. And whilst we cannot control every circumstance, we can think about how our social media behaviour impacts those closest to us and those in our proximity, especially if these people might need extra protection around their identities. 

Talk with your friends about the potentially sensitive data you reveal about each other online. Even if you don’t have a social media account, or if you untag yourself from posts, friends can still unintentionally identify you, report your location, and make their connections to you public. This works in the offline world too, such as sharing precautions with organizers and fellow protesters when going to a demonstration, and discussing ahead of time how you can safely document and post the event online without exposing those in attendance to harm.

If you are organizing online or conversing on potentially sensitive issues, choose platforms that limit the amount of information collected and tracking undertaken. We know this is not always possible as perhaps people cannot access different applications. In this scenario, think about how you can protect your community on the platform you currently engage on. For example, if you currently use Facebook for organizing, work with others to keep your groups as private and secure as possible.

Create Incident Response Plans

Developing a plan for if or when something bad happens is a good practice for anyone, but especially for LGBTQ+ people who face increased risk online. Since many threats are social in nature, such as doxxing or networked harassment, it’s important to strategize with your allies around what to do in the event of such things happening. Doing so before an incident occurs is much easier than when you’re presently facing a crisis.

Only you and your allies can decide what belongs on such a plan, but some strategies might be: 

  • Isolating the impacted areas, such as shutting down social media accounts and turning off affected devices
  • Notifying others who may be affected
  • Switching communications to a predetermined more secure alternative
  • Noting behaviors of suspected threats and documenting these 
  • Outsourcing tasks to someone further from the affected circle who is already aware of this potential responsibility.

Consider Your Safety When Attending Events and Protests

Given the increase in targeted harassment of and vandalism against LGBTQ+ people, it's important to anticipate counterprotesters showing up at various events. Since the boundaries between events like pride parades and protests can blur, precautions are necessary. Our general guide for attending a protest covers the basics of protecting your smartphone and laptop, as well as providing guidance on how to communicate and share information responsibly. We also have a handy printable version available here.

This includes:

  • Removing biometric device unlock like fingerprint or FaceID to prevent police officers from physically forcing you to unlock your device with your fingerprint or face. You can password-protect your phone instead.
  • Logging out of accounts and uninstalling apps or disabling app notifications to avoid app activity in precarious legal contexts from being used against you, such as using queer dating apps in places where homosexuality is illegal. 
  • Turning off location services on your devices to prevent your location history from being used to track your device's comings and goings. For further protection, you can disable GPS, Bluetooth, Wi-Fi, and phone signals when planning to attend a protest.

LGBTQ+ Rights For Every Day

Consider your digital safety like you would any aspect of bodily autonomy and self determination—only you get to decide what aspects of yourself you share with others, how you present to the world, and what things you keep private. With a bit of care, you can maintain privacy, safety, and pride in doing so. 

And in the meantime, we’re fighting to ensure that the internet can be a safe (and fun!) place for all LGBTQ+ people. Now more than ever, it’s essential for allies, advocates, and marginalized communities to push back against these dangerous laws and ensure that the internet remains a space where all voices can be heard, free from discrimination and censorship.

Paige Collings

House Moves Forward With Dangerous Proposal Targeting Nonprofits

2 weeks 4 days ago

This week, the U.S. House Ways and Means Committee moved forward with a proposal that would allow the Secretary of the Treasury to strip any U.S. nonprofit of its tax-exempt status by unilaterally determining the organization is a “Terrorist Supporting Organization.” This proposal, which places nearly unlimited discretion in the hands of the executive branch to target organizations it disagrees with, poses an existential threat to nonprofits across the U.S. 

This proposal, added to the House’s budget reconciliation bill, is an exact copy of a House-passed bill that EFF and hundreds of nonprofits across the country strongly opposed last fall. Thankfully, the Senate rejected that bill, and we urge the House to do the same when the budget reconciliation bill comes up for a vote on the House floor. 

The goal of this proposal is not to stop the spread of or support for terrorism; the U.S. already has myriad other laws that do that, including existing tax code section 501(p), which allows the government to revoke the tax status of designated “Terrorist Organizations.” Instead, this proposal is designed to inhibit free speech by discouraging nonprofits from working with and advocating on behalf of disadvantaged individuals and groups, like Venezuelans or Palestinians, who may be associated, even completely incidentally, with any group the U.S. deems a terrorist organization. And depending on what future groups this administration decides to label as terrorist organizations, it could also threaten those advocating for racial justice, LGBTQ rights, immigrant communities, climate action, human rights, and other issues opposed by this administration. 

On top of its threats to free speech, the language lacks due process protections for targeted nonprofit organizations. In addition to placing sole authority in the hands of the Treasury Secretary, the bill does not require the Treasury Secretary to disclose the reasons for or evidence supporting a “Terrorist Supporting Organization” designation. This, combined with only providing an after-the-fact administrative or judicial appeals process, would place a nearly insurmountable burden on any nonprofit to prove a negative—that they are not a terrorist supporting organization—instead of placing the burden where it should be, on the government. 

As laid out in a letter led by the ACLU and signed by over 350 diverse nonprofits, this bill would provide the executive branch with:

“the authority to target its political opponents and use the fear of crippling legal fees, the stigma of the designation, and donors fleeing controversy to stifle dissent and chill speech and advocacy. And while the broadest applications of this authority may not ultimately hold up in court, the potential reputational and financial cost of fending off an investigation and litigating a wrongful designation could functionally mean the end of a targeted nonprofit before it ever has its day in court.” 

Current tax law makes it a crime for the President and other high-level officials to order IRS investigations over policy disagreements. This proposal creates a loophole to this rule that could chill nonprofits for years to come. 

There is no question that nonprofits and educational institutions – along with many other groups and individuals – are under threat from this administration. If passed, future administrations, regardless of party affiliation, could weaponize the powers in this bill against nonprofits of all kinds. We urge the House to vote down this proposal. 

Jennifer Lynch

The U.S. Copyright Office’s Draft Report on AI Training Errs on Fair Use

2 weeks 4 days ago

Within the next decade, generative AI could join computers and electricity as one of the most transformational technologies in history, with all of the promise and peril that implies. Governments’ responses to GenAI—including new legal precedents—need to thoughtfully address real-world harms without destroying the public benefits GenAI can offer. Unfortunately, the U.S. Copyright Office’s rushed draft report on AI training misses the mark.

The Report Bungles Fair Use

Released amidst a set of controversial job terminations, the Copyright Office’s report covers a wide range of issues with varying degrees of nuance. But on the core legal question—whether using copyrighted works to train GenAI is a fair use—it stumbles badly. The report misapplies long-settled fair use principles and ultimately puts a thumb on the scale in favor of copyright owners at the expense of creativity and innovation.

To work effectively, today’s GenAI systems need to be trained on very large collections of human-created works—probably millions of them. At this scale, locating copyright holders and getting their permission is daunting for even the biggest and wealthiest AI companies, and impossible for smaller competitors. If training makes fair use of copyrighted works, however, then no permission is needed.

Right now, courts are considering dozens of lawsuits that raise the question of fair use for GenAI training. Federal District Judge Vince Chhabria is poised to rule on this question after hearing oral arguments in Kadrey v. Meta Platforms. The Third Circuit Court of Appeals is expected to consider a similar fair use issue in Thomson Reuters v. Ross Intelligence. Courts are well-equipped to resolve this pivotal issue by applying existing law to specific uses and AI technologies.

Courts Should Reject the Copyright Office’s Fair Use Analysis

The report’s fair use discussion contains some fundamental errors that place a thumb on the scale in favor of rightsholders. Though the report is non-binding, it could influence courts, including in cases like Kadrey, where plaintiffs have already filed a copy of the report and urged the court to defer to its analysis.   

Courts need to accept the Copyright Office's draft conclusions only if they are persuasive. They are not.

The Office’s fair use analysis is not one the courts should follow. It repeatedly conflates the use of works for training models—a necessary step in the process of building a GenAI model—with the use of the model to create substantially similar works. It also misapplies basic fair use principles and embraces a novel theory of market harm that has never been endorsed by any court.

The first problem is the Copyright Office’s transformative use analysis. Highly transformative uses—those that serve a different purpose than that of the original work—are very likely to be fair. Courts routinely hold that using copyrighted works to build new software and technology—including search engines, video games, and mobile apps—is a highly transformative use because it serves a new and distinct purpose. Here, the original works were created for various purposes and using them to train large language models is surely very different.

The report attempts to sidestep that conclusion by repeatedly ignoring the actual use in question—training—and focusing instead on how the model may ultimately be used. If the model is ultimately used primarily to create a class of works that are similar to the original works on which it was trained, the Office argues, then the intermediate copying can't be considered transformative. This fundamentally misunderstands transformative use, which should turn on whether a model itself is a new creation with its own distinct purpose, not whether any of its potential uses might affect demand for a work on which it was trained—a dubious standard that runs contrary to decades of precedent.

The Copyright Office’s transformative use analysis also suggests that the fair use analysis should consider whether works were obtained in “bad faith,” and whether developers respected the right “to control” the use of copyrighted works.  But the Supreme Court is skeptical that bad faith has any role to play in the fair use analysis and has made clear that fair use is not a privilege reserved for the well-behaved. And rightsholders don’t have the right to control fair uses—that’s kind of the point.

Finally, the Office adopts a novel and badly misguided theory of “market harm.” Traditionally, the fair use analysis requires courts to consider the effects of the use on the market for the work in question. The Copyright Office suggests instead that courts should consider overall effects of the use of the models to produce generally similar works. By this logic, if a model was trained on a Bridgerton novel—among millions of other works—and was later used by a third party to produce romance novels, that might harm series author Julia Quinn’s bottom line.

This market dilution theory has four fundamental problems. First, like the transformative use analysis, it conflates training with outputs. Second, it’s not supported by any relevant precedent. Third, it’s based entirely on speculation that Bridgerton fans will buy random “romance novels” instead of works produced by a bestselling author they know and love.  This relies on breathtaking assumptions that lack evidence, including that all works in the same genre are good substitutes for each other—regardless of their quality, originality, or acclaim. Lastly, even if competition from other, unique works might reduce sales, it isn’t the type of market harm that weighs against fair use.

Nor is lost revenue from licenses for fair uses a type of market harm that the law should recognize. Prioritizing private licensing market “solutions” over user rights would dramatically expand the market power of major media companies and chill the creativity and innovation that copyright is intended to promote. Indeed, the fair use doctrine exists in part to create breathing room for technological innovation, from the phonograph record to the videocassette recorder to the internet itself. Without fair use, crushing copyright liability could stunt the development of AI technology.

We’re still digesting this report, but our initial review suggests that, on balance, the Copyright Office’s approach to fair use for GenAI training isn’t a dispassionate report on how existing copyright law applies to this new and revolutionary technology. It’s a policy judgment about the value of GenAI technology for future creativity, by an office that has no business making new, free-floating policy decisions.

The courts should not follow the Copyright Office’s speculations about GenAI. They should follow precedent.

Tori Noble

In Memoriam: John L. Young, Cryptome Co-Founder

2 weeks 5 days ago

John L. Young, who died March 28 at age 89 in New York City, was among the first people to see the need for an online library of official secrets, a place where the public could find out things that governments and corporations didn’t want them to know. He made real the idea – revolutionary in its time – that the internet could make more information available to more people than ever before.

John and architect Deborah Natsios, his wife, in 1996 founded Cryptome, an online library which collects and publishes data about freedom of expression, privacy, cryptography, dual-use technologies, national security, intelligence, and government secrecy. Its slogan: “The greatest threat to democracy is official secrecy which favors a few over the many.” And its invitation: “We welcome documents for publication that are prohibited by governments worldwide.”

Cryptome soon became known for publishing an encyclopedic array of government, court, and corporate documents. Cryptome assembled an indispensable, almost daily chronicle of the ‘crypto wars’ of the 1990s – when the first generation of internet lawyers and activists recognized the need to free up encryption from government control and undertook litigation, public activism and legislative steps to do so.  Cryptome became required reading for anyone looking for information about that early fight, as well as many others.    

John and Cryptome were also among the early organizers and sponsors of WikiLeaks, though like many others, he later broke with that organization over what he saw as its monetization. Cryptome later published WikiLeaks' alleged internal emails. Transparency was the core of everything John stood for.

John was a West Texan by birth and an architect by training and trade. Even before he launched the website, his lifelong pursuit of not-for-profit, public-good ideals led him to seek access to documents about shadowy public development entities that seemed to ignore public safety, health, and welfare. As the digital age dawned, this expertise in and passion for exposing secrets evolved into Cryptome with John its chief information architect, designing and building a real-time archive of seminal debates shaping cyberspace’s evolving information infrastructures.

The FBI and Secret Service tried to chill his activities. Big Tech companies like Microsoft tried to bully him into pulling documents off the internet. But through it all, John remained a steadfast if iconoclastic librarian without fear or favor.

John served in the United States Army Corps of Engineers in Germany (1953–1956) and earned degrees in philosophy and architecture from Rice University (1957–1963) and his graduate degree in architecture from Columbia University in 1969. A self-identified radical, he became an activist and helped create the community service group Urban Deadline, where his fellow student-activists initially suspected him of being a police spy. Urban Deadline went on to receive citations from the Citizens Union of the City of New York and the New York City Council.

John was one of the early, under-recognized heroes of the digital age. He not only saw the promise of digital technology to help democratize access to information, he brought that idea into being and nurtured it for many years.  We will miss him and his unswerving commitment to the public’s right to know.

Cindy Cohn

The Kids Online Safety Act Will Make the Internet Worse for Everyone

2 weeks 5 days ago

The Kids Online Safety Act (KOSA) is back in the Senate. Sponsors are claiming—again—that the latest version won’t censor online content. It isn’t true. This bill still sets up a censorship regime disguised as a “duty of care,” and it will do what previous versions threatened: suppress lawful, important speech online, especially for young people.

TAKE ACTION

KOSA Will Silence Kids and Adults

KOSA Still Forces Platforms to Police Legal Speech

At the center of the bill is a requirement that platforms “exercise reasonable care” to prevent and mitigate a sweeping list of harms to minors, including depression, anxiety, eating disorders, substance use, bullying, and “compulsive usage.” The bill claims to bar lawsuits over “the viewpoint of users,” but that’s a smokescreen. Its core function is to let government agencies sue platforms, big or small, that don’t block or restrict content someone later claims contributed to one of these harms. 

This bill won’t bother big tech. Large companies will be able to manage this regulation, which is why Apple and X have agreed to support it. In fact, X helped negotiate the text of the last version of this bill we saw. Meanwhile, those companies’ smaller competitors will be left scrambling to comply. Under KOSA, a small platform hosting mental health discussion boards will be just as vulnerable as Meta or TikTok—but much less able to defend itself. 

To avoid liability, platforms will over-censor. It’s not merely hypothetical. It’s what happens when speech becomes a legal risk. The list of harms in KOSA’s “duty of care” provision is so broad and vague that no platform will know what to do regarding any given piece of content. Forums won’t be able to host posts with messages like “love your body,” “please don’t do drugs,” or “here’s how I got through depression” without fearing that an attorney general or FTC lawyer might later decide the content was harmful. Support groups and anti-harm communities, which can’t do their work without talking about difficult subjects like eating disorders, mental health, and drug abuse, will get caught in the dragnet. 

When the safest legal option is to delete a forum, platforms will delete the forum.

There’s Still No Science Behind KOSA’s Core Claims

KOSA relies heavily on vague, subjective harms like “compulsive usage.” The bill defines it as repetitive online behavior that disrupts life activities like eating, sleeping, or socializing. But here’s the problem: there is no accepted clinical definition of “compulsive usage” of online services.

There’s no scientific consensus that online platforms cause mental health disorders, nor agreement on how to measure so-called “addictive” behavior online. The term sounds like settled medical science, but it’s legislative sleight-of-hand: an undefined concept given legal teeth, with major consequences for speech and access to information.

Carveouts Don’t Fix the First Amendment Problem

The bill says it can't be enforced based on a user's "viewpoint." But the text of the bill itself privileges certain viewpoints over others. Plus, liability under KOSA attaches to the platform, not the user. The only way for platforms to reduce risk in the world of KOSA is to monitor, filter, and restrict what users say.

If the FTC can sue a platform because minors saw a medical forum discussing anorexia, or posts about LGBTQ identity, or posts discussing how to help a friend who’s depressed, then that’s censorship. The bill’s stock language that “viewpoints are protected” won’t matter. The legal incentives guarantee that platforms will silence even remotely controversial speech to stay safe.

Lawmakers who support KOSA today are choosing to trust the current administration, and future administrations, to define what youth—and to some degree, all of us—should be allowed to read online. 

KOSA will not make kids safer. It will make the internet more dangerous for anyone who relies on it to learn, connect, or speak freely. Lawmakers should reject it, and fast. 

TAKE ACTION

TELL CONGRESS: OPPOSE KOSA

Joe Mullin
EFF's Deeplinks Blog: Noteworthy news from around the internet