🗣 Homeland Security Wants Names | EFFector 38.3

1 day ago

Criticize the government online? The Department of Homeland Security (DHS) might ask Google to cough up your name. By abusing investigative tools called "administrative subpoenas," DHS has been demanding that tech companies hand over users' names, locations, and more. We're explaining how companies can stand up for users—and covering the latest news in the fight for privacy and free speech online—with our EFFector newsletter.

For over 35 years, EFFector has been your guide to understanding the intersection of technology, civil liberties, and the law. This latest issue covers our campaign to expand end-to-end encryption protections, a bill to stop government face scans by Immigration and Customs Enforcement (ICE) and other agencies, and why Section 230 remains the best available system to protect everyone’s ability to speak online.


Prefer to listen in? In our audio companion, EFF Senior Staff Attorney F. Mario Trujillo explains how Homeland Security's lawless subpoenas differ from court orders. Find the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 38.3 - 🗣 Homeland Security Wants Names

Want to stay in the fight for privacy and free speech online? Sign up for EFF's EFFector newsletter for updates, ways to take action, and new merch drops. You can also fuel the fight against unlawful government surveillance when you support EFF today!

Christian Romero

“Free” Surveillance Tech Still Comes at a High and Dangerous Cost

1 day 1 hour ago

Surveillance technology vendors, federal agencies, and wealthy private donors have long provided local law enforcement “free” access to surveillance equipment in ways that bypass local oversight. The result is predictable: serious accountability gaps and data pipelines to other entities, including Immigration and Customs Enforcement (ICE), that expose millions of people to harm.

The cost of “free” surveillance tools — like automated license plate readers (ALPRs), networked cameras, face recognition, drones, and data aggregation and analysis platforms — is measured not in tax dollars, but in the erosion of civil liberties. 


The collection and sharing of our data quietly generates detailed records of people’s movements and associations that can be exposed, hacked, or repurposed without their knowledge or consent. Those records weaken sanctuary and First Amendment protections while facilitating the targeting of vulnerable people.   

Cities can and should use their power to reject federal grants, vendor trials, donations from wealthy individuals, or participation in partnerships that facilitate surveillance and experimentation with spy tech. 

If these projects are greenlit, oversight is imperative. Mechanisms like public hearings, competitive bidding, public records transparency, and city council supervision help ensure these acquisitions include basic safeguards — like use policies, audits, and consequences for misuse — to protect the public from abuse and from creeping contracts that grow into whole suites of products.

Clear policies and oversight mechanisms must be in place before using any surveillance tools, free or not, and communities and their elected officials must be at the center of every decision about whether to bring these tools in at all.

Here are some of the most common ways “free” surveillance tech makes its way into communities.

Trials and Pilots

Police departments are regularly offered free access to surveillance tools and software through trials and pilot programs that often aren’t accompanied by appropriate use policies. In many jurisdictions, trials do not trigger the same requirements to go before decision-makers outside the police department. This means the public may have no idea that a pilot program for surveillance technology is happening in their city. 


In Denver, Colorado, the police department is running trials of unmanned aerial vehicles (UAVs) from two competing vendors for a possible drone-as-first-responder (DFR) program: Flock Safety Aerodome drones (through August 2026) and drones from the company Skydio, partnering with Axon, the multi-billion dollar police technology company behind tools like Tasers and AI-generated police reports. Drones create unique issues given their vantage point for capturing private property and unsuspecting civilians, as well as their capacity to make other technologies, like ALPRs, airborne.

Functional, Even Without Funding 

We’ve seen cities decide not to fund a tool, or run out of funding for it, only to have a company continue providing it in the hope that money will turn up. This happened in Fall River, Massachusetts, where the police department decided not to fund ShotSpotter’s $90,000 annual cost and its frequent false alarms, but continued using the system when the company provided free access. 


In May 2025, Denver's city council unanimously rejected a $666,000 contract extension for Flock Safety ALPR cameras after weeks of public outcry over mass surveillance data sharing with federal immigration enforcement. But Mayor Mike Johnston’s office allowed the cameras to keep running through a “task force” review, effectively extending the program even after the contract was voted down. In response, the Denver Taskforce to Reimagine Policing and Public Safety and Transforming Our Communities Alliance launched a grassroots campaign demanding the city “turn Flock cameras off now,” a reminder that when surveillance starts as a pilot or time‑limited contract, communities often have to fight not just to block renewals but to shut the systems off.

 Importantly, police technology companies are developing more features and subscription-based models, so what’s “free” today frequently results in taxpayers footing the bill later. 

Gifts from Police Foundations and Wealthy Donors

Police foundations and the wealthy have pushed surveillance-driven agendas in their local communities by donating equipment and making large monetary gifts, another means of acquiring these tools without public oversight or buy-in.

In Atlanta, the Atlanta Police Foundation (APF) attempted to use its position as a private entity to circumvent transparency. Following a court challenge from the Atlanta Community Press Collective and Lucy Parsons Labs, a Georgia court determined that the APF must comply with public records laws related to some of its actions and purchases on behalf of law enforcement.
In San Francisco, billionaire Chris Larsen has financially supported a supercharging of the city’s surveillance infrastructure, donating $9.4 million to fund the San Francisco Police Department’s (SFPD) Real-Time Investigation Center, where a menu of surveillance technologies and data come together to surveil the city’s residents. This move comes after the billionaire backed a ballot measure, which passed in March 2024, eroding the city’s surveillance technology law and allowing the SFPD free rein to use new surveillance technologies for a full year without oversight.

Free Tech for Federal Data Pipelines

Federal grants and Department of Homeland Security funding are another way surveillance technology appears to be free, only to lock municipalities into long‑term data‑sharing and recurring costs.

Through the Homeland Security Grant Program, which includes the State Homeland Security Program (SHSP) and the Urban Area Security Initiative (UASI), and Department of Justice programs like Byrne JAG, the federal government reimburses states and cities for "homeland security" equipment and software, including law‑enforcement surveillance tools, analytics platforms, and real‑time crime centers. Grant guidance and vendor marketing materials make clear that these funds can be used for automated license plate readers, integrated video surveillance and analytics systems, and centralized command‑center software—in other words, purchases framed as counterterrorism investments but deployed in everyday policing.

Vendors have learned to design products around this federal money, pitching ALPR networks, camera systems, and analytic platforms as "grant-ready" solutions that can be acquired with little or no upfront local cost. Motorola Solutions, for example, advertises how SHSP and UASI dollars can be used for "law enforcement surveillance equipment" and "video surveillance, warning, and access control" systems. Flock Safety, partnering with Lexipol, a company that writes use policies for law enforcement, offers a "License Plate Readers Grant Assistance Program" that helps police departments identify federal and state grants and tailor their applications to fund ALPR projects. 

Grant assistance programs let police chiefs fast‑track new surveillance: the paperwork is outsourced, the grant eats the upfront cost, and even when there is a formal paper trail, the practical checks from residents, councils, and procurement rules often get watered down or bypassed.

On paper, these systems arrive “for free” through a federal grant; in practice, they lock cities into recurring software, subscription, and data‑hosting fees that quietly turn into permanent budget lines—and a lasting surveillance infrastructure—as soon as police and prosecutors start to rely on them. In Santa Cruz, California, the police department explicitly sought to use a DHS-funded SHSP grant to pay for a new citywide network of Flock ALPR cameras at the city's entrances and exits, with local funds covering additional cameras. In Sumner, Washington, a $50,000 grant was used to cover the entire first year of a Flock system — including installation and maintenance — after which the city is on the hook for roughly $39,000 every year in ongoing fees. The free grant money opens the door, but local governments are left with years of financial, political, and permanent surveillance entanglements they never fully vetted.
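To make the math concrete, here is a rough back-of-the-envelope sketch using only the figures reported above for Sumner; the five-year horizon and the assumption that fees stay flat are illustrative assumptions, not terms from any contract or grant.

```python
# Rough sketch: how a grant-funded "free" first year becomes a recurring local cost.
# Figures are those reported for Sumner, WA; the 5-year horizon and flat fees are assumptions.
grant_year_one = 50_000      # grant covers installation, maintenance, and year-one fees
annual_fee_after = 39_000    # approximate ongoing fee the city pays in each later year

years = 5
local_cost = annual_fee_after * (years - 1)  # years 2 through 5 come out of the local budget
print(f"Grant-covered year one: ${grant_year_one:,}")
print(f"Local cost over {years} years: ${local_cost:,}")  # $156,000, none of it "free"
```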

The most dangerous cost of this "free" funding is not just budgetary; it is the way it ties local systems into federal data pipelines. Since 9/11, DHS has used these grant streams to build a nationwide network of at least 79–80 state and regional fusion centers that integrate and share data from federal, state, local, tribal, and private partners. Research shows that state fusion centers rely heavily on the DHS Homeland Security Grant Program (especially SHSP and UASI) to "mature their capabilities," with some centers reporting that 100 percent of their annual expenditures are covered by these grants. 

Civil rights investigations have documented how this funding architecture creates a backdoor channel for ICE and other federal agencies to access local surveillance data for their own purposes. A recent report by the Surveillance Technology Oversight Project (S.T.O.P.) describes ICE agents using a Philadelphia‑area fusion center to query the city’s ALPR network to track undocumented drivers in a self‑described sanctuary city.

Ultimately, federal grants follow the same script as trials and foundation gifts: what looks “free” ends up costing communities their data, their sanctuary protections, and their power over how local surveillance is used.

Protecting Yourself Against “Free” Technology

The most important protection against "free" surveillance technology is to reject it outright. Cities do not have to accept federal grants, vendor trials, or philanthropic donations. Saying no to "free" tech is not just a policy choice; it is a political power that local governments possess and can exercise. Communities and their elected officials can and should refuse surveillance systems that arrive through federal grants, vendor pilots, or private donations, regardless of how attractive the initial price tag appears. 

For those cities that have already accepted surveillance technology, the imperative is equally clear: shut it down. When a community has rejected use of a spying tool, the capabilities, equipment, and data collected from that tool should be shut off immediately. Full stop.

And for any surveillance technology that remains in operation, even temporarily, there must be clear rules: when and how equipment is used, how that data is retained and shared, who owns data and how companies can access and use it, transparency requirements, and consequences for any misuse and abuse. 

“Free” surveillance technology is never free. Someone profits or gains power from it. Police technology vendors, federal agencies, and wealthy donors do not offer these systems out of generosity; they offer them because surveillance serves their interests, not ours. That is the real cost of “free” surveillance.

Beryl Lipton

Open Letter to Tech Companies: Protect Your Users From Lawless DHS Subpoenas

1 day 20 hours ago

We are calling on technology companies like Meta and Google to stand up for their users by resisting the Department of Homeland Security's (DHS) lawless administrative subpoenas for user data. 

In the past year, DHS has consistently targeted people engaged in First Amendment activity. Among other things, the agency has issued subpoenas to technology companies to unmask or locate people who have documented ICE's activities in their community, criticized the government, or attended protests.   

These subpoenas are unlawful, and the government knows it. When a handful of users challenged a few of them in court with the help of ACLU affiliates in Northern California and Pennsylvania, DHS withdrew them rather than waiting for a decision. 


But it is difficult for the average user to fight back on their own. Quashing a subpoena is a fast-moving process that requires lawyers and resources. Not everyone can afford a lawyer on a moment’s notice, and non-profits and pro-bono attorneys have already been stretched to near capacity during the Trump administration.  

 That is why we, joined by the ACLU of Northern California, have asked several large tech platforms to do more to protect their users, including: 

  1.  Insist on court intervention and an order before complying with a DHS subpoena, because the agency has already proved that its legal process is often unlawful and unconstitutional;  
  2. Give users as much notice as possible when they are the target of a subpoena, so the user can seek help. While many companies have already made this promise, there are high-profile examples of it not happening—ultimately stripping users of their day in court;  
  3. Resist gag orders that would prevent companies from notifying their users that they are a target of a subpoena. 

We sent the letter to Amazon, Apple, Discord, Google, Meta, Microsoft, Reddit, Snap, TikTok, and X.

Recipients are not legally compelled to comply with administrative subpoenas absent a court order 

An administrative subpoena is an investigative tool available to federal agencies like DHS. Many times, these are sent to technology companies to obtain user data. Subpoenas cannot be used to obtain the content of communications, but they have been used to try to obtain basic subscriber information like name, address, IP address, length of service, and session times.

Unlike a search warrant, an administrative subpoena is not approved by a judge. If a technology company refuses to comply, an agency’s only recourse is to drop it or go to court and try to convince a judge that the request is lawful. That is what we are asking companies to do—simply require court intervention and not obey in advance. 

It is unclear how many administrative subpoenas DHS has issued in the past year. Subpoenas can come from many places—including civil courts, grand juries, criminal trials, and administrative agencies like DHS. Altogether, Google received 28,622 and Meta received 14,520 subpoenas in the first half of 2025, according to their transparency reports. The numbers are not broken out by type.   

DHS is abusing its authority to issue subpoenas 

In the past year, DHS has used these subpoenas to target protected speech. The following are just a few of the known examples. 

On April 1, 2025, DHS sent a subpoena to Google in an attempt to locate a Cornell PhD student in the United States on a student visa. The student was likely targeted because of his brief attendance at a protest the year before. Google complied with the subpoena without giving the student an opportunity to challenge it. While Google promises to give users prior notice, it sometimes breaks that promise to avoid delay. This must stop.   

In September 2025, DHS sent a subpoena and summons to Meta to try to unmask anonymous users behind Instagram accounts that tracked ICE activity in communities in California and Pennsylvania. The users—with the help of the ACLU and its state affiliates—challenged the subpoenas in court, and DHS withdrew the subpoenas before a court could make a ruling. In the Pennsylvania case, DHS tried to use legal authority that its own inspector general had already criticized in a lengthy report.

In October 2025, DHS sent Google a subpoena demanding information about a retiree who criticized the agency’s policies. The retiree had sent an email asking the agency to use common sense and decency in a high-profile asylum case. In a shocking turn, federal agents later appeared on that person’s doorstep. The ACLU is currently challenging the subpoena.  

Read the full letter here

Mario Trujillo

No One, Including Our Furry Friends, Will Be Safer in Ring's Surveillance Nightmare

1 day 22 hours ago

Amazon Ring’s Super Bowl ad offered a vision of our streets that should leave every person unsettled about the company’s goals for disintegrating our privacy in public.

The ad, disguised as a heartfelt effort to reunite the country’s lost dogs with their innocent owners, previewed the company’s vision of future surveillance of our streets: a world where biometric identification could be unleashed from consumer devices to identify, track, and locate anything — human, pet, or otherwise.

The ad for Ring’s “Search Party” feature highlighted the doorbell camera’s ability to scan footage across Ring devices in a neighborhood, using AI analysis to identify potential canine matches among the many personal devices within the network. 

Amazon Ring already integrates biometric identification, like face recognition, into its products via features like “Familiar Faces,” which depends on scanning the faces of those in sight of the camera and matching them against a list of pre-saved, pre-approved faces. It doesn’t take much to imagine Ring eventually combining these two features: face recognition and neighborhood searches.

Ring’s “Familiar Faces” feature could already run afoul of biometric privacy laws in some states, which require explicit, informed consent from individuals before a company can just run face recognition on someone. Unfortunately, not all states have similar privacy protections for their residents. 

Ring has a history of privacy violations, of enabling surveillance of innocent people and protesters, and of close collaboration with law enforcement, and EFF has spent years reporting on its many privacy problems.

The cameras, which many people buy and install to identify potential porch pirates or get a look at anyone that might be on their doorstep, feature microphones that have been found to capture audio from the street. In 2023, Ring settled with the Federal Trade Commission over the extensive access it gave employees to personal customer footage. At that time, just three years ago, the FTC wrote: “As a result of this dangerously overbroad access and lax attitude toward privacy and security, employees and third-party contractors were able to view, download, and transfer customers’ sensitive video data for their own purposes.”

The company has made law enforcement access a regular part of its business. As early as 2016, the company was courting police departments through free giveaways. The company provided law enforcement warrantless access to people’s footage, a practice it claimed to cut off in 2024. Not long after, though, the company established partnerships with major police technology companies Axon and Flock Safety to facilitate the integration of Ring cameras into police intelligence networks. These partnerships allow law enforcement to again request Ring footage directly from users without a warrant. This supplements the already wide-ranging apparatus of data and surveillance feeds now available to law enforcement.

The Search Party feature is turned on by default, meaning that Ring owners need to go into the app’s controls to change it. According to Amazon Ring’s instructions, this is how to disable it:

  1. Open the Ring app to the main dashboard.
  2. Tap the menu (☰).
  3. Tap Control Center.
  4. Select Search Party.
  5. Tap Disable Search for Lost Pets, then tap the blue Pet icon next to "Search for Lost Pets" to turn the feature off for each camera. (You can also tap "Disable Natural Hazards (Fire Watch)," or tap the blue Flame icon next to Natural Hazards (Fire Watch), to turn that feature on or off for each camera.)

The addition of AI-driven biometric identification is the latest entry in the company’s history of profiting off of public safety worries while disregarding individual privacy, and it turbocharges the dangers of allowing this to carry on. People need to reject this kind of disingenuous framing and recognize the potential end result: a scary overreach of the surveillance state designed to catch us all in its net.

Beryl Lipton

Coalition Urges California to Revoke Permits for Federal License Plate Reader Surveillance

2 days 1 hour ago
Group led by EFF and Imperial Valley Equity & Justice Asks Gov. Newsom and Caltrans Director to Act Immediately

SAN FRANCISCO – California must revoke permits allowing federal agencies such as Customs and Border Protection (CBP) and the Drug Enforcement Administration (DEA) to put automated license plate readers along border highways, a coalition led by the Electronic Frontier Foundation (EFF) and Imperial Valley Equity & Justice (IVEJ) demanded today.

In a letter to Gov. Gavin Newsom and California Department of Transportation (Caltrans) Director Dina El-Tawansy, the coalition notes that this invasive mass surveillance – automated license plate readers (ALPRs) often disguised as traffic barrels – puts both residents and migrants at risk of harassment, abuse, detention, and deportation.  

“With USBP (U.S. Border Patrol) Chief Greg Bovino reported to be returning to El Centro sector, after leading a brutal campaign against immigrants and U.S. citizens alike in Los Angeles, Chicago, and Minneapolis, it is urgent that your administration take action,” the letter says. “Caltrans must revoke any permits issued to USBP, CBP, and DEA for these surveillance devices and effectuate their removal.”

Coalition members signing the letter include the California Nurses Association; American Federation of Teachers Guild, Local 1931; ACLU California Action; Fight for the Future; Electronic Privacy Information Center; Just Futures Law; Jobs to Move America; Project on Government Oversight; American Friends Service Committee U.S./Mexico Border Program; Survivors of Torture, International; Partnership for the Advancement of New Americans; Border Angels; Southern California Immigration Project; Trust SD Coalition; Alliance San Diego; San Diego Immigrant Rights Consortium; Showing Up for Racial Justice San Diego; San Diego Privacy; Oakland Privacy; Japanese American Citizens League and its Florin-Sacramento Valley, San Francisco, South Bay, Berkeley, Torrance, and Greater Pasadena chapters; Democratic Socialists of America - San Diego; Center for Human Rights and Privacy; The Becoming Project Inc.; Imperial Valley for Palestine; Imperial Liberation Collaborative; Comité de Acción del Valle Inc.; CBFD Indivisible; South Bay People Power; and queercasa.

California law prevents state and local agencies from sharing ALPR data with out-of-state agencies, including federal agencies involved in immigration enforcement. However, USBP, CBP, and DEA are bypassing these regulations by installing their own ALPRs. 

EFF researchers have released a map of more than 40 of these covert ALPRs, believed to belong to federal agencies engaged in immigration enforcement, along highways in San Diego and Imperial counties. In response to a June 2025 public records request, Caltrans has released several documents showing CBP and DEA have applied for permits for ALPRs, with more expected as Caltrans continues to locate records responsive to the request.

“California must not allow Border Patrol and other federal agencies to use surveillance on our roadways to unleash violence and intimidation on San Diego and Imperial Valley residents,” the letter says. “We ask that your administration investigate and release the relevant permits, revoke them, and initiate the removal of these devices. No further permits for ALPRs or tactical checkpoints should be approved for USBP, CBP, or DEA.” 

"The State of California must not allow Border Patrol to exploit our public roads and bypass state law," said Sergio Ojeda, IVEJ’s Lead Community Organizer for Racial and Economic Justice Programs.  "It's time to stop federal agencies from installing hidden cameras that they use to track, target and harass our communities for travelling between Imperial Valley, San Diego and Yuma." 

For the letter: https://www.eff.org/document/coalition-letter-re-covert-alprs

For the map of the covert ALPRs: https://www.eff.org/covertALPRmap

For high-res images of two of the covert ALPRs: https://www.eff.org/node/111725

For more about ALPRs: https://sls.eff.org/technologies/automated-license-plate-readers-alprs 

 

Contact: Dave Maass, Director of Investigations, dm@eff.org
Josh Richman

Speaking Freely: Yazan Badran

2 days 1 hour ago

Interviewer: Jillian York

Yazan Badran is an assistant professor in international media and communication studies at the Vrije Universiteit Brussel, and a researcher at the Echo research group. His research focuses on the intersection between media, journalism, and politics, particularly in the MENA region and within its exilic and diasporic communities.

*This interview has been edited for length and clarity. 

Jillian York: What does free speech or free expression mean to you?

Yazan Badran: So I think there are a couple of layers to that question. There's a narrow conception of free speech that is related to, of course, your ability to think about the world.

And that also depends on having the resources to be able to think about the world, to having resources of understanding about the world, having resources to translate that understanding into thoughts and analysis yourself, and then being able to express that in narratives about yourself with others in the world. And again, that also requires resources of expression, right?

So there's that layer, which means that it's not simply the absence of constraints around your expression and around your thinking, but actually having frameworks that activate you expressing yourself in the world. So that's one element of free expression or free speech, or however you want to call it. 

But I feel that remains too narrow if we don't account also for the counterpart, which is having frameworks that listen to you as you express yourself into the world, right? Having people, institutions, frameworks that are actively also listening, engaging, recognizing you as a legitimate voice in the world. And I think these two have to come together in any kind of broad conception of free speech, which entangles you then in a kind of ethical relationship that you have to listen to others as well, right? It becomes a mutual responsibility from you towards the other, towards the world, and for the world towards you, which also requires access to resources and access to platforms and people listening to you.

So I think these two are what I, if I want to think of free speech and free expression, I would have to think about these two together. And most of the time there is a much narrower focus on the first, and somewhat neglecting the second, I think.

JY: Yeah, absolutely. Okay, now I have to ask, what is an experience that shaped these views for you?

YB: I think two broad experiences. One is the…let's say, the 2000s, the late 2000s, so early 2010 and 2011, where we were all part of this community that was very much focused on expression and on limiting the kind of constraints around expression and thinking of tools and how resources can be brought towards that. And there were limits to where that allowed us to go at a certain point.

And I think the kind of experiences of the Arab uprisings and what happened afterwards and the kind of degeneration across the worlds in which we lived kind of became a generative ground to think of how that experience went wrong or how that experience fell short.

And then building on that, I think when I started doing research on journalism and particularly on exiled journalists and thinking about their practice and their place in the world and the fact that in many ways there were very little constraints on what they could do and what they could voice and what they could express, et cetera.

Not that there are no constraints, there are always constraints, but the nature of the constraints was different - they were of the order of listening; who is listening to this? Who is on the other side? Who are you engaged in a conversation with? And that was, from speaking to them, a real kind of anxiety that came through to me.

JY: I think you're sort of alluding to theory of change…

YB: Yes, to some extent, but also to…when we think about our contribution into the world, to what kind of the normative framework we imagine. As people who think about all of these structures that circulate information and opinion and expressions, et cetera, there is often a normative focus, where there should be, about opening up constraints around expression and bringing resources to bear for expression, and we don't think enough of how these structures need also to foster listening and to foster recognition of these expressions.

And that is the same with, when we think about platforms on the internet and when we think about journalism, when we think about teaching… For example, in my field, when we think about academic research, I think you can bring that framework in different places where expression is needed and where expression is part of who we are. Does that make sense?

JY:  Absolutely. It absolutely makes sense. I think about this all the time. I'm teaching now too, and so it's very, very valuable. Okay, so let's shift a little bit. You're from Syria. You've been in Brussels for a long time. You were in Japan in between. You have a broad worldview, a broad perspective. Let’s talk about press freedom.

YB: Yeah, I've been thinking about this because, I mean, I work on journalism and I'm trying to do some work on Syria and what is happening in Syria now. And I feel there are times where people ask me about the context for journalistic work in Syria. And the narrow answer and the clear answer is that we've never had more freedom to do journalism in the country, right? And there are many reasons. Part of it is that this is a new regime that perhaps doesn't still have complete control over the ground. There are differentiated contexts where in some places it's very easy to go out and to access information and to speak to people. In other places, it's less easy, it's more dangerous, etc. So it's differentiated and it's not the same everywhere.

But it's clear that journalists come out and in from Syria. They can do their job relatively unmolested, which is a massive kind of change, contrast to the last thirteen or fourteen years where Syria was an information black hole. You couldn't do anything.

But that remains somewhat narrow in thinking about journalism in Syria. What is journalism about Syria in this context? What kind of journalism do we need to be thinking about? In a place that is in, you know, ruins, if not material destruction, then economic and societal disintegration, et cetera. So there are, I think, two elements. Sure, you can do journalism, but what kind of journalism is being done in Syria? I feel that we have to be asking a broader question about what is the role of information now more broadly in Syria? 

And that is a more difficult question to answer, I feel. Or a more difficult question to answer positively. Because it highlights questions about who has access to the means of journalism now in Syria. What are they doing with it? Who has access to the sources, and can provide actual understanding about the political or economic developments that are happening in the country? Very few people who have genuine understanding of the processes are going into building a new regime, a new state. In general, we have very, very little access. There are few avenues to participate and gain access to what is happening there.

So sure, you can go on the ground, you can take photos, you can speak to people, but in terms of participating in that broader nation-building exercise that is happening; this is happening at a completely different level to the places that we have access to. And with few exceptions, journalism as practiced now is not bringing us closer to these spaces. 

In a narrow sense, it's a very exciting time to be looking at experiments in doing journalism in Syria, to also be seeing the interaction between international journalists and local journalists, and also the kind of tensions and collaborations and discussion around structural inequalities between them; especially from a researcher’s perspective. But it remains very, very narrow. As for the massive story, which is a complete revolution in the identity of the country, in its geopolitical arrangement, in its positioning in the world, we have no access to it whatsoever. This is happening well over our heads—we are almost bystanders.

JY:  That makes sense. I mean, it doesn't make sense, but it makes sense. What role does the internet and maybe even specifically platforms or internet companies play in Syria? Because with sanctions lifted, we now have access to things that were not previously available. I know that the app stores are back, although I'm getting varied reports from people on the ground about how much they can actually access, although people can download Signal now, which is good. How would you say things have changed online in the past year?

YB:  In the beginning, platforms, particularly Facebook, and it's still really Facebook, were the main sphere of information in the country. And to a large extent, it remains the main sphere where some discussions happen within the country.

These are old networks that were reactivated in some ways, but also public spheres that were so completely removed from each other that opened up on each other after December. So you had really almost independent spheres of activity and discussion. Between areas that were controlled by the regime, areas that were controlled by the opposition, which kind of expanded to areas of Syrian refugees and diaspora outside.

And these just collapsed on each other after 8th of December with massive chaos, massive and costly chaos in some ways. The spread of disinformation, organic disinformation, in the first few months was mind-boggling. I think by now there's a bit of self-regulation, but also another movement of siloing, where you see different clusters hardening as well. So that kind of collapse over the first few months didn't last very long.

You start having conversations in isolation of each other now. And I'm talking mainly about Facebook, because that is the main network, that is the main platform where public discussions are happening. Telegram was the public infrastructure of the state for a very long time, for the first six months. Basically, state communication happened through Telegram, through Telegram channels, also causing a lot of chaos. But now you have a bit more stability in terms of having a news agency. You have the television, the state television. So the importance of Telegram has waned off, but it's still a kind of parastructure of state communication, it remains important.

I think more structurally, these platforms are basically the infrastructure of information circulation because of the fact that people don't have access to electricity, for example, or for much of the time they have very low access to bandwidth. So having Facebook on their phone is the main way to keep in touch with things. They can't turn on the television, they can't really access internet websites very easily. So Facebook becomes materially their access to the world. Which comes with all of the baggage that these platforms bring with them, right? The kind of siloing, the competition over attention, the sensationalism, these clustering dynamics of these networks and their algorithms.

JY: Okay, so the infrastructural and resource challenges are real, but then you also have the opening up of the internet for the first time in many, many years, or ever, really. And as far as I understand from what friends who’ve been there have reported, nothing is being blocked yet. So what impact do you see or foresee that having on society as people get more and more online? I know a lot of people were savvy, of course, and got around censorship, but not everyone, right?

YB: No, absolutely, absolutely not everyone. Not everyone has the kind of digital literacy to understand what going online means, right? Which accounts for one thing, the avalanche of fake information and disinformation that is now Syria, basically.

JY: It's only the second time this has happened. I mean, Tunisia is the only other example I can think of where the internet just opened right up.

YB: Without having gateways and infrastructure that can kind of circulate and manage and curate this avalanche of information. While at the same time, you have a real disintegration in the kind of social institutions that could ground a community. So you have really a perfect storm of a thin layer of digital connectivity, for a lot of people who didn't have access to even that thin layer, but it's still a very thin layer, right? You're connecting from your old smartphone to Facebook. You're getting texts, et cetera, and perhaps you're texting with the family over WhatsApp. And a real collapse of different societal institutions that also grounded you with others, right? The education system, different clubs and different neighborhoods, small institutions that brought different communities together, the army, for example, universities, all of these have been disrupted over the past year in profound ways and along really communitarian lines as well. I don't know the kind of conditions that this creates, the combination of these two. But it doesn't seem like it's a positive situation or a positive dynamic.

JY:  Yeah, I mean, it makes me think of, for example, Albania or other countries that opened up after a long time and then all of a sudden just had this freedom.

YB: But still combined, I mean, that is one thing, the opening up and the avalanche, and that is a challenge. But it is a challenge that perhaps within a settled society with some institutions that you can turn to, through which you can regulate this, through which you can have countervailing forces and countervailing forums for… that’s one thing. But with the collapse of material institutions that you might have had, it's really creating a bewildering world for people, where you turn back and you have your family that maybe lives two streets away, and this is the circle in which you move, or you feel safe to move.

Of course, for certain communities, right? That is not the condition everywhere. But that is part of what is happening. There's a real sense of bewilderment in the kind of world that you live in. Especially in areas that used to be controlled by the regime, where everything that you've known in terms of state authority, from the smallest, the lowliest police officer in your neighborhood, to the bureaucrats that you would talk to, has changed, or your relationship to them has fundamentally changed. There's a real upheaval in your world at different levels. And, you know, you're faced with a swirling world of information that you can't make sense out of.

JY: I do want to put you on the spot with a question that popped into my head, which is, I often ask people about regulation and depending on where they're working in the world, especially like when I'm talking to folks in Africa and elsewhere. In this case, though, it's a nation-building challenge, right? And so—you're looking at all of these issues and all of these problems—if you were in a position to create press or internet regulation from the ground up in Syria, what do you feel like that should look like? Are there models that you would look to? Are there existing structures or is there something new or?

YB: I think maybe I don't have a model, but maybe a couple of entry points that you would kind of use to think of what model of regulation you want. One is to understand that the first challenge is at the level of nation building. Of really recreating a national identity or reimagining a national identity, both in terms of a kind of shared imaginary of what these people are to each other and collectively represent, but also, at the very hyper-local level, in terms of how these communities can go back to living together.

And I think that would have to shape how you would approach, say, regulation. I mean, around the Internet, that's a more difficult challenge. But at least in terms of your national media, for example, what is the contribution of the state through its media arm? What kind of national media do you want to put into place? What kind of structures allow for really organic participation in this project or not, right? But also at the level of how do you regulate the market for information in a new state with that level of infrastructural destruction, right? Of the economic circuit in which these networks are in place. How do you want to reconnect Syria to the world? In what ways? For what purposes?

And how do you connect all of these steps to open questions around identity and around that process of national rebuilding, and activating participation in that project, right? Rather than use them to foreclose these questions.

There are also certain challenges that you have in Syria that are endogenous, that are related to the last 14 years, to the societal disintegration and geographic disintegration and economic disintegration, et cetera. But on top of that, of course, we live in an information environment that is, at the level of the global information environment, also structurally cracking down in terms of how we engage with information, how we deal with journalism, how we deal with questions of difference. These are problems that go well beyond Syria, right? These are difficult issues that we don't know how to tackle here in Brussels or in the US, right? And so there's also an interplay between these two. There's an interplay between the fact that even here, we are having to come to terms with some of the myths around liberalism, around journalism, the normative model of journalism, of how to do journalism, right? I mean, we have to come to terms with it. The last two years—of the Gaza genocide—didn't happen in a vacuum. It was earth shattering for a lot of these pretensions around the world that we live in. Which I think is a bigger challenge, but of course it interacts with the kind of challenges that you have in a place like Syria.

JY: To what degree do you feel that the sort of rapid opening up and disinformation and provocations online and offline are contributing to violence?

YB: I think they're at the very least exacerbating the impact of that violence. I can't make claims about how much they're contributing, though I think they are contributing. I think there are clear episodes in which you could directly link the circulation of misinformation online to certain episodes of violence, like what happened in Jaramana before the massacre of the Druze. So a couple of weeks before, there was this piece of disinformation that led to actual violence and that set the stage for the massive violence later on. During the massacres on the coast, you could also link the panic and the disinformation around the attacks of former regime officers, and the effects of that, to the mobilization that happened. The scale of the violence is linked to the circulation of panic and disinformation. So there is a clear contribution. But I think the greater influence is how it exacerbates what happens after that violence, how it exacerbates the depth, for example, of the divorce between the Druze population of Sweida and the rest of Syria after the massacre. That is tangible. And that is embedded in the kind of information environment that we have. There are different kinds of material causes for it as well. There is real structural conflict there. But the kind of ideological, discursive, and affective divorce that has happened over the past six months, that is a product of the information environment that we have.

JY: You are very much a third country, fourth country kid at this point. Like me, you connected to this global community through Global Voices at a relatively young age. In what ways do you feel that global experience has influenced your thinking and your work around these topics, around freedom of expression? How has it shaped you?

YB: I think in a profound way. What it does is it makes you to some extent immune from certain nationalist logics in thinking about the world, right? You have stakes in so many different places. You've built friendships, you've built connections, you've left parts of you in different places. And that is also certainly related to certain privileges, but it also means that you care about different places, that you care about people in many different places. And that shapes the way that you think about the world - it produces commitments that are diffused, complex and at times even contradictory, and it forces you to confront these contradictions. You also have experience, real experience, in how much richer the world is if you move outside of these narrow, more nationalist, more chauvinistic ways of thinking about the world. And you also have a kind of direct lived experience of the complexity of global circulation in the world and the fact that, at a high level, it doesn't produce a homogenized culture; it produces many different things, and they're not all equal and they're not all good, but it also leaves spaces for you to contribute to it, to engage with it, to actively try to play within the little spaces that you have.

JY: Okay, here’s my final question that I ask everyone. Do you have a free speech hero? Or someone who's inspired you?

YB: I mean, there are people whose sacrifices humble you. Many of them we don't know by name. Some of them we do know by name. Some of them are friends of ours. I keep thinking of Alaa [Abd El Fattah], who was just released from prison—I was listening to his long interview with Mada Masr (in Arabic) yesterday, and it’s…I mean…is he a hero? I don’t know but he is certainly one of the people I love at a distance and who continues to inspire us.

JY: I think he’d hate to be called a hero.

YB: Of course he would. But in some ways, his story is a tragedy that is inextricable from the drama of the last fifteen years, right? It’s not about turning him into a symbol. He's also a person and a complex person and someone of flesh and blood, etc. But he's also someone who can articulate in a very clear, very simple way, the kind of sense of hope and defeat that we all feel at some level and who continues to insist on confronting both these senses critically and analytically.

JY: I’m glad you said Alaa. He’s someone I learned a lot from early on, and there’s a lot of his words and thinking that have guided me in my practice. 

YB: Yeah, and his story is tragic in the sense that it kind of highlights that in the absence of any credible road towards collective salvation, we're left with little moments of joy when there is a small individual salvation of someone like him. And that these are the only little moments of genuine joy that we get to exercise together. But in terms of a broader sense of collective salvation, I think in some ways our generation has been profoundly and decisively defeated.

JY: And yet the title of his book is “You Have Not Yet Been Defeated.”

YB: Yeah, it's true. It's true.

JY: Thank you Yazan for speaking with me.

Jillian C. York

EFFecting Change: Get the Flock Out of Our City

2 days 20 hours ago

Flock contracts have quietly spread to cities across the country. But Flock ALPRs (automated license plate readers) erode civil liberties from the moment they're installed. While officials claim these cameras keep neighborhoods safe, the evidence tells a different story. The data reveals how Flock has enabled surveillance of people seeking abortions, protesters exercising First Amendment rights, and communities targeted by discriminatory policing.

This is exactly why cities are saying no. From Austin to Cambridge to small towns across Texas, jurisdictions are rejecting Flock contracts altogether, proving that surveillance isn't inevitable—it's a choice.

Join EFF's Sarah Hamid and Andrew Crocker along with Reem Suleiman from Fight for the Future and Kate Bertash from Rural Privacy Coalition to explore what's happening as Flock contracts face growing resistance across the U.S. We'll break down the legal implications of the data these systems collect, examine campaigns that have successfully stopped Flock deployments, and discuss the real-world consequences for people's privacy and freedom. The conversation will be followed by a live Q&A. 

EFFecting Change Livestream Series:
Get the Flock Out of Our City
Thursday, February 19th
12:00 PM - 1:00 PM Pacific
This event is LIVE and FREE!



Accessibility

This event will be live-captioned and recorded. EFF is committed to improving accessibility for our events. If you have any accessibility questions regarding the event, please contact events@eff.org.

Event Expectations

EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full Event Expectations.

Upcoming Events

Want to make sure you don’t miss our next livestream? Here’s a link to sign up for updates about this series: eff.org/ECUpdates. If you have a friend or colleague that might be interested, please join the fight for your digital rights by forwarding this link: eff.org/EFFectingChange. Thank you for helping EFF spread the word about privacy and free expression online. 

Recording

We hope you and your friends can join us live! If you can't make it, we’ll post the recording afterward on YouTube and the Internet Archive!

Melissa Srago

The Internet Still Works: Yelp Protects Consumer Reviews

2 days 20 hours ago

Section 230 helps make it possible for online communities to host user speech: from restaurant reviews, to fan fiction, to collaborative encyclopedias. But recent debates about the law often overlook how it works in practice. To mark its 30th anniversary, EFF is interviewing leaders of online platforms about how they handle complaints, moderate content, and protect their users’ ability to speak and share information.

Yelp hosts millions of reviews written by internet users about local businesses. Most reviews are positive, but over the years, some businesses have tried to pressure Yelp to remove negative reviews, including through legal threats. Since its founding more than two decades ago, Yelp has fought major legal battles to defend reviewers’ rights and preserve the legal protections that allow consumers to share honest feedback online.

Aaron Schur is General Counsel at Yelp. He joined the company in 2010 as one of its first lawyers and has led its litigation strategy for more than a decade, helping secure court decisions that strengthened legal protections for consumer speech. He was interviewed by Joe Mullin, a policy analyst on EFF's Activism Team. 

Joe Mullin: How would you describe Section 230 to a regular Yelp user who doesn’t know about the law?   

Aaron Schur: I'd say it is a simple rule that, generally speaking, when content is posted online, any liability for that content is with the person that created it, not the platform that is displaying it. That allows Yelp to show your review and keep it up if a business complains about it. It also means that we can develop ways to highlight the reviews we think are most helpful and reliable, and mitigate fake reviews in a way, without creating liability for Yelp, because we're allowed to host third party content.

The political debate around Section 230 often centers around the behavior of companies, especially large companies. But we rarely hear about users, even though the law also applies to users. What is the user story that is getting lost? 

Section 230 at heart protects users. It enables a diversity of platforms and content moderation practices—whether it's reviews on Yelp, videos on another platform, whatever it may be. 

Without Section 230, platforms would face heavy pressure to remove consumer speech when we’re threatened with legal action—and that harms users, directly. Their content gets removed. It also harms the greater number of users who would access that content. 

The focus on the biggest tech companies, I think, is understandable but misplaced when it comes to Section 230. We have tools that exist to go after dominant companies, both at the state and the federal level, and Congress could certainly consider competition-based laws—and has, over the last several years. 

Tell me about the editorial decisions that Yelp makes regarding the highlighting of reviews, and the weeding out of reviews that might be fake.  

Yelp is a platform where people share their experiences with local businesses, government agencies, and other entities. People come to Yelp, by the millions, to learn about these places.

With traffic like that come incentives for bad actors to game the system. Some unscrupulous businesses try to create fake reviews, or compensate people to write reviews, or ask family and friends to write reviews. Those reviews will be biased in a way that won’t be transparent. 

Yelp developed an automated system to highlight reviews we find most trustworthy and helpful. Other reviews may be placed in a “not recommended” section where they don’t affect a business’s overall rating, but they’re still visible. That helps us maintain a level playing field and keep user trust. 

Tell me what your process around complaints about user reviews looks like.

We have a reporting function for reviews. Those reports get looked at by an actual human, who evaluates the review and looks at data about it to decide whether it violates our guidelines. 

We don't remove a review just because someone says it's “wrong,” because we can't litigate the facts in your review. If someone says “my pizza arrived cold,” and the restaurant says, no, the pizza was warm—Yelp is not in a position to adjudicate that dispute. 

That's where Section 230 comes in. It says Yelp doesn’t have to [decide who’s right]. 

What other types of moderation tools have you built? 

Any business, free of charge, can respond to a review, and that response appears directly below it. They can also message users privately. We know when businesses do this, it’s viewed positively by users.

We also have a consumer alert program, where members of the public can report businesses that may be compensating people for positive reviews—offering things like free desserts or discounted rent. In those cases, we can place an alert on the business’s page and link to the evidence we received. We also do this when businesses make certain types of legal threats against users.

It’s about transparency. If a business’s rating is inflated, because that business is threatening reviewers who rate less than five stars with a lawsuit, consumers have a right to know what’s happening. 

How are international complaints, where Section 230 doesn’t come into play, different? 

We have had a lot of matters in Europe, in particular in Germany. It’s a different system there—it’s notice-and-takedown. They have a line of cases that require review sites to basically provide proof that the person was a customer of the business. 

If a review was challenged, we would sometimes ask the user for documentation, like an invoice, which we would redact before providing it. Often, they would do that, in order to defend their own speech online. Which was surprising to me! But they wouldn’t always—which shows the benefit of Section 230. In the U.S., you don’t have this back-and-forth that a business can leverage to get content taken down. 

And invariably, the reviewer was a customer. The business was just using the system to try to take down speech. 

Yelp has been part of some of the most important legal cases around Section 230, and some of those didn’t exist when we spoke in 2012. What happened in the Hassel v. Bird case, and why was that important for online reviewers?

Hassel v. Bird was a case where a law firm got a default judgment against an alleged reviewer, and the court ordered Yelp to remove the review—even though Yelp had not been a party to the case. 

We refused, because the order violated Section 230, due process, and Yelp’s First Amendment rights as a publisher. But the trial court and the appeal court both ruled against us, allowing a side-stepping of Section 230. 

The California Supreme Court ultimately reversed those rulings, and recognized that plaintiffs cannot accomplish indirectly [by suing a user and then ordering a platform to remove content] what they could not accomplish directly by suing the platform itself.

We spoke to you in 2012, and the landscape has really changed. Section 230 is really under attack in a way that it wasn’t back then. From your vantage point at Yelp, what feels different about this moment? 

The biggest tech companies got even bigger, and even more powerful. That has made people distrustful and angry—rightfully so, in many cases. 

When you read about the attacks on 230, it’s really politicians calling out Big Tech. But what is never mentioned is little tech, or “middle tech,” which is how Yelp bills itself. If 230 is weakened or repealed, it’s really the biggest companies, the Googles of the world, that will be able to weather it better than smaller companies like Yelp. They have more financial resources. It won’t actually accomplish what the legislators are setting out to accomplish. It will have unintended consequences across the board. Not just for Yelp, but for smaller platforms. 

This interview was edited for length and clarity.

Joe Mullin

The Internet Still Works: Wikipedia Defends Its Editors

2 days 21 hours ago

Section 230 helps make it possible for online communities to host user speech: from restaurant reviews, to fan fiction, to collaborative encyclopedias. But recent debates about the law often overlook how it works in practice. To mark its 30th anniversary, EFF is interviewing leaders of online platforms about how they handle complaints, moderate content, and protect their users’ ability to speak and share information. 

A decade ago, Wikimedia Foundation, the nonprofit that operates Wikipedia, received 304 requests to alter or remove content over a two-year period, not including copyright complaints. In 2024 alone, it received 664 such takedown requests. Only four were granted. As complaints over user speech have grown, Wikimedia has expanded its legal team to defend the volunteer editors who write and maintain the encyclopedia. 

Jacob Rogers is Associate General Counsel at the Wikimedia Foundation. He leads the team that deals with legal complaints against Wikimedia content and its editors. Rogers also works to preserve the legal protections, including Section 230, that make a community-governed encyclopedia possible. 

Joe Mullin: What kind of content do you think would be most in danger if Section 230 was weakened? 

Jacob Rogers: When you're writing about a living person, if you get it wrong and it hurts their reputation, they will have a legal claim. So that is always a concentrated area of risk. It’s good to be careful, but I think if the liability regime were less protective, people could get to be too careful—so careful they couldn’t write important public information. 

Current events and political history would also be in danger. Writing about images of Muhammad has been a flashpoint in different countries, because depictions are religiously sensitive and controversial in some contexts. There are different approaches to this in different languages. You might not think that writing about the history of art in your country 500 years ago would get you into trouble—but it could, if you’re in a particular country, and it’s a flash point. 

Writing about history and culture matters to people. And it can matter to governments, to religions, to movements, in a way that can cause people problems. That’s part of why protecting editors’ pseudonymity and their ability to work on these topics is so important. 

If you had to describe to a Wikipedia user what Section 230 does, how would you explain it to them? 

If there was nothing—no legal protection at all—I think we would not be able to run the website. There would be too many legal claims, and the potential damages of those claims could bankrupt the company. 

Section 230 protects the Wikimedia Foundation, and it allows us to defer to community editorial processes. We can let the user community make those editorial decisions, and figure things out as a group—like how to write biographies of living persons, and what sources are reliable. Wikipedia wouldn’t work if it had centralized decision making. 

What does a typical complaint look like, and how does the complaint process look? 

In some cases, someone is accused of a serious crime and there’s a debate about the sources. People accused of certain types of wrongdoing, or scams. There are debates about peoples’ politics, where someone is accused of being “far-right” or “far-left.” 

The first step is community dispute resolution. At the top of every Wikipedia article there’s a button that translates to “talk.” If you click it, that gives you space to discuss how to write the article. When editors get into a fight about what to write, they should stop and discuss it with each other first. 

If page editors can’t resolve a dispute, third-party editors can come in, or ask for a broader discussion. If that doesn’t work, or there’s harassment, we have Wikipedia volunteer administrators, elected by their communities, who can intervene. They can ban people temporarily, to cool off. When necessary, they can ban users permanently. In serious cases, arbitration committees make final decisions. 

And these community dispute processes we’ve discussed are run by volunteers, no Wikimedia Foundation employees are involved? Where does Section 230 come into play?

That’s right. Section 230 helps us, because it lets disputes go through that community process. Sometimes someone’s edits get reversed, and they write an angry letter to the legal department. If we were liable for that, we would have the risk of expensive litigation every time someone got mad. Even if their claim is baseless, it’s hard to make a single filing in a U.S. court for less than $20,000. There’s a real “death by a thousand cuts” problem, if enough people filed litigation. 

Section 230 protects us from that, and allows for quick dismissal of invalid claims. 

When we're in the United States, that's really the end of the matter. There’s no way to bypass the community with a lawsuit. 

How does dealing with those complaints work in the U.S.? And how is it different abroad? 

In the US, we have Section 230. We’re able to say, go through the community process, and try to be persuasive. We’ll make changes, if you make a good persuasive argument! But the Foundation isn’t going to come in and change it because you made a legal complaint. 

But in the EU, they don’t have Section 230 protections. Under the Digital Services Act, once someone claims your website hosts something illegal, they can go to court and get an injunction ordering us to take the content down. If we don’t want to follow that order, we have to defend the case in court. 

In one German case, the court essentially said, “Wikipedians didn’t do good enough journalism.” The court said the article’s sources weren’t strong enough. The editors used industry trade publications, and the court said they should have used something like German state media, or top newspapers in the country, not a “niche” publication. We disagreed with that. 

What’s the cost of having to go to court regularly to defend user speech? 

Because the Foundation is a mission-driven nonprofit, we can take on these defenses in a way that’s not always financially sensible, but is mission sensible. If you were focused on profit, you would grant a takedown. The cost of a takedown is maybe one hour of a staff member’s time. 

We can selectively take on cases to benefit the free knowledge mission, without bankrupting the company. To do litigation in the EU costs something on the order of $30,000 for one hearing, to a few hundred thousand dollars for a drawn-out case.

I don’t know what would happen if we had to do that in the United States. There would be a lot of uncertainty. One big unknown is—how many people are waiting in the wings for a better opportunity to use the legal system to force changes on Wikipedia? 

What does the community editing process get right that courts can get wrong? 

Sources. Wikipedia editors might cite a blog because they know the quality of its research. They know what's going into writing that. 

It can be easy sometimes for a court to look at something like that and say, well, this is just a blog, and it’s not backed by a university or institution, so we’re not going to rely on it. But that's actually probably a worse result. The editors who are making that consideration are often getting a more accurate picture of reality. 

Policymakers who want to limit or eliminate Section 230 often say their goal is to get harmful content off the internet, and fast. What do you think gets missed in the conversation about removing harmful content? 

One is: harmful to whom? Every time people talk about “super fast tech solutions,” I think they leave out academic and educational discussions. Everyone talks about how there’s a terrorism video, and it should come down. But there’s also news and academic commentary about that terrorism video. 

There are very few shared universal standards of harm around the world. Everyone in the world agrees, roughly speaking, on child protection, and child abuse images. But there’s wild disagreement about almost every other topic. 

If you do take down something to comply with the UK law, it’s global. And you’ll be taking away the rights of someone in the US or Australia or Canada to see that content. 

This interview was edited for length and clarity. EFF interviewed Wikimedia attorney Michelle Paulson about Section 230 in 2012.

Joe Mullin

On Its 30th Birthday, Section 230 Remains The Lynchpin For Users’ Speech

3 days ago

For thirty years, internet users have benefited from a key federal law that allows everyone to express themselves, find community, organize politically, and participate in society. Section 230, which protects internet users’ speech by protecting the online intermediaries we rely on, is the legal support that sustains the internet as we know it.

Yet as Section 230 turns 30 this week, there are bipartisan proposals in Congress to either repeal or sunset the law. These proposals seize upon legitimate concerns with the harmful and anti-competitive practices of the largest tech companies, but then misdirect that anger toward Section 230.

But rolling back or eliminating Section 230 will not stop invasive corporate surveillance that harms all internet users. Killing Section 230 won’t end the dominance of the current handful of large tech companies—it would cement their monopoly power.

The current proposals also ignore a crucial question: what legal standard should replace Section 230? The bills provide no answer, refusing to grapple with the tradeoffs inherent in making online intermediaries liable for users’ speech.

This glaring omission shows what these proposals really are: grievances masquerading as legislation, not serious policy, especially because the speech problems with alternatives to Section 230’s immunity are readily apparent, both in the U.S. and around the world. Experience shows that those systems result in more censorship of internet users’ lawful speech.

Let’s be clear: EFF defends Section 230 because it is the best available system to protect users’ speech online. By immunizing intermediaries for their users’ speech, Section 230 benefits users. Services can distribute our speech without filters, pre-clearance, or the threat of dubious takedown requests. Section 230 also directly protects internet users when they distribute other people’s speech online, such as when they reshare another user’s post or host a comment section on their blog.

It was the danger of losing the internet as a forum for diverse political discourse and culture that led to the law in 1996. Congress created Section 230’s limited civil immunity  because it recognized that promoting more user speech outweighed potential harms. Congress decided that when harmful speech occurs, it’s the speaker that should be held responsible—not the service that hosts the speech. The law also protects social platforms when they remove posts that are obscene or violate the services’ own standards. And Section 230 has limits: it does not immunize services if they violate federal criminal laws.

Section 230 Alternatives Would Protect Less Speech

With so much debate around the downsides of Section 230, it’s worth considering: What are some of the alternatives to immunity, and how would they shape the internet?

The least protective legal regime for online speech would be strict liability. Here, intermediaries always would be liable for their users’ speech—regardless of whether they contributed to the harm, or even knew about the harmful speech. It would likely end the widespread availability and openness of social media and web hosting services we’re used to. Instead, services would not let users speak without vetting the content first, via upload filters or other means. Small intermediaries with niche communities may simply disappear under the weight of such heavy liability.

Another alternative: Imposing legal duties on intermediaries, such as requiring that they act “reasonably” to limit harmful user content. This would likely result in platforms monitoring users’ speech before distributing it, and being extremely cautious about what they allow users to say. That inevitably would lead to the removal of lawful speech—probably on a large scale. Intermediaries would not be willing to defend their users’ speech in court, even if it is entirely lawful. In a world where any service could be easily sued over user speech, only the biggest services would survive. They’re the ones that would have the legal and technical resources to weather the flood of lawsuits.

Another option is a notice-and-takedown regime, like what exists under the Digital Millennium Copyright Act. That will also result in takedowns of legitimate speech. And there’s no doubt such a system will be abused. EFF has documented how the DMCA leads to widespread removal of lawful speech (https://www.eff.org/takedowns) based on frivolous copyright infringement claims. Replacing Section 230 with a takedown system will invite similar behavior, and powerful figures and government officials will use it to silence their critics.

The closest alternative to Section 230’s immunity provides protections from liability until an impartial court has issued a full and final ruling that user-generated content is illegal, and ordered that it be removed. These systems ensure that intermediaries will not have to cave to frivolous claims. But they still leave open the potential for censorship because intermediaries are unlikely to fight every lawsuit that seeks to remove lawful speech. The cost of vindicating lawful speech in court may be too high for intermediaries to handle at scale.

By contrast, immunity takes the variable of whether an intermediary will stand up for their users’ speech out of the equation. That is why Section 230 maximizes the ability for users to speak online.

In some narrow situations, Section 230 may leave victims without a legal remedy. Proposals aimed at those gaps should be considered, though lawmakers should pay careful attention that in vindicating victims, they do not broadly censor users’ speech. But those legitimate concerns are not the criticisms that Congress is levying against Section 230.

EFF will continue to fight for Section 230, as it remains the best available system to protect everyone’s ability to speak online.

Aaron Mackey

RIP Dave Farber, EFF Board Member and Friend

3 days ago

We are sad to report the passing of longtime EFF Board member Dave Farber. Dave was 91 and had lived in Tokyo since age 83, where he was the Distinguished Professor at Keio University and Co-Director of the Keio Cyber Civilization Research Center (CCRC). Known as the Grandfather of the Internet, Dave made countless contributions to the internet, both directly and through his support for generations of students.  

Dave was the longest-serving EFF Board member, having joined in the early 1990s, before the creation of the World Wide Web or the widespread adoption of the internet.  Throughout the growth of the internet and the corresponding growth of EFF, Dave remained a consistent, thoughtful, and steady presence on our Board.  Dave always gave us credibility as well as ballast.  He seemed to know and be respected by everyone who had helped build the internet, having worked with or mentored too many of them to count.  He also had an encyclopedic knowledge of the internet's technical history. 

From the beginning, Dave saw both the promise and the danger to human rights that would come with the spread of the internet around the world. He committed to helping make sure that the rights and liberties of users and developers, especially the open source community, were protected. He never wavered in that commitment.  Ever the teacher, Dave was also a clear explainer of internet technologies and basically unflappable.  

Dave also managed the Interesting People email list, which provided news and connection for so many internet pioneers and served as a model for how people from disparate corners of the world could engage in a rolling conversation about all things digital. His role as the Chief Technologist at the U.S. Federal Communications Commission from 2000 to 2001 gave him a strong perspective on the ways that government could help or hinder civil liberties in the digital world. 

We will miss his calm, thoughtful voice, both inside EFF and out in the world. May his memory be a blessing.  

Cindy Cohn

Op-ed: Weakening Section 230 Would Chill Online Speech

3 days 3 hours ago

(This appeared as an op-ed published Friday, Feb. 6 in the Daily Journal, a California legal newspaper.)

Section 230, “the 26 words that created the internet,” was enacted 30 years ago this week. It was no rush-job—rather, it was the result of wise legislative deliberation and foresight, and it remains the best bulwark to protect free expression online.

The internet lets people everywhere connect, share ideas and advocate for change without needing immense resources or technical expertise. Our unprecedented ability to communicate online—on blogs, social media platforms, and educational and cultural platforms like Wikipedia and the Internet Archive—is not an accident. In writing Section 230, Congress recognized that for free expression to thrive on the internet, it had to protect the services that power users’ speech. Section 230 does this by preventing most civil suits against online services that are based on what users say. The law also protects users who act like intermediaries when they, for example, forward an email, retweet another user or host a comment section on their blog.

The merits of immunity, both for internet users who rely on intermediaries (from ISPs to email providers to social media platforms) and for internet users who are themselves intermediaries, are readily apparent when compared with the alternatives.

One alternative would be to provide no protection at all for intermediaries, leaving them liable for anything and everything anyone says using their service. This legal risk would essentially require every intermediary to review and legally assess every word, sound or image before it’s published—an impossibility at scale, and a death knell for real-time user-generated content.

Another option: giving protection to intermediaries only if they exercise a specified duty of care, such as where an intermediary would be liable if they fail to act reasonably in publishing a user’s post. But negligence and other objective standards are almost always insufficient to protect freedom of expression because they introduce significant uncertainty into the process and create real chilling effects for intermediaries. That is, intermediaries will choose not to publish anything remotely provocative—even if it’s clearly protected speech—for fear of having to defend themselves in court, even if they are likely to ultimately prevail. Many Section 230 critics bemoan the fact that it prevented courts from developing a common law duty of care for online intermediaries. But the criticism rarely acknowledges the experience of common law courts around the world, few of which adopted an objective standard, and many of which adopted immunity or something very close to it.

Congress’ purposeful choice of Section 230’s immunity is the best way to preserve the ability of millions of people in the U.S. to publish their thoughts, photos and jokes online, to blog and vlog, post, and send emails and messages.

Another alternative is a knowledge-based system in which an intermediary is liable only after being notified of the presence of harmful content and failing to remove it within a certain amount of time. This notice-and-takedown system invites tremendous abuse, as seen under the Digital Millennium Copyright Act’s approach: It’s too easy for someone to notify an intermediary that content is illegal or tortious simply to get something they dislike depublished. Rather than spending the time and money required to adequately review such claims, intermediaries would simply take the content down.

All these alternatives would lead to massive depublication in many, if not most, cases, not because the content deserves to be taken down, nor because the intermediaries want to do so, but because it’s not worth assessing the risk of liability or defending the user’s speech. No intermediary can be expected to champion someone else’s free speech at its own considerable expense.

Nor is the United States the only government to eschew “upload filtering,” the requirement that someone must review content before publication. European Union rules avoid this also, recognizing how costly and burdensome it is. Free societies recognize that this kind of pre-publication review will lead risk-averse platforms to nix anything that anyone anywhere could deem controversial, leading us to the most vanilla, anodyne internet imaginable.

The advent of artificial intelligence doesn’t change this. Perhaps there’s a tool that can detect a specific word or image, but no AI can make legal determinations or be prompted to identify all defamation or harassment. Human expression is simply too contextual for AI to vet; even if a mechanism could flag things for human review, the scale is so massive that such human review would still be overwhelmingly burdensome.

Congress’ purposeful choice of Section 230’s immunity is the best way to preserve the ability of millions of people in the U.S. to publish their thoughts, photos and jokes online, to blog and vlog, post, and send emails and messages. Each of those acts requires numerous layers of online services, all of which face potential liability without immunity.

This law isn’t a shield for “big tech.” Its ultimate beneficiaries are all of us who want to post things online without having to code it ourselves, and who want to read and watch content that others create. If Congress eliminated Section 230 immunity, for example, we would be asking email providers and messaging platforms to read and legally assess everything a user writes before agreeing to send it. 

For many critics of Section 230, the chilling effect is the point: They want a system that will discourage online services from publishing protected speech that some find undesirable. They want platforms to publish less than what they would otherwise choose to publish, even when that speech is protected and nonactionable.

When Section 230 was passed in 1996, about 40 million people used the internet worldwide; by 2025, estimates ranged from five billion to north of six billion. In 1996, there were fewer than 300,000 websites; by last year, estimates ranged up to 1.3 billion. There is no workforce and no technology that can police the enormity of everything that everyone says.

Internet intermediaries—whether social media platforms, email providers or users themselves—are protected by Section 230 so that speech can flourish online.

David Greene

Yes to the “ICE Out of Our Faces Act”

6 days 20 hours ago

Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) have descended into utter lawlessness, most recently in Minnesota. The violence is shocking. So are the intrusions on digital rights and civil liberties. For example, immigration agents are routinely scanning faces of people they suspect of unlawful presence in the country – 100,000 times, according to the Wall Street Journal. The technology has already misidentified at least one person, according to 404 Media.

Face recognition technology is so dangerous that government should not use it at all—least of all these out-of-control immigration agencies.

To combat these abuses, EFF is proud to support the “ICE Out of Our Faces Act.” This new federal bill would ban ICE and CBP agents, and some local police working with them, from acquiring or using biometric surveillance systems, including face recognition technology, or information derived from such systems by another entity. This bill would be enforceable, among other ways, by a strong private right of action.

The bill’s lead author is Senator Ed Markey. We thank him for his longstanding leadership on this issue, including introducing similar legislation that would ban all federal law enforcement agencies, and some federally-funded state agencies, from using biometric surveillance systems (a bill that EFF also supported). The new “ICE Out of Our Faces Act” is also sponsored by Senator Merkley, Senator Wyden, and Representative Jayapal.

As EFF explains in the new bill’s announcement:

It’s past time for the federal government to end its use of this abusive surveillance technology. A great place to start is its use for immigration enforcement, given ICE and CBP’s utter disdain for the law. Face surveillance in the hands of the government is a fundamentally harmful technology, even under strict regulations or if the technology was 100% accurate. We thank the authors of this bill for their leadership in taking steps to end this use of this dangerous and invasive technology.

You can read the bill here, and the bill’s announcement here.

Adam Schwartz

Protecting Our Right to Sue Federal Agents Who Violate the Constitution

1 week ago

Federal agencies like Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) have descended into utter lawlessness, most recently in Minnesota. The violence is shocking. So are the intrusions on digital rights. For example, we have a First Amendment right to record on-duty police, including ICE and CBP, but federal agents are violating this right. Indeed, Alex Pretti was exercising this right shortly before federal agents shot and killed him. So were the many people who filmed agents shooting and killing Pretti and Renee Good – thereby creating valuable evidence that contradicts false claims by government leaders.

To protect our digital rights, we need the rule of law. When an armed agent of the government breaks the law, the civilian they injure must be made whole. This includes a lawsuit by the civilian (or their survivor) against the agent, seeking money damages to compensate them for their injury. Such systems of accountability encourage agents to follow the law, whereas impunity encourages them to break it.

Unfortunately, there is a gaping hole in the rule of law: when a federal agent violates the U.S. Constitution, it is increasingly difficult to sue them for damages. For these reasons, EFF supports new statutes to fill this hole, including California S.B. 747.

The Problem

In 1871, at the height of Reconstruction following the Civil War, Congress enacted a landmark statute empowering people to sue state and local officials who violated their constitutional rights. This was a direct response to state-sanctioned violence against Black people that continued despite the formal end of slavery. The law is codified today at 42 U.S.C. § 1983.

However, there is no comparable statute empowering people to sue federal officials who violate the U.S. Constitution.

So in 1971, the U.S. Supreme Court stepped into this gap, in a watershed case called Bivens v. Six Unknown Named Agents of Federal Bureau of Narcotics. The plaintiff alleged that federal narcotics agents unlawfully searched his home and used excessive force against him. Justice Brennan, writing for a six-Justice majority of the Court, ruled that “damages may be obtained for injuries consequent upon a violation of the Fourth Amendment by federal officials.” He explained: “Historically, damages have been regarded as the ordinary remedy for an invasion of personal interests in liberty.” Further: “The very essence of civil liberty certainly consists of the right of every individual to claim the protection of the laws, whenever he receives an injury.”

Subsequently, the Court expanded Bivens in cases where federal officials violated the U.S. Constitution by discriminating in a workplace, and by failing to provide medical care in a prison.

In more recent years, however, the Court has whittled Bivens down to increasing irrelevance. For example, the Court has rejected damages litigation against federal officials who allegedly violated the U.S. Constitution by strip searching a detained person, and by shooting a person located across the border.

In 2022, the Court by a six-to-three vote rejected a damages claim against a Border Patrol agent who used excessive force when investigating alleged smuggling.  In an opinion concurring in the judgment, Justice Gorsuch conceded that he “struggle[d] to see how this set of facts differs meaningfully from those in Bivens itself.” But then he argued that Bivens should be overruled because it supposedly “crossed the line” against courts “assuming legislative authority.”

Last year, the Court unanimously declined to extend Bivens to excessive force in a prison.

The Solution

At this juncture, legislatures must solve the problem. We join calls for Congress to enact a federal statute, parallel to the one it enacted during Reconstruction, to empower people to sue federal officials (and not just state and local officials) who violate the U.S. Constitution.

In the meantime, it is heartening to see state legislatures step forward to fill this hole. One such effort is California S.B. 747, which EFF is proud to endorse.

State laws like this one do not violate the Supremacy Clause of the U.S. Constitution, which provides that the Constitution is the supreme law of the land. In the words of one legal explainer, this kind of state law “furthers the ultimate supremacy of the federal Constitution by helping people vindicate their fundamental constitutional rights.” 

This kind of state law goes by many names. The author of S.B. 747, California Senator Scott Wiener, calls it the “No Kings Act.” Protect Democracy, which wrote a model bill, calls it the “Universal Constitutional Remedies Act.” The originator of this idea, Professor Akhil Amar, calls it a “converse 1983”: instead of Congress authorizing suit against state officials for violating the U.S. Constitution, states would authorize suit against federal officials for doing the same thing.

We call these laws a commonsense way to protect the rule of law, which is a necessary condition to preserve our digital rights. EFF has long supported effective judicial remedies, including support for nationwide injunctions and private rights of action, and opposition to qualified immunity.

We also support federal and state legislation to guarantee our right to sue federal agents for damages when they violate the U.S. Constitution.

Adam Schwartz

Smart AI Policy Means Examining Its Real Harms and Benefits

1 week ago

The phrase "artificial intelligence" has been around for a long time, covering everything from computers with "brains"—think Data from Star Trek or HAL 9000 from 2001: A Space Odyssey—to the autocomplete function that too often has you sending emails to the wrong person. It's a term that sweeps a wide array of uses into it—some well-established, others still being developed.

Recent news shows us a rapidly expanding catalog of potential harms that may result from companies pushing AI into every new feature and aspect of public life—like the automation of bias that follows from relying on a backward-looking technology to make consequential decisions about people's housing, employment, education, and so on. Complicating matters, the computation needed for some AI services requires vast amounts of water and electricity, leading to sometimes difficult questions about whether the increased fossil fuel use or consumption of water is justified.

We are also inundated with advertisements and exhortations to use the latest AI-powered apps, and with hype insisting AI can solve any problem.

Obscured by this hype, there are some real examples of AI proving to be a helpful tool. For example, machine learning is especially useful for scientists looking at everything from the inner workings of our biology to cosmic bodies in outer space. AI tools can also improve accessibility for people with disabilities, facilitate police accountability initiatives, and more. There are reasons why these problems are amenable to machine learning and why excitement over these uses shouldn’t translate into a perception that just any language model or AI technology possesses expert knowledge or can solve whatever problem it’s marketed as solving.

EFF has long fought for sensible, balanced tech policies because we’ve seen how regulators can focus entirely on use cases they don’t like (such as the use of encryption to hide criminal behavior) and cause enormous collateral harm to other uses (such as using encryption to hide dissident resistance). Similarly, calls to completely preempt state regulation of AI would thwart important efforts to protect people from the real harms of AI technologies. Context matters. Large language models (LLMs) and the tools that rely on them are not magic wands—they are general-purpose technologies. And if we want to regulate those technologies in a way that doesn’t shut down beneficial innovations, we have to focus on the impact(s) of a given use or tool, by a given entity, in a specific context. Then, and only then, can we even hope to figure out what to do about it.

So let’s look at the real-world landscape.

AI’s Real and Potential Harms

Thinking ahead about potential negative uses of AI helps us spot risks. Too often, the corporations developing AI tools—as well as governments that use them—lose sight of the real risks, or don’t care. For example, companies and governments use AI to do all sorts of things that hurt people, from price collusion to mass surveillance. AI should never be part of a decision about whether a person will be arrested, deported, placed into foster care, or denied access to important government benefits like disability payments or medical care.

There is too much at stake, and governments have a duty to make responsible, fair, and explainable decisions, which AI can’t reliably do yet. Why? Because AI tools are designed to identify and reproduce patterns in data that they are “trained” on.  If you train AI on records of biased government decisions, such as records of past arrests, it will “learn” to replicate those discriminatory decisions.

And simply having a human in the decision chain will not fix this foundational problem. Studies have shown that having a human “in the loop” doesn’t adequately correct for AI bias, both because the human tends to defer to the AI and because the AI can provide cover for a biased human to ratify decisions that agree with their biases and override the AI at other times.

These biases don’t just arise in obvious contexts, like when a government agency is making decisions about people. They can also arise in equally life-affecting contexts like medical care: whenever AI is used for analysis in a context with systemic disparities, and whenever the costs of an incorrect decision fall on someone other than those deciding whether to use the tool. For example, dermatology has historically underserved people of color because of a focus on white skin, with the resulting bias affecting AI tools trained on the existing and biased image data.

These kinds of errors are difficult to detect and correct because it’s hard or even impossible to understand how an AI tool arrives at individual decisions. These tools can sometimes find and apply patterns that a human being wouldn't even consider, such as basing diagnostic decisions on which hospital a scan was done at. Or determining that malignant tumors are the ones where there is a ruler next to them—something that a human would automatically exclude from their evaluation of an image. Unlike a human, AI does not know that the ruler is not part of the cancer.

Auditing and correcting for these kinds of mistakes is vital, but in some cases, might negate any sort of speed or efficiency arguments made in favor of the tool. We all understand that the more important a decision is, the more guardrails against disaster need to be in place. For many AI tools, those don't exist yet. Sometimes, the stakes will be too high to justify the use of AI. In general, the higher the stakes, the less this technology should be used.

We also need to acknowledge the risk of over-reliance on AI, at least as it is currently being released. We've seen shades of a similar problem before online (see: "Dr. Google"), but the speed and scale of AI use—and the increasing market incentive to shoe-horn “AI” into every business model—have compounded the issue.

Moreover, AI may reinforce a user’s pre-existing beliefs—even if they’re wrong or unhealthy. Many users may not understand how AI works, what it is programmed to do, and how to fact check it. Companies have chosen to release these tools widely without adequate information about how to use them properly and what their limitations are. Instead they market them as easy and reliable. Worse, some companies also resist transparency in the name of trade secrets and reducing liability, making it harder for anyone to evaluate AI-generated answers. 

Other considerations that may weigh against AI use are its environmental impact and potential labor market effects. Delving into these is beyond the scope of this post, but they are important factors in determining whether AI is doing good somewhere and whether any benefits from AI are equitably distributed.

Research into the extent of AI harms and means of avoiding them is ongoing, but it should be part of the analysis.

AI’s Real and Potential Benefits

However harmful AI technologies can sometimes be, in the right hands and circumstances, they can do things that humans simply can’t. Machine learning technology has powered search tools for over a decade. It’s undoubtedly useful for machines to help human experts pore through vast bodies of literature and data to find starting points for research—things that no number of research assistants could do in a single year. If an actual expert is involved and has a strong incentive to reach valid conclusions, the weaknesses of AI are less significant at the early stage of generating research leads. Many of the following examples fall into this category.

Machine learning differs from traditional statistics in that the analysis doesn’t make assumptions about what factors are significant to the outcome. Rather, the machine learning process computes which patterns in the data have the most predictive power and then relies upon them, often using complex formulae that are unintelligible to humans. These aren’t discoveries of laws of nature—AI is bad at generalizing that way and coming up with explanations. Rather, they’re descriptions of what the AI has already seen in its data set.

To be clear, we don't endorse any products and recognize initial results are not proof of ultimate success. But these cases show us the difference between something AI can actually do versus what hype claims it can do.

Researchers are using AI to discover better alternatives to today’s lithium-ion batteries, which require large amounts of toxic, expensive, and highly combustible materials. Now, AI is rapidly advancing battery development by allowing researchers to analyze millions of candidate materials and generate new ones. New battery technologies discovered with the help of AI have a long way to go before they can power our cars and computers, but this field has come further in the past few years than it had in a long time.

AI Advancements in Scientific and Medical Research

AI tools can also help with weather prediction. AI forecasting models are less computationally intensive and often more reliable than traditional tools based on simulating the physical thermodynamics of the atmosphere. Questions remain, though, about how they will handle especially extreme events or systemic climate changes over time.

For example:

  • The National Oceanic and Atmospheric Administration has developed new machine learning models to improve weather prediction, including a first-of-its-kind hybrid system that uses an AI model in concert with a traditional physics-based model to deliver more accurate forecasts than either model does on its own.
  • Several models were used to forecast a recent hurricane. Google DeepMind’s AI system performed the best, even beating official forecasts from the U.S. National Hurricane Center (which now uses DeepMind’s AI model).

 Researchers are using AI to help develop new medical treatments:

  • Deep learning tools, like the Nobel Prize-winning model AlphaFold, are helping researchers understand protein folding. Over 3 million researchers have used AlphaFold to analyze biological processes and design drugs that target disease-causing malfunctions in those processes.
  • Researchers used machine learning to simulate and computationally test a large range of new antibiotic candidates, hoping they will help treat drug-resistant bacteria, a growing threat that kills millions of people each year.
  • Researchers used AI to identify a new treatment for idiopathic pulmonary fibrosis, a progressive lung disease with few treatment options. The new treatment has successfully completed a Phase IIa clinical trial. Such drugs still need to be proven safe and effective in larger clinical trials and gain FDA approval before they can help patients, but this new treatment for pulmonary fibrosis could be the first to reach that milestone.
  • Machine learning has been used for years to aid in vaccine development—including the development of the first COVID-19 vaccines—accelerating the process by rapidly identifying potential vaccine targets for researchers to focus on.

AI Uses for Accessibility and Accountability

AI technologies can improve accessibility for people with disabilities. But, as with many uses of this technology, safeguards are essential. Many tools lack adequate privacy protections, aren’t designed for disabled users, and can even harbor bias against people with disabilities. Inclusive design, privacy, and anti-bias safeguards are crucial. But here are two very interesting examples:

  • AI voice generators are giving people their voices back, after losing their ability to speak. For example, while serving in Congress, Rep. Jennifer Wexton developed a debilitating neurological condition that left her unable to speak. She used her cloned voice to deliver a speech from the floor of the House of Representatives advocating for disability rights.
  • Those who are blind or low-vision, as well as those who are deaf or hard-of-hearing, have benefited from accessibility tools while also discussing their limitations and drawbacks. At present, AI tools often provide information in a more easily accessible format than traditional web search tools and many websites that are difficult to navigate for users that rely on a screen reader. Other tools can help blind and low vision users navigate and understand the world around them by providing descriptions of their surroundings. While these visual descriptions may not always be as good as the ones a human may provide, they can still be useful in situations when users can’t or don’t want to ask another human to describe something. For more on this, check out our recent podcast episode on “Building the Tactile Internet.”

When there is a lot of data to comb through, as with police accountability, AI is very useful for researchers and policymakers:

  • The Human Rights Data Analysis Group used LLMs to analyze millions of pages of records regarding police misconduct. This is essentially the reverse of harmful use cases relating to surveillance; when the power to rapidly analyze large amounts of data is used by the public to scrutinize the state, there is a potential to reveal abuses of power and, given the power imbalance, very little risk that undeserved consequences will befall those being studied.
  • An EFF client, Project Recon, used an AI system to review massive volumes of transcripts of prison parole hearings to identify biased parole decisions. This innovative use of technology to identify systemic biases, including racial disparities, is the type of AI use we should support and encourage.

It is not a coincidence that the best examples of positive uses of AI come in places where experts, with access to infrastructure to help them use the technology and the requisite experience to evaluate the results, are involved. Moreover, academic researchers are already accustomed to explaining what they have done and being transparent about it—and it has been hard won knowledge that ethics are a vital step in work like this.

Nor is it a coincidence that other beneficial uses involve specific, discrete solutions to problems faced by those whose needs are often unmet by traditional channels or vendors. The ultimate outcome is beneficial, but it is moderated by human expertise and/or tailored to specific needs.

Context Matters

It can be very tempting—and easy—to make a blanket determination about something, especially when the stakes seem so high. But we urge everyone—users, policymakers, the companies themselves—to cut through the hype. In the meantime, EFF will continue to work against the harms caused by AI while also making sure that beneficial uses can advance.

Tori Noble

EFF to Close Friday in Solidarity with National Shutdown

1 week 6 days ago

The Electronic Frontier Foundation stands with the people of Minneapolis and with all of the communities impacted by the ongoing campaign of ICE and CBP violence. EFF will be closed Friday, Jan. 30 as part of the national shutdown in opposition to ICE and CBP and the brutality and terror they and other federal agencies continue to inflict on immigrant communities and any who stand with them.

We do not make this decision lightly, but we will not remain silent. 

Cindy Cohn

Introducing Encrypt It Already

2 weeks ago

Today, we’re launching Encrypt It Already, our push to get companies to offer stronger privacy protections for our data and communications by implementing end-to-end encryption. If that name sounds a little familiar, it’s because this is a spiritual successor to our 2019 campaign, Fix It Already, where we pushed companies to fix longstanding issues.

End-to-end encryption is the best way we have to protect our conversations and data. It ensures the company that provides a service cannot access the data or messages you store on it. So, for secure chat apps like WhatsApp and Signal, that means the company that makes those apps cannot see the contents of your messages, and they’re only accessible on your devices and your recipients’ devices. When it comes to data, like what’s stored using Apple’s Advanced Data Protection, it means you control the encryption keys and the service provider will not be able to access the data.
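
To make that idea concrete, here is a minimal sketch of the underlying technique, public-key encryption between two devices, using the open-source PyNaCl library. It illustrates the general principle only; it is not how WhatsApp, Signal, or Apple implement their actual protocols, and the names in the example are invented.

```python
from nacl.public import PrivateKey, Box

# Each person generates a keypair; the private keys never leave their devices.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at the library at noon")

# The service in the middle only ever relays `ciphertext`; without a
# private key it cannot recover the message.

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_private, alice_private.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at the library at noon"
```

Real messaging apps layer key verification, forward secrecy, and group messaging on top of an exchange like this, but the core property is the same: only the endpoints hold the private keys.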

We’ve divided this up into three categories, each with three different demands:

  • Keep your Promises: Features that the company has publicly stated they’re working on, but which haven’t launched yet.
    • Facebook should use end-to-end encryption for group messages
    • Apple and Google should deliver on their promise of interoperable end-to-end encryption of RCS
    • Bluesky should launch its promised end-to-end encryption for DMs
  • Defaults Matter: Features that are available on a service or in app already, but aren’t enabled by default.
    • Telegram should default to end-to-end encryption for DMs
    • WhatsApp should use end-to-end encryption for backups by default
    • Ring should enable end-to-end encryption for its cameras by default
  • Protect Our Data: New features that companies should launch, often because their competition is doing it already.
    • Google should launch end-to-end encryption for Google Authenticator backups
    • Google should offer end-to-end encryption for Android backup data
    • Apple and Google should offer a per-app AI permissions option to block AI access to secure chat apps

What is only half the problem. How is just as important.

What Companies Should Do When They Launch End-to-End Encryption Features

There’s no one-size-fits-all way to implement end-to-end encryption in products and services, but best practices can pair the security of the platform with the transparency that makes it possible for users to trust that it protects data the way the company claims it does. When these encryption features launch, companies should consider shipping them with:

  • A blog post written for a general audience that summarizes the technical details of the implementation, and when it makes sense, a technical white paper that goes into further detail for the technical crowd.
  • Clear user-facing documentation around what data is and isn’t end-to-end encrypted, and robust and clear user controls when it makes sense to have them.
  • Data minimization principles whenever feasible, storing as little metadata as possible.

Technical documentation is important for end-to-end encryption features, but so is clear documentation that makes it easy for users to understand what is and isn’t protected, what features may change, and what steps they need to take to set it up so they’re comfortable with how data is protected.

What You Can Do

When it’s an option, enable any end-to-end encryption features you can, like on Telegram, WhatsApp, and Ring.

For everything else, let companies know that these are features you want! You can find messages to share on social media on the Encrypt It Already website, and take the time to customize those however you’d like. 

In some cases, you can also reach out to a company directly with feature requests, which all the above companies, except for Google and WhatsApp, offer in some form. We recommend filing these through any service you use for any of the above features you’d like to see.

As for Ring and Telegram, we’ve already made the asks and just need your help to boost them. Head over to Telegram’s bugs and suggestions board and upvote this post, and to Ring’s feature request board and boost this post.

End-to-end encryption protects what we say and what we store in a way that gives users—not companies or governments—control over data. These sorts of privacy-protective features should be the status quo across a range of products, from fitness wearables to notes apps, but instead it’s a rare feature limited to a small set of services, like messaging and (occasionally) file storage. These demands are just the start. We deserve this sort of protection for a far wider array of products and services. It’s time to encrypt it already!

Join EFF

Help protect digital privacy & free speech for everyone

Thorin Klosowski

Google Settlement May Bring New Privacy Controls for Real-Time Bidding

2 weeks ago

EFF has long warned about the dangers of the “real-time bidding” (RTB) system powering nearly every ad you see online. A proposed class-action settlement with Google over their RTB system is a step in the right direction towards giving people more control over their data. Truly curbing the harms of RTB, however, will require stronger legislative protections.

What Is Real-Time Bidding?

RTB is the process by which most websites and apps auction off their ad space. Unfortunately, the milliseconds-long auctions that determine which ads you see also expose your personal information to thousands of companies a day. At a high level, here’s how RTB works (a simplified, hypothetical sketch of a bid request follows the list):

  1. The moment you visit a website or app with ad space, it asks an ad tech company to determine which ads to display for you. This involves sending information about you and the content you’re viewing to the ad tech company.
  2. This ad tech company packages all the information they can gather about you into a “bid request” and broadcasts it to thousands of potential advertisers. 
  3. The bid request may contain information like your unique advertising ID, your GPS coordinates, IP address, device details, inferred interests, demographic information, and the app or website you’re visiting. The information in bid requests is called “bidstream data” and typically includes identifiers that can be linked to real people. 
  4. Advertisers use the personal information in each bid request, along with data profiles they’ve built about you over time, to decide whether to bid on the ad space. 
  5. The highest bidder gets to display an ad for you, but advertisers (and the adtech companies they use to buy ads) can collect your bidstream data regardless of whether or not they bid on the ad space.   
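
To make steps 2 through 5 concrete, here is a heavily simplified, hypothetical sketch in Python. The field names, values, and bidder logic are invented for illustration; real exchanges use their own schemas (such as OpenRTB), and this is not any company’s actual code.

```python
import random
from dataclasses import dataclass

# Invented example of the kind of data a single bid request can carry.
bid_request = {
    "advertising_id": "38400000-8cf0-11bd-b23e-10b96e40000d",
    "ip_address": "203.0.113.7",
    "location": {"lat": 37.7793, "lon": -122.4193},   # GPS coordinates
    "device": {"model": "Pixel 8", "os": "Android 15"},
    "interests": ["running", "mental health", "apartment hunting"],
    "demographics": {"age_range": "25-34"},
    "context": {"app": "com.example.newsreader"},
}

@dataclass
class Bidder:
    name: str

    def bid(self, request: dict) -> float:
        # A real advertiser would score the request against the profile it
        # has built about you; here we just return a random price in dollars.
        return round(random.uniform(0.01, 2.50), 2)

def run_auction(request: dict, bidders: list[Bidder]) -> tuple[float, str]:
    """Broadcast the request to every bidder and return the winning bid.

    The privacy problem: every bidder receives the full request, whether
    or not it wins (or even bids on) the ad slot.
    """
    bids = [(bidder.bid(request), bidder.name) for bidder in bidders]
    return max(bids)

if __name__ == "__main__":
    winner = run_auction(bid_request, [Bidder("exchange-a"), Bidder("broker-b")])
    print("winning bid:", winner)
```
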
Why Is Real-Time Bidding Harmful?

A key vulnerability of real-time bidding is that while only one advertiser wins the auction, all participants receive data about the person who would see their ad. As a result, anyone posing as an ad buyer can access a stream of sensitive data about billions of individuals a day. Data brokers have taken advantage of this vulnerability to harvest data at a staggering scale. Since bid requests contain individual identifiers, they can be tied together to create detailed profiles of people’s behavior over time.

Data brokers have sold bidstream data for a range of invasive purposes, including tracking union organizers and political protesters, outing gay priests, and conducting warrantless government surveillance. Several federal agencies, including ICE, CBP and the FBI, have purchased location data from a data broker whose sources likely include RTB. ICE recently requested information on “Ad Tech” tools it could use in investigations, further demonstrating RTB’s potential to facilitate surveillance. RTB also poses national security risks, as researchers have warned that it could allow foreign states to obtain compromising personal data about American defense personnel and political leaders.

The privacy harms of RTB are not just a matter of misuse by individual data brokers. RTB auctions broadcast torrents of personal data to thousands of companies, hundreds of times per day, with no oversight of how this information is ultimately used. Once your information is broadcast through RTB, it’s almost impossible to know who receives it or control how it’s used. 

Proposed Settlement with Google Is a Step in the Right Direction

As the dominant player in the online advertising industry, Google facilitates the majority of RTB auctions. Google has faced several class-action lawsuits for sharing users’ personal information with thousands of advertisers through RTB auctions without proper notice and consent. A recently proposed settlement to these lawsuits aims to give people more knowledge and control over how their information is shared in RTB auctions.

Under the proposed settlement, Google must create a new privacy setting (the “RTB Control”) that allows people to limit the data shared about them in RTB auctions. When the RTB Control is enabled, bid requests will not include identifying information like pseudonymous IDs (including mobile advertising IDs), IP addresses, and user agent details. The RTB Control should also prevent cookie matching, a method companies use to link their data profiles about a person to a corresponding bid request. Removing identifying information from bid requests makes it harder for data brokers and advertisers to create consumer profiles based on bidstream data. If the proposed settlement is approved, Google will have to inform all users about the new RTB Control via email. 
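
Conceptually, the RTB Control amounts to withholding identifying fields before a bid request is broadcast. Here is a rough sketch of that idea, reusing the simplified structure above; it illustrates the concept described in the settlement, not Google’s actual implementation.

    # Conceptual sketch of the proposed RTB Control: strip identifying
    # fields from a bid request before it is broadcast. Illustration only,
    # not Google's implementation.
    IDENTIFYING_DEVICE_FIELDS = {"ifa", "ip", "ua"}  # pseudonymous ID, IP address, user agent

    def apply_rtb_control(bid_request, control_enabled):
        if not control_enabled:
            return bid_request
        filtered = dict(bid_request)
        filtered["device"] = {
            key: value
            for key, value in bid_request["device"].items()
            if key not in IDENTIFYING_DEVICE_FIELDS
        }
        # Dropping user-level data also removes the hooks used for cookie matching.
        filtered.pop("user", None)
        return filtered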

While this settlement would be a step in the right direction, it would still require users to actively opt out of their identifying information being shared through RTB. Those who do not change their default settings—research shows this is most people—will remain vulnerable to RTB’s massive daily data breach. Google broadcasting your personal data to thousands of companies each time you see an ad is an unacceptable and dangerous default. 

The impact of RTB Control is further limited by technical constraints on who can enable it. RTB Control will only work for devices and browsers where Google can verify users are signed in to their Google account, or for signed-out users on browsers that allow third-party cookies. People who don't sign in to a Google account or don't enable privacy-invasive third-party cookies cannot benefit from this protection. These limitations could easily be avoided by making RTB Control the default for everyone. If the settlement is approved, regulators and lawmakers should push Google to enable RTB Control by default.

The Real Solution: Ban Online Behavioral Advertising

Limiting the data exposed through RTB is important, but we also need legislative change to protect people from the online surveillance enabled and incentivized by targeted advertising. The lack of a strong, comprehensive privacy law in the U.S. makes it difficult for individuals to know and control how companies use their personal information. Strong privacy legislation can make privacy the default, not something that individuals must fight for through hidden settings or additional privacy tools. EFF advocates for data privacy legislation with teeth and a ban on ad targeting based on online behavioral profiles, because that targeting creates a financial incentive for companies to track our every move. Until then, you can limit the harms of RTB by using EFF’s Privacy Badger to block ads that track you, disabling your mobile advertising ID (see instructions for iPhone/Android), and keeping an eye out for Google’s RTB Control.

Lena Cohen

✍️ The Bill to Hand Parenting to Big Tech | EFFector 38.2

2 weeks 1 day ago

Lawmakers in Washington are once again focusing on kids, screens, and mental health. But according to Congress, Big Tech is somehow both the problem and the solution. We're diving into the latest attempt to control how kids access the internet and more with our latest EFFector newsletter.

Since 1990, EFFector has been your guide to understanding the intersection of technology, civil liberties, and the law. This latest issue tracks what to do when you hit an age gate online, explains why rent-only copyright culture makes us all worse off, and covers the dangers of law enforcement purchasing straight-up military drones.

Prefer to listen in? In our audio companion, EFF Senior Policy Analyst Joe Mullin explains what lawmakers should do if they really want to help families. Find the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 38.2 - ✍️ THE BILL TO HAND PARENTING TO BIG TECH

Want to stay in the fight for privacy and free speech online? Sign up for EFF's EFFector newsletter for updates, ways to take action, and new merch drops. You can also fuel the fight to protect people from these data breaches and unlawful surveillance when you support EFF today!

Christian Romero

DSA Human Rights Alliance Publishes Principles Calling for DSA Enforcement to Incorporate Global Perspectives

2 weeks 1 day ago

The Digital Services Act (DSA) Human Rights Alliance, founded by EFF and Access Now in 2021, works to ensure that the European Union takes a human rights-based approach to platform governance. It does so by integrating a wide range of voices and perspectives to contextualise DSA enforcement and by examining the DSA’s effect on tech regulations around the world.

As the DSA moves from legislation to enforcement, it has become increasingly clear that its impact depends not only on the text of the Act but also on how it’s interpreted and enforced in practice. This is why the Alliance has created a set of recommendations to include civil society organizations and rights-defending stakeholders in the enforcement process.

The Principles for a Human Rights-Centred Application of the DSA: A Global Perspective, a report published this week by the Alliance, outlines steps that the European Commission, as the main DSA enforcer, and national policymakers and regulators should take to bring diverse groups to the table and ensure that the implementation of the DSA is grounded in human rights standards.

The Principles also offer guidance for regulators outside the EU who look to the DSA as a reference framework, as well as for international bodies and global actors concerned with digital governance and the wider implications of the DSA. The Principles promote meaningful stakeholder engagement and emphasize the role of civil society organisations in providing expertise and acting as human rights watchdogs.

“Regulators and enforcers need input from civil society, researchers, and affected communities to understand the global dynamics of platform governance,” said EFF International Policy Director Christoph Schmon. “Non-EU-based civil society groups should be enabled to engage on equal footing with EU stakeholders on rights-focused elements of the DSA. This kind of robust engagement will help ensure that DSA enforcement serves the public interest and strengthens fundamental rights for everyone, especially marginalized and vulnerable groups.”

“As activists are increasingly intimidated, journalists silenced, and science and academic freedom attacked by those who claim to defend free speech, it is of utmost importance that the Digital Services Act's enforcement is centered around the protection of fundamental rights, including the right to the freedom of expression,” said Marcel Kolaja, Policy & Advocacy Director—Europe at Access Now. “To do so effectively, the global perspective needs to be taken into account. The DSA Human Rights Principles provide this perspective and offer valuable guidance for the European Commission, policymakers, and regulators for implementation and enforcement of policies aiming at the protection of fundamental rights.”

“The Principles come at the crucial moment for the EU candidate countries, such as Serbia, that have been aligning their legislation with the EU acquis but still struggle with some of the basic rule of law and human rights standards,” said Ana Toskic Cvetinovic, Executive Director for Partners Serbia. “The DSA HR Alliance offers the opportunity for non-EU civil society to learn about the existing challenges of DSA implementation and design strategies for impacting national policy development in order to minimize any negative impact on human rights.”

The Principles call for:

◼ Empowering EU and non-EU Civil Society and Users to Pursue DSA Enforcement Actions

◼ Considering Extraterritorial and Cross-Border Effects of DSA Enforcement

◼ Promoting Cross-Regional Collaboration Among CSOs on Global Regulatory Issues

◼ Establishing Institutionalised Dialogue Between EU and Non-EU Stakeholders

◼ Upholding the Rule of Law and Fundamental Rights in DSA Enforcement, Free from Political Influence

◼ Considering Global Experiences with Trusted Flaggers and Avoiding Enforcement Abuse

◼ Recognising the International Relevance of DSA Data Access and Transparency Provisions for Human Rights Monitoring

The Principles have been signed by 30 civil society organizations, researchers, and independent experts.

The DSA Human Rights Alliance represents diverse communities across the globe to ensure that the DSA embraces a human rights-centered approach to platform governance and that EU lawmakers consider the global impacts of European legislation.


Karen Gullo