Congress's Crusade to Age Gate the Internet: 2025 in Review

3 months 2 weeks ago

In the name of 'protecting kids online,' Congress pushed forward legislation this year that could have severely undermined our privacy and stifled free speech. These bills would have mandated invasive age-verification checks for everyone online—adults and kids alike—handing unprecedented control to tech companies and government authorities.

Lawmakers from both sides of the aisle introduced bill after bill, each one somehow more problematic than the last, and each one a gateway for massive surveillance, internet censorship, and government overreach. In all, Congress considered nearly twenty federal proposals.

For us, this meant a year of playing legislative whack-a-mole, fighting off one bad bill after another. But more importantly, it meant building sustained opposition, strengthening coalitions, and empowering our supporters—that's you!—with the tools you need to understand what's at stake and take action.

Luckily, thanks to this strong opposition, these federal efforts all stalled… for now.

So, before we hang our hats and prepare for the new year, let’s review some of our major wins against federal age-verification legislation in 2025.

The Kids Online Safety Act (KOSA)

Of the dozens of federal proposals relating to kids online, the Kids Online Safety Act remains the biggest threat. We, along with a coalition of civil liberties groups, LGBTQ+ advocates, youth organizations, human rights advocates, and privacy experts, have been sounding the alarm on KOSA for years now.

First introduced in 2022, KOSA would allow the Federal Trade Commission to sue apps and websites that don’t take measures to restrict young people’s access to certain content. There have been numerous versions introduced, though all of them share a common core: KOSA is an unconstitutional censorship bill that threatens the speech and privacy rights of all internet users. It would impose a requirement that platforms “exercise reasonable care” to prevent and mitigate a sweeping list of harms to minors, including depression, anxiety, eating disorders, substance use, bullying, and “compulsive usage.” Those prohibitions are so broad that they will sweep up online speech about these topics, including efforts to provide resources to adults and minors experiencing them. The bill claims to prohibit censorship based on “the viewpoint of users,” but that’s simply a smokescreen. Its core function is to let the federal government sue platforms, big or small, that don’t block or restrict content that someone later claims contributed to one of these harms.

In addition to stifling online speech, KOSA would strongly incentivize age-verification systems—forcing all users, adults and minors, to prove who they are before they can speak or read online. Because KOSA requires online services to separate and censor aspects of their services accessed by children, services are highly likely to demand to know every user’s age to avoid showing minors any of the content KOSA deems harmful. There are a variety of age determination options, but all have serious privacy, accuracy, or security problems. Even worse, age-verification schemes lead everyone to provide even more personal data to the very online services that have invaded our privacy before. And all age verification systems, at their core, burden the rights of adults to read, get information, and speak and browse online anonymously.

Despite what lawmakers claim, KOSA won’t bother big tech—in fact, they endorse it! The bill is written so that big tech companies, like Apple and X, will be able to handle the regulatory burden that KOSA will demand, while smaller platforms will struggle to comply. Under KOSA, a small platform hosting mental health discussion boards will be just as vulnerable as Meta or TikTok—but much less able to defend itself. 

The good news is that KOSA’s momentum waned this Congress. There was a lot of talk about the bill from lawmakers, but little action. The Senate version of the bill, which passed overwhelmingly last summer, did not even make it out of committee this Congress.

In the House, lawmakers could not get on the same page about the bill—so much so that one of the original sponsors of KOSA actually voted against the bill in committee in December.

The bad news is that lawmakers are determined to keep raising this issue, possibly as soon as the beginning of next year. So let’s keep the momentum going by showing them that users do not want age-verification mandates—we want privacy.

TAKE ACTION

Don't let Congress censor the internet

Threats Beyond KOSA

KOSA wasn’t the only federal bill in 2025 that used “kids’ safety” as a cover for sweeping surveillance and censorship mandates. Concern about the possible harms of AI chatbots dominated policy discussions in Congress this year.

One of the most alarming proposals on the issue was the GUARD Act, which would require AI chatbots to verify all users’ ages, prohibit minors from using AI tools, and implement steep criminal penalties for chatbots that promote or solicit certain harms. As we wrote in November, though the GUARD Act may look like a child-safety bill, in practice it’s an age-gating mandate that could be imposed on nearly every public-facing AI chatbot—from customer-service bots to search-engine assistants. The GUARD Act could force countless AI companies to collect sensitive identity data, chill online speech, and block teens from using some of the digital tools that they rely on every day.

Like KOSA, the GUARD Act would make the internet less free, less private, and less safe for everyone. It would further consolidate power and resources in the hands of the bigger AI companies, crush smaller developers, and chill innovation under the threat of massive fines. And it would cut off vulnerable groups’ ability to use helpful everyday AI tools, further fracturing the internet we know and love.

With your help, we urged lawmakers to reject the GUARD Act and focus instead on policies that provide more transparency, options, and comprehensive privacy for all users.

Beating Age Verification for Good

Together, these bills reveal a troubling pattern in Congress this year. Rather than actually protecting young people’s privacy and safety online, Congress continues to push a legislative framework that’s based on some deeply flawed assumptions:

  1. That the internet must be age-gated, with young people either heavily monitored or kicked off entirely, in order to be safe;
  2. That the value of our expressive content to each individual should be determined by the state, not individuals or even families; and
  3. That these censorship and surveillance regimes are worth the loss of all users’ privacy, anonymity, and free expression online.

We’ve written over and over about the many communities who are immeasurably harmed by online age-verification mandates. It is also worth remembering who these bills serve—big tech companies, private age-verification vendors, AI companies, and legislators vying for credit for “solving” online safety while undermining users at every turn.

We fought these bills all through 2025, and we’ll continue to do so until we beat age verification for good. So rest up, read up (starting with our all-new resource hub, EFF.org/Age!), and get ready to join us in this fight in 2026. Thank you for your support this year.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Molly Buckley

States Tried to Censor Kids Online. Courts, and EFF, Mostly Stopped Them: 2025 in Review

3 months 2 weeks ago

Lawmakers in at least a dozen states believe that they can pass laws blocking young people from social media or requiring them to get their parents’ permission before logging on. Fortunately, nearly every trial court to review these laws has ruled that they are unconstitutional.

It’s not just courts telling these lawmakers they are wrong. EFF has spent the past year filing friend-of-the-court briefs in courts across the country explaining how these laws violate young people’s First Amendment rights to speak and get information online. In the process, these laws also burden adults’ rights, and jeopardize everyone’s privacy and data security.

Minors have long had the same First Amendment rights as adults: to talk about politics, create art, comment on the news, discuss or practice religion, and more. The internet simply amplified their ability to speak, organize, and find community.

Although these state laws vary in scope, most have two core features. First, they require social media services to estimate or verify the ages of all users. Second, they either ban minor access to social media, or require parental permission. 

In 2025, EFF filed briefs challenging age-gating laws in California (twice), Florida, Georgia, Mississippi, Ohio, Utah, Texas, and Tennessee. Across these cases we argued the same point: these laws burden the First Amendment rights of both young people and adults. In many of these briefs, the ACLU, Center for Democracy & Technology, Freedom to Read Foundation, LGBT Technology Institute, TechFreedom, and Woodhull Freedom Foundation joined.

There is no “kid exception” to the First Amendment. The Supreme Court has repeatedly struck down laws that restrict minors’ speech or impose parental-permission requirements. Banning young people entirely from social media is an extreme measure that doesn’t match the actual risks. As EFF has urged, lawmakers should pursue strong privacy laws, not censorship, to address online harms.

These laws also burden everyone’s speech by requiring users to prove their age. ID-based systems of access can lock people out if they don’t have the right form of ID, and biometric systems are often discriminatory or inaccurate. Requiring users to identify themselves before speaking also chills anonymous speech—protected by the First Amendment, and essential for those who risk retaliation.

Finally, requiring users to provide sensitive personal information increases their risk of future privacy and security invasions. Most of these laws perversely require social media companies to collect even more personal information from everyone, especially children, who can be more vulnerable to identity theft.

EFF will continue to fight for the rights of minors and adults to access the internet, speak freely, and organize online.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Aaron Mackey

Site Blocking Laws Will Always Be a Bad Idea: 2025 in Review

3 months 2 weeks ago

This year, we fought back against the return of a terrible idea that hasn’t improved with age: site blocking laws. 

More than a decade ago, Congress tried to pass SOPA and PIPA—two sweeping bills that would have allowed the government and copyright holders to quickly shut down entire websites based on allegations of piracy. The backlash was massive. Internet users, free speech advocates, and tech companies flooded lawmakers with protests, culminating in an “Internet Blackout” on January 18, 2012. Turns out, Americans don’t like government-run internet blacklists. The bills were ultimately shelved.  

But we’ve never believed they were gone for good. The major media and entertainment companies that backed site blocking in the US in 2012 turned to pushing for site-blocking laws in other countries. Rightsholders continued to ask US courts for site-blocking orders, often winning them without a new law. And sure enough, the Motion Picture Association (MPA) and its allies have asked Congress to try again. 

There were no fewer than three congressional drafts of site-blocking legislation. Representative Zoe Lofgren kicked off the year with the Foreign Anti-Digital Piracy Act (FADPA). Fellow House member Darrell Issa also claimed to be working on a bill that would make it offensively easy for a studio to block your access to a website based solely on the belief that infringement is happening. Not to be left out, the Senate Judiciary Committee produced the terribly named Block BEARD Act.

None of these three attempts to fundamentally alter the way you experience the internet moved far beyond their press releases. But their number tells us that there is, once again, an appetite among major media conglomerates and politicians to resurrect SOPA/PIPA from the dead.

None of these proposals fixes the flaws of SOPA/PIPA, and none ever could. Site blocking is a flawed idea and a disaster for free expression that no amount of rewriting will fix. There is no way to create a fast lane for removing your access to a website that is not a major threat to the open web. Just as we opposed SOPA/PIPA over ten years ago, we oppose these efforts.  

Site blocking bills seek to build a new infrastructure of censorship into the heart of the internet. They would enable court orders directed to the organizations that make the internet work, like internet service providers, domain name resolvers, and reverse proxy services, compelling them to help block US internet users from visiting websites accused of copyright infringement. The technical means haven’t changed much since 2012: they involve blocking Internet Protocol addresses or domain names of websites. These methods are blunt—sledgehammers rather than scalpels. Today, many websites are hosted on cloud infrastructure or use shared IP addresses. Blocking one target can mean blocking thousands of unrelated sites. That kind of digital collateral damage has already happened in Austria, Italy, South Korea, France, and the US, to name just a few.

Given this downside, one would think the benefits of copyright enforcement from these bills ought to be significant. But site blocking is trivially easy to evade. Determined site owners can create the same content on a new domain within hours. Users who want to see blocked content can fire up a VPN or change a single DNS setting to get back online.  

The limits that lawmakers have proposed to put on these laws are an illusion. While ostensibly aimed at “foreign” websites, they sweep in any website that doesn’t conspicuously display a US origin, putting anonymity at risk. And despite the rhetoric of MPA and others that new laws would be used only by responsible companies against the largest criminal syndicates, laws don’t work that way. Massive new censorship powers invite abuse by opportunists large and small, and the costs to the economy, security, and free expression are widely borne. 

It’s time for Big Media and its friends in Congress to drop this flawed idea. But as long as they keep bringing it up, we’ll keep on rallying internet users of all stripes to fight it. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Mitch Stoltz

EFF's Investigations Expose Flock Safety's Surveillance Abuses: 2025 in Review

3 months 3 weeks ago

Throughout 2025, EFF conducted groundbreaking investigations into Flock Safety's automated license plate reader (ALPR) network, revealing a system designed to enable mass surveillance and susceptible to grave abuses. Our research sparked state and federal investigations, drove landmark litigation, and exposed dangerous expansion into always-listening voice detection technology. We documented how Flock's surveillance infrastructure allowed law enforcement to track protesters exercising their First Amendment rights, target Romani people with discriminatory searches, and surveil women seeking reproductive healthcare.

Flock Enables Surveillance of Protesters

When we obtained datasets representing more than 12 million searches logged by more than 3,900 agencies between December 2024 and October 2025, the patterns were unmistakable. Agencies logged hundreds of searches related to political demonstrations—the 50501 protests in February, Hands Off protests in April, and No Kings protests in June and October. Nineteen agencies conducted dozens of searches specifically tied to No Kings protests alone. Sometimes searches explicitly referenced protest activity; other times, agencies used vague terminology to obscure surveillance of constitutionally protected speech.

The surveillance extended beyond mass demonstrations. Three agencies used Flock's system to target activists from Direct Action Everywhere, an animal-rights organization using civil disobedience to expose factory farm conditions. Delaware State Police queried the Flock network nine times in March 2025 related to Direct Action Everywhere actions—showing how ALPR surveillance targets groups engaged in activism challenging powerful industries.

Biased Policing and Discriminatory Searches

Our November analysis revealed deeply troubling patterns: more than 80 law enforcement agencies used language perpetuating harmful stereotypes against Romani people when searching the nationwide Flock Safety ALPR network. Between June 2024 and October 2025, police performed hundreds of searches using terms such as "roma" and racial slurs—often without mentioning any suspected crime.

Audit logs revealed searches including "roma traveler," "possible g*psy," and "g*psy ruse." Grand Prairie Police Department in Texas searched for the slur six times while using Flock's "Convoy" feature, which identifies vehicles traveling together—essentially targeting an entire traveling community without specifying any crime. According to a 2020 Harvard University survey, four out of 10 Romani Americans reported being subjected to racial profiling by police. Flock's system makes such discrimination faster and easier to execute at scale.

Weaponizing Surveillance Against Reproductive Rights

In October, we obtained documents showing that Texas deputies queried Flock Safety's surveillance data in what police characterized as a missing person investigation, but was actually an abortion case. Deputies initiated a "death investigation" of a "non-viable fetus," logged evidence of a woman's self-managed abortion, and consulted prosecutors about possible charges.

A Johnson County official ran two searches with the note "had an abortion, search for female." The second search probed 6,809 networks, accessing 83,345 cameras across nearly the entire country. This case revealed Flock's fundamental danger: a single query accesses more than 83,000 cameras spanning almost the entire nation, with minimal oversight and maximum potential for abuse—particularly when weaponized against people seeking reproductive healthcare.

Feature Updates Miss the Point

In June, EFF explained why Flock Safety's announced feature updates cannot make ALPRs safe. The company promised privacy-enhancing features like geofencing and retention limits in response to public pressure. But these tweaks don't address the core problem: Flock's business model depends on building a nationwide, interconnected surveillance network that creates risks no software update can eliminate. Our 2025 investigations proved that abuses stem from the architecture itself, not just how individual agencies use the technology.

Accountability and Community Action

EFF's work sparked significant accountability measures. U.S. Rep. Raja Krishnamoorthi and Rep. Robert Garcia launched a formal investigation into Flock's role in "enabling invasive surveillance practices that threaten the privacy, safety, and civil liberties of women, immigrants, and other vulnerable Americans."

Illinois Secretary of State Alexi Giannoulias launched an audit after EFF research showed Flock allowed U.S. Customs and Border Protection to access Illinois data in violation of state privacy laws. In November, EFF partnered with the ACLU of Northern California to file a lawsuit against San Jose and its police department, challenging warrantless searches of millions of ALPR records. Between June 5, 2024 and June 17, 2025, SJPD and other California law enforcement agencies searched San Jose's database 3,965,519 times—a staggering figure illustrating the vast scope of warrantless surveillance enabled by Flock's infrastructure.

Our investigations also fueled municipal resistance to Flock Safety. Communities from Austin to Evanston to Eugene successfully canceled or refused to renew their Flock contracts after organizing campaigns centered on our research documenting discriminatory policing, immigration enforcement, threats to reproductive rights, and chilling effects on protest. These victories demonstrate that communities—armed with evidence of Flock's harms—can challenge and reject surveillance infrastructure that threatens civil liberties.

Dangerous New Capabilities: Always-Listening Microphones

In October 2025, Flock announced plans to expand its gunshot detection microphones to listen for "human distress" including screaming. This dangerous expansion transforms audio sensors into powerful surveillance tools monitoring human voices on city streets. High-powered microphones above densely populated areas raise serious questions about wiretapping laws, false alerts, and potential for dangerous police responses to non-emergencies. After EFF exposed this feature, Flock quietly amended its marketing materials to remove explicit references to "screaming"—replacing them with vaguer language about "distress" detection—while continuing to develop and deploy the technology.

Looking Forward

Flock Safety's surveillance infrastructure is not a neutral public safety tool. It's a system that enables and amplifies racist policing, threatens reproductive rights, and chills constitutionally protected speech. Our 2025 investigations proved it beyond doubt. As we head into 2026, EFF will continue exposing these abuses, supporting communities fighting back, and litigating for the constitutional protections that surveillance technology has stripped away.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Sarah Hamid

Fighting Renewed Attempts to Make ISPs Copyright Cops: 2025 in Review

3 months 3 weeks ago

You might not know it, given the many headlines focused on new questions about copyright and Generative AI, but the year’s biggest copyright case concerned an old-for-the-internet question: do ISPs have to be copyright cops? After years of litigation, that question is now squarely before the Supreme Court. And if the Supreme Court doesn’t reverse a lower court’s ruling, ISPs could be forced to terminate people’s internet access based on nothing more than mere accusations of copyright infringement. This would threaten innocent users who rely on broadband for essential aspects of daily life.

The Stakes: Turning ISPs into Copyright Police

This issue turns on what courts call “secondary liability,” which is the legal idea that someone can be held responsible not for what they did directly, but for what someone else did using their product or service. The case began when music companies sued Cox Communications, arguing that the ISP should be held liable for copyright infringement committed by some of its subscribers. The Court of Appeals for the Fourth Circuit agreed, adopting a “material contribution” standard for contributory copyright liability (a rule for when service providers can be held liable for the actions of users). Under that standard, providing a service that could be used for infringement is enough to create liability when a customer infringes.

The Fourth Circuit’s rule would have devastating consequences for the public. Given copyright law’s draconian penalties, ISPs would be under enormous pressure to terminate accounts whenever they receive an infringement notice, whether or not the accountholder has actually infringed anything, cutting off entire households, schools, libraries, or businesses that share an internet connection. Those affected would include:

  • Public libraries, which provide internet access to millions of Americans who lack it at home, could lose essential service.
  • Universities, hospitals, and local governments could see internet access for whole communities disrupted.
  • Households—especially in low-income and communities of color, which disproportionately share broadband connections with other people—would face collective punishment for the alleged actions of a single user.

And with more than a third of Americans having only one or no broadband provider, many users would have no way to reconnect.

EFF—along with the American Library Association, the Association of Research Libraries, and Re:Create—filed an amicus brief urging the Court to reverse the Fourth Circuit’s decision, taking guidance from patent law. In the Patent Act, where Congress has explicitly defined secondary liability, there’s a different test: contributory infringement exists only where a product is incapable of substantial non-infringing use. Internet access, of course, is overwhelmingly used for lawful purposes, making it the very definition of a “staple article of commerce” that can’t be liable under the patent framework.

The Supreme Court held a hearing in the case on December 1, and a majority of the justices seemed troubled by the implications of the Fourth Circuit’s ruling. One exchange was particularly telling: asked what should happen when the notices of infringement target a university account upon which thousands of people rely, Sony’s counsel suggested the university could resolve the issue by essentially slowing internet speeds so infringement might be less appealing. It’s hard to imagine the university community would agree that research, teaching, artmaking, library services, and the myriad other activities that rely on internet access should be throttled because of the actions of a few students. Hopefully the Supreme Court won’t either.

We expect a ruling in the case in the next few months. Fingers crossed that the Court rejects the Fourth Circuit’s draconian rule.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Corynne McSherry
EFF's Deeplinks Blog: Noteworthy news from around the internet