Protect Yourself From Meta’s Latest Attack on Privacy

Researchers recently caught Meta using an egregious new tracking technique to spy on you. Exploiting a technical loophole, the company was able to have its apps snoop on users’ web browsing. This tracking technique stands out for its flagrant disregard of core security protections built into phones and browsers. The episode is yet another reason to distrust Meta, block web tracking, and end surveillance advertising.

Fortunately, there are steps that you, your browser, and your government can take to fight online tracking. 

What Makes Meta’s New Tracking Technique So Problematic?

More than 10 years ago, Meta introduced a snippet of code called the “Meta pixel,” which has since been embedded on about 20% of the most trafficked websites. This pixel exists to spy on you: it records how visitors use a website and respond to ads, and it siphons potentially sensitive data, like financial information from tax filing websites and medical information from hospital websites, all in service of the company’s creepy system of surveillance-based advertising.

While these pixels are well known and can be blocked by tools like EFF’s Privacy Badger, researchers discovered another way they were being used to track you.

Even users who blocked or cleared cookies, hid their IP address with a VPN, or browsed in incognito mode could be identified

Meta’s tracking pixel was secretly communicating with Meta’s apps on Android devices. This violates a fundamental security feature of mobile operating systems (“sandboxing”) that prevents apps from communicating with each other. Meta got around this restriction by exploiting localhost, a local networking feature normally used for developer testing. This allowed Meta to create a hidden channel between mobile browser apps and its own apps. You can read more about the technical details here.
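To make the loophole concrete, here is a minimal, hypothetical sketch of how a script running in a web page can hand data to a native app over localhost. The port, endpoint, and payload here are invented for illustration; researchers report that Meta’s actual implementation used WebRTC rather than a plain HTTP request, precisely to make the traffic harder to spot.

```typescript
// Hypothetical sketch of a "localhost channel" (not Meta's actual code).
// A tracking script in the browser relays a browsing identifier to a
// native app listening on a port on the same device.
const HYPOTHETICAL_PORT = 12387; // illustrative; any port an app opens

async function relayToNativeApp(browsingId: string): Promise<void> {
  try {
    // This request never leaves the phone, so clearing cookies, using a
    // VPN, or browsing in incognito mode does nothing to block it.
    await fetch(`http://localhost:${HYPOTHETICAL_PORT}/track`, {
      method: "POST",
      mode: "no-cors", // the script doesn't need to read the response
      body: JSON.stringify({ id: browsingId, page: location.href }),
    });
  } catch {
    // No app listening on this device: fail silently.
  }
}
```

Because the app on the receiving end is already logged in to a real account, matching that identifier to a person is trivial, which is what makes this channel so much more powerful than ordinary cookie-based tracking.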

This workaround helped Meta bypass user privacy protections and attempts at anonymity. Typically, Meta tries to link data from “anonymous” website visitors to individual Meta accounts using signals like IP addresses and cookies. But Meta made re-identification trivial with this new tracking technique by sending information directly from its pixel to Meta's apps, where users are already logged in. Even users who blocked or cleared cookies, hid their IP address with a VPN, or browsed in incognito mode could be identified with this tracking technique.  

Meta didn’t just hide this tracking technique from users. Developers who embedded Meta’s tracking pixels on their websites were also kept in the dark. Some developers noticed the pixel contacting localhost from their websites, but got no explanation when they raised concerns with Meta. Once the practice was publicly exposed, Meta immediately paused it, claiming to be in discussions with Google about “a potential miscommunication regarding the application of their policies.”

While the researchers only observed the practice on Android devices, similar exploits may be possible on iPhones as well.

This exploit underscores the unique privacy risks we face when Big Tech can leverage out-of-control online tracking to profit from our personal data.

How Can You Protect Yourself?

Meta seems to have stopped using this technique for now, but that doesn’t mean they’re done inventing new ways to track you. Here are a few steps you can take to protect yourself:

Use a Privacy-Focused Browser

Choose a browser with better default privacy protections than Chrome. For example, Brave and DuckDuckGo protected users from this tracking technique because they block Meta’s tracking pixel by default. Firefox only partially blocked the new tracking technique with its default settings, but fully blocked it for users with “Enhanced Tracking Protection” set to “Strict.” 

It’s also a good idea to avoid using in-app browsers. When you open links inside the Facebook or Instagram apps, Meta can track you more easily than if you opened the same links in an external browser.

Delete Unnecessary Apps

Reduce the number of ways your information can leak by deleting apps you don’t trust or don’t regularly use. Try opting for websites over apps when possible. In this case, and many similar cases, using the Facebook and Instagram websites instead of the apps would have limited data collection. Even though both can contain tracking code, apps can access information that websites generally can’t, like a persistent “advertising ID” that companies use to track you (follow EFF’s instructions to turn it off if you haven’t already).

Install Privacy Badger

EFF’s free browser extension blocks trackers to stop companies from spying on you online. Privacy Badger would’ve stopped Meta’s latest tracking technique by blocking its pixel, but Firefox for Android is the only mobile browser it currently supports. On desktop, you can install Privacy Badger on Chrome, Firefox, and Edge.

Limit Meta’s Use of Your Data

Meta’s business model creates an incentive to collect as much information as possible about people to sell targeted ads. Short of deleting your accounts, you have a number of options to limit tracking and how the company uses your data.

How Should Google Chrome Respond?

After learning about Meta’s latest tracking technique, Chrome and Firefox released fixes for the technical loopholes that Meta exploited. That’s an important step, but Meta’s deliberate attempt to bypass browsers’ privacy protections shows why browsers should do more to protect users from online trackers. 

Unfortunately, the most popular browser, Google Chrome, is also the worst for your privacy. Privacy Badger can help by blocking trackers on desktop Chrome, but Chrome for Android doesn’t support browser extensions. That seems to be Google’s choice, rather than a technical limitation. Given the lack of privacy protections it offers, Chrome should support extensions on Android to let users protect themselves.

Although Chrome addressed the latest Meta exploit after it was exposed, its refusal to block third-party cookies or known trackers leaves the door wide open for Meta’s other creepy tracking techniques. Even when browsers block third-party cookies, allowing trackers to load at all gives them other ways to harvest and de-anonymize users’ data. Chrome should protect its users by blocking known trackers (including Google’s). Tracker-blocking features in Safari and Firefox show that similar protections are possible and long overdue in Chrome. A Google proposal to block fingerprinting scripts in Incognito Mode is a promising start, though it has yet to be approved to ship in Chrome.

Yet Another Reason to Ban Online Behavioral Advertising

Meta’s business model relies on collecting as much information as possible about people in order to sell highly targeted ads. Even though this particular method has been paused, as long as the incentive remains, Meta will keep finding ways to bypass your privacy protections.


The best way to stop this cycle of invasive tracking techniques and patchwork fixes is to ban online behavioral advertising. This would end the practice of targeting ads based on your online activity, removing the primary incentive for companies to track and share your personal data. We need strong federal privacy laws to ensure that you, not Meta, control what information you share online.

Lena Cohen

A Token of Appreciation for Sustaining Donors 💞

You'll get a custom EFF35 Challenge Coin when you become a monthly or annual Sustaining Donor by July 10. It’s that simple.

Give Once a Month

Give Once a Year

Start a convenient recurring donation today!

But here's a little more background for all of you detail-oriented digital rights fans. EFF's 35th Anniversary celebration has begun, and we're commemorating three and a half decades of fighting for your privacy, security, and free expression rights online. These values are hallmarks of freedom and necessities for true democracy, and you can help protect them. It's only possible with the kindness and steadfast support of EFF members, over 30% of whom are Sustaining Donors: people who spread out their support with a monthly or annual automatic recurring donation.

We're saying thanks to new and upgrading Sustaining Donors by offering brand new EFF35 Challenge Coins as a literal token of gratitude. Challenge coins follow a long tradition of offering a symbol of kinship and respect for great achievements—and we owe our strength to tech creators and users like you. EFF challenge coins are individually numbered for each supporter and only available while supplies last.

Become a Sustaining Donor

Just start an automated recurring donation of at least $5 per month (Copper Level) or $25 per year (Silicon Level) by July 10, 2025. We'll automatically send a special-edition EFF challenge coin to the shipping address you provide during your transaction.

Already a Monthly or Annual Sustaining Donor?

First of all—THANKS! Second, you can get an EFF35 Challenge Coin when you upgrade your donation. Just increase your monthly or annual gift by any amount and let us know by emailing upgrade@eff.org.

Get started with your upgrade at eff.org/recurring. If you used PayPal, just cancel your current recurring donation and then go to eff.org to start a new upgraded recurring donation.

Digital Rights Every Day

EFF's mission is sustained by thousands of people from every imaginable background giving modest donations when they can. Every cent counts. We like to show our gratitude and give you something to start conversations about civil liberties and human rights, whether you're a one-time donor or a recurring Sustaining Donor.

Check out freshly baked member gifts made for EFF's anniversary year, including the new EFF35 Cityscape T-Shirt, the Motherboard Hooded Sweatshirt, and new stickers. With your help, EFF is here to stay.

Aaron Jue

Strategies for Resisting Tech-Enabled Violence Facing Transgender People

Today's Supreme Court ruling in U.S. v. Skrmetti upholding bans on gender-affirming care for youth makes it clear: trans people are under attack. Threats to trans rights and healthcare are coming from legislatures, anti-trans bigots (both organized and not), apathetic bystanders, and more. Living under the most sophisticated surveillance apparatus in human history only makes things worse. While the dangers are very much tangible and immediate, the risks posed by technology can amplify them in insidious ways. Here is a non-exhaustive overview of concerns, a broad threat model, and some recommended strategies to keep yourself and your loved ones safe.

Dangers for Trans Youth

Trans kids experience an inhumane amount of cruelty and assault. Much of today’s anti-trans legislation is aimed specifically at making life harder for transgender youth, across all aspects of life. For this reason, we have highlighted several of the unique threats facing transgender youth.

School Monitoring Software

Most school-issued devices are root-kitted with surveillance spyware known as student-monitoring software. The purveyors of these technologies have been widely criticized for posing significant risks to marginalized children, particularly LGBTQ+ students. We ran our own investigation into the dangers posed by these technologies with a project called Red Flag Machine. Our findings showed that a significant portion of the flags on students' supposedly “inappropriate” online behavior were triggered by research into LGBTQ+ topics such as queer history, sexual education, psychology, and medicine. When a device with this software flags such activity, it often leads to students being placed in direct contact with school administrators or even law enforcement. As I wrote 3 years ago, this creates a persistent and uniquely dangerous situation for students living in areas with regressive laws around LGBTQ+ life or unsafe home environments.

The risks posed by technology can amplify threats in insidious ways

Unfortunately, because of the invasive nature of these school-issued devices, we can’t recommend a safe way to research LGBTQ+ topics on them without risking school administrators finding out. If possible, consider compartmentalizing those searches to different devices, ones owned by you or a trusted friend, or devices found in an environment you trust, such as a public library.

Family Owned Devices

If you don’t own your phone, laptop, or other devices—such as if your parents or guardians are in control of them (e.g., they can unlock them or they control the app stores you can access)—it’s safest to treat those devices as you would a school-issued device. This means you should not trust those devices for the most sensitive activities or searches that you want to keep especially private. While steps like deleting browser history and using hidden folders or photo albums can offer some safety, they aren’t sure-fire protections against the adults in your life accessing your sensitive information. When possible, try using a public library computer (outside of school) or borrow a trusted friend’s device with fewer restrictions.

Dangers for Protestors

Pride demonstrations are once again returning to their roots as political protests. It’s important to treat them as such by locking down your devices and coming up with safety plans in advance. We recommend reading our entire Surveillance Self-Defense guide on attending a protest, taking special care to implement strategies like disabling biometric unlock on your phone and documenting the protest without putting others at risk. If you’re attending the demonstration with others (which is strongly encouraged), consider setting up a Signal group chat and using the strategies laid out in this blog post by Micah Lee.

Counter-protestors

There is a significant push from anti-trans bigots to make Pride month more dangerous for our community. An independent source has been tracking and mapping organized anti-trans groups that are specifically targeting Pride events. While the list is non-exhaustive, it does provide some insight into who these groups are and where they are active. If one of these groups is organizing in your area, it’s important to take extra precautions to keep yourself safe.

Data Brokers & Open-Source Intelligence

Data brokers pose a significant threat to everyone (frankly, the entire industry deserves to be deleted out of existence). The dangers are even more pressing for people doing the vital work of advocating for the human rights of transgender people. If you’re a doctor, an activist, or a supportive family member of a transgender person, you are at risk of your own personal information being weaponized against you. Anti-trans bigots and their supporters online routinely use open-source intelligence and data broker records to cause harm.

You can reduce some of these risks by opting out from data brokers. It’s not a cure-all (the entire dissolution of the data broker industry is the only solution), but it’s a meaningful step. The DIY method has been found most effective, though there are services to automate the process if you would rather save yourself the time and energy. For the DIY approach, we recommend using Yael Grauer’s Big Ass Data-Broker Opt Out List.

Legality is likely to continue to shift

It’s also important to look into other publicly accessible information that may be out there, including voter registration records, medical licensing information, property sales records, and more. Some of these can be obfuscated through mechanisms like “address confidentiality programs.” These protections vary state-by-state, so we recommend checking your local laws and protections.

Medical Data

In recent years, legislatures across the country have moved to restrict access to and ban transgender healthcare. Legality is likely to continue to shift, especially after the Supreme Court’s green light today in Skrmetti. Many of the concerns around criminalization of transgender healthcare overlap with those surrounding abortion access, issues that are deeply connected and not mutually exclusive. The Surveillance Self-Defense playlist for the abortion access movement is a great place to start when thinking through these risks, particularly the guides on mobile phone location tracking, making a security plan, and communicating with others. While some of this overlaps with the previously linked protest safety guides, that redundancy only underscores their importance.

Unfortunately, much of the data about your medical history and care is out of your hands. While some medical practitioners may have some flexibility over how your records reflect your trans identity, certain aspects like diagnostic codes and pharmaceutical data for hormone therapy or surgery are often more rigid and difficult to obscure. As a patient, it’s important to consult with your medical provider about this information. Consider opening up a dialogue with them about what information needs to be documented, versus what could be obfuscated, and how you can plan ahead in the event that this type of care is further outlawed or deemed criminal.

Account Safety

Locking Down Social Media Accounts

It’s a good idea for everyone to review the privacy and security settings on their social media accounts. But given the extreme amount of anti-trans hate online (sometimes emboldened by the very platforms themselves), this is a necessary step for trans people online. To start, check out the Surveillance Self-Defense guide on social media account safety.

We can’t let the threats posed by technology diminish our humanity and our liberation.

In addition to reviewing your account settings, you may want to think carefully about what information you choose to share online. While visibility of queerness and humanity is a powerful tool for destigmatizing our existence, only you can decide if the risk involved with sharing your face, your name, and your life outweighs the benefit of showing others that no matter what happens, trans people exist. There’s no single right answer—only what’s right for you.

Keep in mind also that LGBTQ expression is at significantly greater risk of censorship by these platforms. There is little individuals can do to fully evade or protect against this, underscoring the importance of advocacy and platform accountability.

Dating Apps

Dating apps also pose a unique set of risks for transgender people. Transgender people experience intimate partner violence at a staggeringly higher rate than cisgender people, meaning we must take special care to protect ourselves. This guide on LGBTQ dating app safety is worth reading, but here’s the TLDR: always designate a friend as your safety contact before and after meeting anyone new, meet in public first, and be mindful of how you share photos with others on dating apps.

Safety and Liberation Are Collective Efforts

While bodily autonomy is under attack from multiple fronts, it’s crucial that we band together to share strategies of resistance. Digital privacy and security must be considered when it comes to holistic security and safety. Don’t let technology become the tool that enables violence or restricts the self-determination we all deserve.

Trans people have always existed. Trans people will continue to exist despite the state’s efforts to eradicate us. Digital privacy and security are just one aspect of our collective safety. We can’t let the threats posed by technology diminish our humanity and our liberation. Stay informed. Fight back. We keep each other safe.

Daly Barnett

Apple to Australians: You’re Too Stupid to Choose Your Own Apps

Apple has released a scaremongering, self-serving warning aimed at the Australian government, claiming that Australians will be overrun by a parade of digital horribles if Australia follows the European Union’s lead and regulates Apple’s “walled garden.” 

The EU’s Digital Markets Act is a big, complex, ambitious law that takes aim squarely at the source of Big Tech’s power: lock-in. For users, the DMA offers interoperability rules that let Europeans escape US tech giants’ walled gardens without giving up their relationships and digital memories.  

For small businesses, the DMA offers something just as valuable: the right to process their own payments. That may sound boring, but here’s the thing: Apple takes a 30 percent commission on most payments made through iPhone and iPad apps, and it bans app makers from including alternative payment methods or even mentioning that Apple customers can make their payments on the web.

All this means that every euro a European Patreon user sends to a performer or artist takes a round-trip through Cupertino, California, and comes back 30 cents lighter. Same goes for other money sent to major newspapers, big games, or large service providers. Meanwhile, the actual cost of processing a payment in the EU is less than one percent, meaning that Apple is taking in a 3,000 percent margin on its EU payments. 
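To spell out the arithmetic (treating “less than one percent” as roughly one cent on the euro, a simplification of the figures above):

\[
\frac{€0.30\ \text{(Apple's cut of a €1 payment)}}{€0.01\ \text{(approximate processing cost)}} = 30\times,\ \text{or roughly } 3{,}000\%\ \text{of the underlying cost}
\]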

To make things worse, Apple uses “digital rights management” to lock iPhones and iPads to its official App Store. That means that Europeans can’t escape Apple’s 30 percent “app tax” by installing apps from a store with fairer payment policies.  

Here, too, the DMA offers relief, with a rule that requires Apple to permit “sideloading” of apps (that is, installing apps without using an app store). The same rule requires Apple to allow its customers to choose to use independent app stores. 

With the DMA, the EU is leading the world in smart, administrable tech policies that strike at the power of tech companies. This is a welcome break from the dominant approach to tech policy over the first two decades of this century, in which regulators focused on demanding that tech companies use their power wisely – by surveilling and controlling their users to prevent bad behavior – rather than taking that power away. 

Which is why Australia is so interested. A late 2024 report from the Australian Treasury took a serious look at transposing DMA-style rules to Australia. It’s a sound policy, as the European experience has shown. 

But you wouldn’t know it by listening to Apple. According to Apple, Australians aren’t competent to have the final say over which apps they use and how they pay for them, and only Apple can make those determinations safely. It’s true that Apple sometimes takes bold, admirable steps to protect its customers’ privacy – but it’s also true that sometimes Apple invades its customers’ privacy (and lies about it). It’s true that sometimes Apple defends its customers from government spying – but it’s also true that sometimes Apple serves its customers up on a platter to government spies, delivering population-scale surveillance for autocratic regimes (and Apple has even been known to change its apps to help autocrats cling to power). 

Apple sometimes has its customers’ backs, but often, it sides with its shareholders (or repressive governments) over those customers. There’s no such thing as a benevolent dictator: letting Apple veto your decisions about how you use your devices will not make you safer.

Apple’s claims about the chaos and dangers that Europeans face thanks to the DMA are even more (grimly) funny when you consider that Apple has flouted EU law with breathtaking acts of malicious compliance. Apparently, the European iPhone carnage has been triggered by the words on the European law books, without Apple even having to follow those laws! 

The world is in the midst of a global anti-monopoly wave that keeps on growing. This decade has seen big, muscular antitrust action in the US, the UK, the EU, Canada, South Korea, Japan, Germany, Spain, France, and even China.  

It’s been a century since the last wave of trustbusting swept the globe, and while today’s monopolists are orders of magnitude larger than their early 20th-century forebears, they also have a unique vulnerability.

Broadly speaking, today’s tech giants cheat in the same way everywhere. They do the same spying, the same price-gouging, and employ the same lock-in tactics in every country where they operate, which is practically every country. That means that when a large bloc like the EU makes a good tech regulation, it has the power to ripple out across the planet, benefiting all of us – like when the EU forced Apple to switch to standard USB-C cables to charge its devices, and we all got iPhones with USB-C ports.

It makes perfect sense for Australia to import the DMA – after all, Apple and other American tech companies run the same scams on Australians as they do on Europeans. 

Around the world, antitrust enforcers have figured out that they can copy one another’s homework, to the benefit of the people they defend. For example, in 2022, the UK’s Digital Markets Unit published a landmark study on the abuses of the mobile duopoly. The EU Commission relied on the UK report when it crafted the DMA, as did an American Congressman who introduced a similar law that year. The same report’s findings became the basis for new enforcement efforts in Japan and South Korea.

As Benjamin Franklin wrote, “He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening mine.” It’s wonderful to see Australian regulators picking up best practices from the EU, and we look forward to seeing what ideas Australia has for the rest of the world to copy. 

Cory Doctorow

LGBT Q&A: Your Online Speech and Privacy Questions, Answered

This year, like almost all years before, LGBTQ+ Pride month is taking place at a time of burgeoning anti-LGBTQ+ violence, harassment, and criticism. Lawmakers and regulators are passing legislation restricting freedom of expression and privacy for LGBTQ+ individuals and fueling offline intolerance. Online platforms are also complicit in this pervasive ecosystem by censoring pro-LGBTQ+ speech, forcing LGBTQ+ individuals to self-censor or turn to VPNs to avoid being profiled, harassed, doxxed, or criminally prosecuted. Unfortunately, these risks look likely to continue, threatening LGBTQ+ individuals and the fight for queer liberation. 

This Pride, we’re here to help build an online space where you get to decide what aspects of yourself you share with others, how you present to the world, and what things you keep private.

We know that it feels overwhelming thinking about how to protect yourself online in the face of these issues—whether that's best practices for using gay dating apps like Grindr and Her, how to download a VPN to see and interact with banned LGBTQ+ content, methods for posting pictures from events and protests without outing your friends, or how to argue over your favorite queer musicians’ most recent problematic takes without being doxxed. 

That's why this LGBTQ+ Pride month, we’re launching an LGBT Q&A. Throughout Pride, we’ll be answering your most pressing digital rights questions on EFF’s Instagram and TikTok accounts. Comment your questions under these posts on Instagram and TikTok, and we’ll reply directly. Want to stay anonymous? Submit your questions via a secure link on our website and we’ll answer these in separate posts. 

Everyone needs guidance and protection from prying eyes. This is especially true for those of us who face consequences when intimate details around gender or sexual identities are revealed without consent. This Pride, we’re here to help build an online space where you get to decide what aspects of yourself you share with others, how you present to the world, and what things you keep private.

No question is too big or too small! But comments that discriminate against marginalized groups, including the LGBTQ+ community, will not be engaged with. 

The fight for the safety and rights of LGBTQ+ people is not just a fight for visibility online (and offline)—it’s a fight for survival. Now more than ever, it's essential to collectivize information sharing to not only make the digital world safer for LGBTQ+ individuals, but to make it a space where people can have fun, share memes, date, and build communities without facing repression and harm. Join us to make the internet private, safe, and full of gay pride.

Paige Collings

Big Brother's Little Problem | EFFector 37.6

Just in time for summer, EFFector is back—with a brand new look! If you're not signed up, now's a perfect time to subscribe and get the latest details on EFF's work defending your rights to privacy and free expression online.

EFFector 37.6 highlights an important role that EFF plays in protecting you online: watching the watchers. In this issue, we're pushing back on invasive car-tracking technologies, and we share an update on our case challenging the illegal disclosure of government records to DOGE. You'll also find updates on issues like masking at protests, defending encryption in Europe, and the latest developments in the right to repair movement.

Speaking of right to repair: we're debuting a new audio companion to EFFector as well! This time, Hayley Tsukayama breaks down how Washington's new right to repair law fits into broader legislative trends. You can listen now on YouTube or the Internet Archive.

SUBSCRIBE TO EFFECTOR

EFFECTOR 37.6 - BIG BROTHER'S LITTLE PROBLEM

Since 1990, EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock-full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Podcast Episode: Securing Journalism on the ‘Data-Greedy’ Internet

Public-interest journalism speaks truth to power, so protecting press freedom is part of protecting democracy. But what does it take to digitally secure journalists’ work in an environment where critics, hackers, oppressive regimes, and others seem to have the free press in their crosshairs?


(You can also find this episode on the Internet Archive and on YouTube.)

That’s what Harlo Holmes focuses on as Freedom of the Press Foundation’s digital security director. Her team provides training, consulting, security audits, and other support to newsrooms, independent journalists, freelancers, documentary filmmakers – anyone who is making independent journalism in the public interest – so that they can do their jobs more safely and securely. Holmes joins EFF’s Cindy Cohn and Jason Kelley to discuss the tools and techniques that help journalists protect themselves and their sources while keeping the world informed.  

In this episode you’ll learn about:

  • The importance of protecting online anonymity on an increasingly “data-greedy” internet
  • How digital security nihilism in the United States compares with regions of the world where oppressive and repressive governance are more common
  • Why compartmentalization can be a simple, easy approach to digital security
  • The need for middleware to provide encryption and other protections that shield sources’ anonymity and journalists’ work product when using corporate data platforms
  • How podcasters, YouTubers, and TikTokers fit into the broad sweep of media history, and need digital protections as well 

Harlo Holmes is the chief information security officer and director of digital security at Freedom of the Press Foundation. She strives to help individual journalists in various media organizations become confident and effective in securing their communications within their newsrooms, with their sources, and with the public at large. She is a media scholar, software programmer, and activist. Holmes was a regular contributor to the open-source mobile security collective Guardian Project, where she spearheaded the media metadata verification initiative currently empowering ProofMode, Save by OpenArchive, eyeWitness to Atrocities, and others. 

Resources: 

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

HARLO HOLMES: … within the sphere of public interest journalism. The reason why it exists is because it holds truth to power, and it doesn't have to be adversarial, although that's our right as citizens on this planet, but it doesn't have to be adversarial. And over the tenure that I've had, I've seen so many amazing examples where affecting change through public interest journalism done right, with the most detail paid to the operational and digital security of an investigation, literally ended up with laws being changed and legislation being written in order to make sure the problem that the journalist pointed out does not happen again.
One of my favorites is with Reuters. They wrote a story about how members of the intelligence community in Washington DC, after they had left Washington DC, were being actively poached by intelligence services in the UAE.
So it would take, like, departing members of the intelligence community in Washington DC and place them in cushy intelligence jobs at the UAE in order to, like, work on programs that we know are surveillance-heavy, antithetical to all of our interests, the public interest as well as the interest of the United States government.
And when that reporting came out, literally like, uh, Congress approved a bill saying that you have to wait three years before you can go through that revolving door rotation. 
And that's the trajectory that makes me the most proud to work where I do.

CINDY COHN: That's Harlo Holmes talking about some of the critically important journalism that she is able to help facilitate in her role with the Freedom of the Press Foundation.
I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY: And I'm Jason Kelley, EFF's activism director. This is our podcast, How to Fix the Internet.

CINDY COHN: On this show, we flip the script from the dystopian doom-and-gloom thinking we all get mired in when thinking about the future of tech -- we're here to challenge ourselves, our guests, and our listeners to imagine the better future that we could be working towards. What can we look forward to if we dare to dream about getting things right?

JASON KELLEY: Our guest today, Harlo Holmes, is the chief information security officer and the director of digital security at the Freedom of the Press Foundation where she teaches journalists how to keep themselves – and their sources – safe online.

CINDY COHN: We started off by getting Harlo to explain exactly how the Freedom of the Press Foundation operates.

HARLO HOLMES: What we do, I like to say, is a three-pillared approach to protecting press freedom in the 21st century. The first, absolutely most important is our advocacy team. So not only do we have a staff of lawyers and legal scholars that weigh in on First Amendment issues and protect them within the United States, we also have a fantastic advocacy team at our own little newsroom, the US Press Freedom Tracker, where we have reporters who, anytime members of the press have their right to perform their rightful function challenged, minimized, persecuted, et cetera, we have reporters who are there who report on it, and we stay with those cases for as long as it takes.
And that's something that we're incredibly proud of. That's just one pillar. Another pillar that we have is our engineering wing. So perhaps you have heard of a tool called SecureDrop. It's installed in certain newsrooms all over the planet to technologically enable as much anonymity as, uh, technically possible between reporters at those newsrooms and members of the public at large who might want to be whistleblowers or just to, you know, like, uh, say hey to a news outlet that they admire in a way that ensures their anonymity.
And then there is my small team. We are the digital security team. Uh, we do a lot of training, consulting, security audits, and other supports that we can provide to newsrooms, independent journalists, freelancers, documentary filmmakers, anyone who is making independent journalism in the public interest in order to do their job more safely and securely.

CINDY COHN: Yeah. I think this is a really important thing that the Freedom of the Press Foundation does. Specifically your piece of it, this kind of connective tissue between the people who are really busy doing the reporting and finding things out and the people who wanna give them information and making sure that this whole thing is protected in a secure way. And I appreciate that you put it third, but to me it's really central to how this whole thing works. So I think that's really important.
And of course, SecureDrop for, you know, old-time EFF and digital rights people – we know that this piece of technology was developed by our friend Aaron Swartz, before he passed away. And the Freedom of the Press Foundation has picked it up and really turned it from a good but small idea into something that is vital and in newsrooms all around the world.

HARLO HOLMES: Yes. And thank you very, very much for recognizing those particular achievements. SecureDrop has grown over the past, what, 12 years? I would say, into not only a tool that enables the groundbreaking amount of journalism that has pretty much changed the trajectory of current events over the years that it's been developed, but also one that represents increasing advances in security technology that everyone on the planet benefits from. So for example, SecureDrop would not be anywhere were it not for its deep collaboration with the Tor Project, right?
And for all of us who pay attention to digital security, cryptography, and the intersection with human rights, you know that the Tor network is a groundbreaking piece of technology that not only provides, you know, anonymity on the internet in an increasingly, like, data-greedy environment, but also, like, represents the ways that people can access parts of the internet in so many different innovative ways. And investigative journalism's use of SecureDrop is just one example of the benefits of, like, having Tor around and having it supported.
And so, that's one example. Another example is that, as people's interactions with computers change, uh, the way that we interface with browsers changes. The interplay between, you know, like, using a regular computer and accessing stuff on mobile, that's changed, right?
And so our team has, like, such commendable intellectual curiosity in talking about these nuances and finding ways to make people's safety using all of these interfaces better. And so even though we build SecureDrop in service of promoting public interest journalism, the way that it reverberates in technology is something that we're incredibly proud of. And it's all done in open source, right? Which means that anyone can access it, anyone can iterate upon it, anyone can benefit from it.

CINDY COHN: Yeah, and it, and everyone can trust it. 'cause you know, you might not be able to read the code, but many people can. And so developing this trust and security, you know, they go hand in hand.

HARLO HOLMES: Yes,

JASON KELLEY: You use this term "data-greedy," which I really love. I've never heard that before.

CINDY COHN: It's so good!

JASON KELLEY: So you just created this incredible term "data-greedy" that I've never heard anyone use and I love and it's a good descriptor, I think of sort of like why journalists, but also everyone needs to be aware of like the tracks that they're leaving, the digital security practices that they use because it's not even necessarily the case that that data collection is intended to be harmful, but we just live in this environment where data is collected, where it's, you know, used sometimes intentionally to track people, but often just for other reasons.
Let's talk a little bit about that third pillar. What is it that journalists specifically need to think about in terms of security? I think a lot of people probably who have never done journalism, don't really think about the dangers of collecting information, of talking to sources of, you know, protecting that, how, how should they be thinking about it and what are the kinds of things that you talk to people about?

HARLO HOLMES: Great question. First and foremost, I feel that our team at Freedom of the Press Foundation, leads every training with the assumption that a journalist's job is to tell the story in the most compelling and effective way. Their job is not to concern themselves with what data stewardship means.
What protection of digital assets means. That's our job. And so, we really, really lean into meeting people where they are and just giving them exactly what it is that they need to know in order to do this job better without putting undue pressure on them. And also without scaring the bejesus out of anyone.
Because when you do take stock of like how data greedy all of our devices are, it can be a little bit scary to the point of making people feel disempowered. And so we always want to avoid that.

CINDY COHN: What are some techniques you use to try to avoid that? 'Cause I think that's really central to a lot of work that we're trying to do, to try to get people beyond what I think my colleague Eva Galperin called "privacy nihilism." I'm not sure if she started it. She's the one who I heard it from.

HARLO HOLMES: I probably have heard that from her as well. I love Eva, and, uh, she has been so instrumental in the way that I think through these issues over the past, like, decade. So yeah, digital security nihilism is 100% a thing.
And, perhaps maybe later we can get into like the regional contours of that because people in the United States have or exhibit a certain amount of nihilism. And then if you talk to people in like Central and Eastern Europe, it's a different way. If you talk to people in Latin America and South America, it's a different way.
So having that perspective actually, like, really helps the contours around how you approach people in digital security education and training.

CINDY COHN: Oh please, tell us more. I'm fascinated by this.

HARLO HOLMES: OK, so, I do want to come back to your original question, but, that said, I can definitely do a detour into the historicity of, um, digital security nihilism and how it interplays with where you are on the planet.
It's all political. And in the United States we have, well, even though we're currently in a bit of a crisis mode, where we are absolutely looking at, you know, like, our rights to privacy, the concessions that we make, our prominence in building these technologies and thus having a little bit of, like, insider knowledge of what the contours are.
Uh, if you compare that to the digital security protections of people who are in, let's say, you know, Central or Eastern Europe, where historically they have never had, or not for, you know, decades, um, if not even, you know, a hundred years, that access to transparency about what's being done to their data, and also transparency into how that data has been taken away from them, because they didn't have a seat at the table.
If you look at places in Latin America, Central America, South America, there are plenty of places where loss of digital security also comes hand in hand with loss of physical security, right? Like, speaking to someone over the phone, especially where journalists are concerned, will often come with a threat of physical violence, often to the most extreme. Right. So, yeah, exactly. Which, according to, um, so many, you know, academics and scholars who focus on press freedom, is one of the most dangerous places on the planet to be a journalist, because failures in digital security can often come with literally, you know, being summarily executed, right? So every region on this planet has its own contours. It is constantly a fascinating challenge, and one that I'm willing to meet in order to understand these contexts and to appropriately apply the right digital security solutions to the audiences that we find ourselves in front of.

CINDY COHN: Yeah. Okay. Back to my original question, sorry.

HARLO HOLMES: Go for it.

JASON KELLEY: Well, what, what is, I mean, did we get to the point? I don't think we really covered yet, really the basics of, like, what journalists need to think about in terms of their security. I mean, that's, you know, I, I, I love talking about privacy nihilism and how we can fight it, but, um, we would talk for three hours if we did that.

HARLO HOLMES: Yeah. Um, so quite frankly, one of the things that we're leaning most heavily on, and this is pretty much across the board, right, has to do with compartmentalization. I feel that, uh, recently within the United States, it's become really like technicolor to people. So they understand exactly why that's important, but it's always been important and it's always like something that you can apply everywhere.
There's always historically been a tension, uh, since the very moment the first iPhone stepped onto the market: this temptation to go the easy route. Everything is on the same device. You're calling your mom. You're, you know, like, researching a flight on Expedia. You're, you know, Googling something. And then you're also talking to a source about a sensitive story, or you're also, like, you know, gonna go through the comments in the Google Doc on the report that you're writing regarding a national security issue.
People definitely do need to be encouraged to like decouple the ways that they treat devices because these devices are not our friends. And the companies that like, create the experiences on these devices, they are definitely not our friends. They never have been.
But I hear you on that and, uh, reminding people, despite their digital security nihilism, despite their temptation to do the easiest of things, just reminding people to apply appropriate compartmentalization.
We take things very slowly. We take things as easily as we possibly can, because there are ways that people can get started in order to actually be effective at this until they get to the point where it actually means something, either to their livelihoods or the story that they're working on and that of the sources that they interact with. But yeah, that's pretty much where it starts.
Also, credential security is, like, the bread and butter. And I've been at this for almost exactly 10 years at FPF and, you know, within this industry for about 15.
And it never changes that people really, really do need to maintain as much rigor regarding how people access their accounts. So like, you gotta have a unique, complex password. You have to be using a password manager. You have to be using multifactor authentication. And the ways that you can get it have changed over the years and they get better and better and better.
You have to be vigilant against phishing, but the ways that people try to phish you are like, you know, increasingly, like, sneakier. You know, we deal with it as it comes, but ultimately that has never changed. It really hasn't.

CINDY COHN: So we've, we've talked a little bit about kind of the nihilism and the kind of thicket of things that you have to make your way through in order to help journalists and their sources feel more secure. So let's flip it a bit. What does it look like if it's better? What are the kinds of places where you see, you know, if we could get this right, it would start to get better?

HARLO HOLMES: I love this question because I do feel that I've been able to look at it from multiple sides. Similarly, as I was describing how SecureDrop not only enables impactful public interest journalism, it represents a herculean feat of cryptography and techno-activism. This is one example; Signal is another example.
So, one of the things I thought was so poignant was that, as Joe Biden was exiting the White House, one of his, like, parting shots was to say, like, everyone should use Signal. And the reason why he says this is because Signal not only represents, like, a cool app or, like, you know, a thing that, like, hackers love and, you know, like, we can be proud of 'cause we got in on the first floor.
It represents the evolution of technologies that we should have. Our phone conversations had not been encrypted. Now they are. Get with it. You know, like that's the point. So from a technical perspective, that's what is so important and that's something that we always want to position ourselves to champion.

JASON KELLEY: Let's take a quick moment to say thank you to our sponsor. How to Fix The Internet is supported by the Alfred P. Sloan Foundation's program in Public Understanding of Science and Technology, enriching people's lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also wanna thank EFF members and donors. You can become a member for just $25 and for a little more, you can get some good, very stylish gear. Your support is the reason we can keep our digital security guides for journalists, and everyone else, up to date to deal with the latest threats. So please, if you like what we do, go to eff.org/pod to donate.
We also wanted to share that our friend Cory Doctorow has a new podcast. Listen to this.
[Who Broke the Internet trailer]

JASON KELLEY: And now back to our conversation with Harlo Holmes.
Are there tools that are missing that, in a better world, you're like, oh, this would be great to have, you know, or things that maybe couldn't exist without changes to technology, or to the way that people work, or to policy, where you just absolutely hear, you know, oh, it would be nice if we could do this, but for whatever reason, that's not a place we're at yet.

HARLO HOLMES: Yeah. Actually I have started to have a couple of conversations about that. Um, before I answer, I will say that I don't have, like, the bandwidth or time to be a technologist. Um, it's like my code writing days are probably over, but I have so many opinions.

JASON KELLEY: Of course. So many ideas.

HARLO HOLMES: Yeah. Um -

CINDY COHN: Well, we're your audience, right? I mean, you know, the EFF audience are people who, you know, uh, not overwhelmingly, um, but a lot of people with technical skills who are trying to figure out, okay, how do I, how do I apply them to do good? And, and, and I think, you know, over the years we've seen a lot of really well-meaning efforts by technologists to try to do something to support other communities that weren't grounded enough in those communities, and so didn't really work.
And I think your work at Freedom of the Press Foundation, again, has kind of bridged that gap, especially for journalists. But there's, there's broader things. so where else could you see places where technologists could really dig in and have this work in a way that sometimes it does, but often it doesn't.

HARLO HOLMES: I love that question because that is exactly the point, right? Bridging the gap. And I feel that, like, at FPF, given, you know, how I introduce it with, like, the three pillars or whatever, we are uniquely poised to perform, like, you know, user research within a community, right? And then have that directly inform technology mandates, have that directly inform advocacy, uh, like, you know, a charge to action.
So I think anyone who finds themselves at those cross-sections, that's exactly what you have to kind of, like, strategize around in order to be as effective as possible. In terms of, like, actual technologies, one thing, and I already kind of started having these conversations with people, is: let's take our relationship within a typical newsroom to cloud services like Google when you are drafting, right? I mean, it's anecdotal, and, like, the plural of anecdote is not data, right. But that said, we do know that, given that, you know, Google's Drive has so much machine learning and AI-enabled power, drafting a story that's, like, the next Watergate, right? Like, that's actually going to get you put in jail before you get to publish, right?
Because we know about their capabilities. And I'm not gonna, like, talk about specific anecdotes, but, like, that is a thing, right? But one of the things, or, like, the big contention, is that actually, like, in terms of collaboration, how effective you can be writing a story, how, like, you know, you rely on the comments section with your editor, right, as you're, you know, massaging a story. You rely on those features so much.
What are the open source, like, you know, hacker-ethos alternatives? We have, you know, we have Nextcloud, we have, uh, CryptPad, we have Etherpad. But all of those things are insufficient, not only, like, in terms of their feature set, in what needs to be accommodated in order for a journalist to work, right, but also can be insufficient in terms of their sustainability models and whether we can rely upon them in the future. And as much as we love all of those people at those developer initiatives, no one is supporting them to make sure that they can be in a place to be a viable alternative, right?
So, what's the next frontier, right? If I don't want to live in a world where a Nextcloud doesn't exist, where a CryptPad doesn't exist, or an Etherpad, like that's not what I'm saying, 'cause they're fantastic and they're really great to use in creative scenarios.
However, if you're thinking about the meat and potatoes, day to day, in a typical newsroom, you have to contend with a tech giant like Google that has become increasingly, like, ideologically unreliable. Guess what? They actually do have a really cool tool called client-side encryption, right? So now you're actually, like, removing the people who decide at Google what is ideologically acceptable use of their tools, right? You're removing them from the position where they can make any decision or scrutinize further.
And client-side encryption, or, like, anything that provides end-to-end encryption, that is, like, the ultimate goal. That's what we should protect. Whether it is in SecureDrop, whether it is in Nextcloud or CryptPad, or if it's in Google itself. And so actually, I would recommend, like, anybody who has these spare cycles to contribute to a developer effort to tackle this type of middleware that allows us to still have as much autonomy as possible within the ecosystems that we have to kind of work within.

CINDY COHN: I love that. I mean, it’s a story about interoperability, right? This, you know, what you're talking about, middleware in this area is like, we should be able to make a choice. Either use things that are not hosted corporately or use things that are hosted corporately, but be able to have our cake and eat it too.
Have a tool that helps us interoperate with that system without the bad parts of the deal, right. And in this instance, the bad parts of the deal are, a piece of it is the business model, but a piece of it is just compliance with government in a way that, you know, the company used to fight. They still fight some.

HARLO HOLMES: They still fight, yes.

CINDY COHN: They might fight, yes, but they also don't have the ability to fight that much. We might wanna go to something that's a little, that, that gives them the ability to say, look, we don't have access to that information. Just like Apple doesn't have access to the information that's stored on your iPhone. They made a policy decision to protect you.

HARLO HOLMES: But now we're looking at what happened in the UK, and we’re like, hm.

CINDY COHN: Exactly, but then the government has to act, you know, so it's always a fight on the technical level, and on the policy level, sadly. I wish that encryption were something we could, you know, fix with just technology. But we need other forms of protection. But I love this idea of having so many options, you know, some that are decentralized, some that are hosted, you know, in the nonprofit world, some that might be publicly supported, and then some that are on the corporate side, but with the protections that we need.
And I just feel like we need all of the above. Anybody who asks you to choose between these strategies is kind of getting you caught in a side fight when the main fight is how do we get people the privacy that they need to do their work?

HARLO HOLMES: Yeah. Yeah. And one of the things that gives me the most hope is continuing to fight in a space where we are all about the options.
We're all about giving people options and being as creative as possible and building options for everyone.

JASON KELLEY: What else gives you hope? Because you've been at Freedom of the Press for a while now, and we're at a difficult time in a lot of ways, but I assume there are other things that you've seen change over the years in a positive way, right? It feels too easy to say, look, things are getting dire, because in many ways they are. But what else gives you hope, given how long you've been working on this issue?

HARLO HOLMES: I actually love really thinking through the new challenges of the other types of media that are represented now. So much of my career has been pretty much centered around traditional print and/or digital. However, I am so enthusiastic about being alongside podcasters and YouTube creators as they navigate these new challenges, and also understanding the long history of media theory, where we've gone as an industry, in order to understand how it applies to them.
So one thing that I thought was pretty cool was having a conversation recently with a somewhat influential TikTok person about class consciousness, in regards to whether people who are influencers should legitimately start considering themselves journalists.
And one of the things that I mentioned had to do with the fact that, you know, in the 2010s, bloggers were not considered quote-unquote journalists, and yet blogging has become one of the most influential drivers within this market, even from a financial perspective. So influencers should not consider themselves anything other than journalists, because their fights, especially when platforms get involved, what their economic model looks like, and their integrity and ethos within journalism, like, that's the media history that we are building right now. So that excites me.

CINDY COHN: Oh, that's great. You know, EFF was involved in some of the early cases about whether bloggers could be protected by journalism shield laws. We had a case called Apple v. Does a long time ago that helped establish that in the state of California. But I really love helping new media think of itself as media.
And also, the way that I always think about it is, it's not whether you're a journalist, it's whether you're doing journalism, right? It's the verb part. And that different framing, I think, helps break people out of the mold of, well, I do some stuff that's just kind of silly, and that might not be journalism. But if you're bringing news to the public, if you're bringing information to the public that the public wants, even if it's in a fashion context, that's journalism, and you should think of yourself that way, because there is this rich history of how we protect that and how important it is to society, not just for the hard political issues, but in creating and shaping and managing our culture as well.

HARLO HOLMES: Mm-hmm. I agree 100%.

JASON KELLEY: How did you end up doing this kind of digital security work specifically for journalists? Did you make an intentional choice at some point that you wanted to help journalists, or have you sort of found yourself here and it's just incredible, important work?

HARLO HOLMES: A little bit of both. I'm an avid media consumer who cares a lot about media history, and in undergraduate school I studied comparative literature, which is all based on the idea that the medium itself has its own unique power: the way something is expressed says way more than what is actually said.
And I've always found that to be the most important thing to study. As far as technology is concerned, as any young, inquisitive person might, I got into coding, like, so hardcore. It wasn't until I was in grad school, in a class with this fantastic person Nathan Freitas, who's a Harvard Berkman Fellow Emeritus and also the head of the Guardian Project, that my eyes were opened to the fact that the code you're writing just for fun or whatever, you can actually use to defend human rights.
And it was kind of the culmination of those ideas that led me through a couple of things. I was an OpenNews fellow at the New York Times for about a year, where I worked with the computer-assisted reporting team, and that was really impressive. That was the first time I got to see how people will, like, scrape a webpage in order to write an investigative story.
And I was like, wow, people do that? That's so cool! And then also, because I was hanging out with Nathan and other folks, I was one of the kids on the newsroom floor who knew what Tor was. They're like, that's cool, how do we use this in journalism? I'm like, well, I got ideas. And that's kind of how my career got started.

CINDY COHN: That's so great. Nathan's an old friend of EFF. It's so fun to trace the tentacles of how, you know, people inspire other people, who inspire other people. I think that's part of the fun story of digital rights.

HARLO HOLMES: Yeah, yeah. I agree. I think anyone is super duper lucky to understand not only the place that you occupy right now, but also where it sits within a long history. And I also really love any experience where I get to kind of touch people with that as well.

CINDY COHN: Nice. Ooh, that's a nice place to end. What do you think, Jason?

JASON KELLEY: That sounds great. Yeah. And think of all the people who are saying the same thing about you now that you're saying about Nathan. Right. It never stops.

HARLO HOLMES: It shouldn't ever stop. It shouldn't. This is our history.

CINDY COHN: Oh, Harlo, thank you so much for coming and spending time with us. It's just been a delight to talk to you and good luck going forward. The times really need people like you.

HARLO HOLMES: Thank you so much. It's always a pleasure to talk to you, and, um, I love your pod. I love the work that you do, and I'll, you know, see you next time.

JASON KELLEY: Well, I'm really glad that we got a chance to talk to Harlo, because these conversations with folks who work in these specific areas are really helpful when, you know, it's not our job every day to talk to journalists, just like it's not our job every day to talk to specific advocates about specific issues. You learn exactly what kinds of things they think about, what we need to get right, and what it'll look like if we do get things right for journalists, or whomever it is.

CINDY COHN: Yeah, and the thing that I loved about the conversation is that the stuff she articulated will help all of us. It's a particular need for journalists, but when we asked her what kind of tools need to exist, she pointed not only to the open source, decentralized tools like Etherpad and things like that, but to what is basically an interoperability issue: making Google Docs secure, so that Google doesn't know what you're saying in your Google Docs. And I would toss Slack in there as well. Taking the tools that people rely on every day and building in things that make them secure against the company, and against the government coming and strong-arming the company into giving them information, that's a tool that will be really great for journalists, and I can see that. It'll also help all the rest of us.

JASON KELLEY: Yeah.

CINDY COHN: And the other thing she said, when she was giving advice to journalists right off the top: use separate devices for the things that you're doing, and don't have everything on one device. Because, I love this, what did she say, they're data-hungry?

JASON KELLEY: Data-greedy.

CINDY COHN: Data-greedy, even better. Our devices are data-greedy, so separating them gives us something. That's a useful piece of advice for anyone who's in activism.

JASON KELLEY: Yeah. And, I mean, I wanna say easy, but it's not always simple to have two devices. Still, the idea that the solution wasn't something more complicated reminds me that often the best advice is fairly simple, and that anyone who has the ability and the money could have multiple devices, and journalists are no different.
It reminded me also that when we're working on things like our Surveillance Self-Defense guides, it's helpful to remember that, like Harlo said, her job is to make the journalist's job easy, right? They shouldn't have to think about this stuff. And that's sort of the spirit of the guides that we write as well.
That was just a really good reminder that sometimes you feel like you're trying to convince everyone, or explain to them how all these tools work, when actually it might be better to think, well, you shouldn't have to understand all of this deeply like I do. In some cases you just need to know that this works and that's what you need to use.

CINDY COHN: Yeah, I think that's right. And obviously, 'just go out and buy a second device' isn't advice that we would give to people in parts of the world where that's really a prohibitive suggestion. But there are many parts of the world, and many journalists live in them, where it's actually not that hard to get yourself a burner phone, or a simpler phone for your work, rather than trying to configure one device to support all of those things.
And turning on 2FA, right? Turning on two-factor authentication. Another thing that is just good advice for anybody. So what I'm hearing is that if we build a place that is better for journalists, it's better for all of us, and vice versa: if we build a world that's better for all of us, it's also better for journalists. I really liked that. I also really liked her articulating and lifting up the role that the Tor Project plays in what they do with SecureDrop, what they do to try to help protect journalists who have confidential sources.
Because, again, as we're looking into all of these various tools that help create a better, more secure future, we're discovering that open source tools like Tor underlie many different pieces of the better world. And so we're starting to see kind of the network for good, right, the conspiracy for good, of a lot of the open source security projects.

JASON KELLEY: I didn't really realize, when we were putting together the guests for this season, how interconnected they all were, and it's been really wonderful to hear everyone lift everyone else up. They really do all depend on one another, and it's important to see that for the people who maybe don't think about it and use these tools as one-offs, right?

CINDY COHN: Yeah. And those of us who are trying to make the internet better recognize that we're all in this together. So as we're headed into this time where we're seeing a lot of targeted attacks on different pieces of a secure world, recognizing that these things are interconnected, and then building strength from there, seems to me to be a really important strategy.

JASON KELLEY: And that's our episode for today. If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listener feedback. And while you're there, you can become a member and donate, maybe even pick up some of the merch, and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of BeatMower with Reed Mathis, and How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology. We'll see you next time. I'm Jason Kelley.

CINDY COHN: And I'm Cindy Cohn.

MUSIC CREDITS: This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by its creators: Drops of H2O (The Filtered Water Treatment) by J.Lang. Sound design, additional music, and theme remixes by Gaetan Harris.

Josh Richman

Betting on Your Digital Rights: EFF Benefit Poker Tournament at DEF CON 33

4 days 7 hours ago

Hacker Summer Camp is almost here... and with it comes the Third Annual EFF Benefit Poker Tournament at DEF CON 33 hosted by security expert Tarah Wheeler.

Please join us at the same place and time as last year: Friday, August 8th, at high noon at the Horseshoe Poker Room. The fees haven’t changed; it’s still $250 to register plus $100 the day of the tournament with unlimited rebuys. (AND all players will receive a complimentary EFF Titanium Level Membership for the year.)

Tarah Wheeler—EFF board member and resident poker expert—has been working hard on the tournament since last year! We will have Lintile as emcee this year, and there are going to be bug bounties! When you take someone out of the tournament, they will give you a pin. Prizes—and major bragging rights—go to the player with the most bounty pins. Be sure to register today and see Lintile in action!

Did we mention there will be Celebrity Bounties? Knock out Wendy Nather, Chris “WeldPond” Wysopal, Jake “MalwareJake” Williams, Bryson Bort, or Allan Friedman and get neat EFF swag and the respect of your peers! Plus, as always, knock out Tarah's dad Mike, and she donates $250 to EFF in your name!

EFF Benefit Poker Tournament at DC33
Horseshoe Poker Room
3645 Las Vegas Blvd Overpass, Las Vegas, NV 89109
Friday, August 8, 12:00 pm

Register Now

Find Full Event Details and Registration

Have a friend who might be interested but isn't sure how to play? Have you played some poker before but could use a refresher? Join poker pro Mike Wheeler (Tarah's dad) and celebrities for a free poker clinic from 11:00 am to 11:45 am, just before the tournament. Mike will show you the rules, strategy, table behavior, and general Vegas slang at the poker table. Even if you know poker pretty well, come a bit early and help out.

Register today and reserve your deck. Be sure to invite your friends to join you!

 

Melissa Srago

Connectivity is a Lifeline, Not a Luxury: Telecom Blackouts in Gaza Threaten Lives and Digital Rights

4 days 15 hours ago

For the third time since October 2023, Gaza has faced a near-total telecommunications blackout—plunging over 2 million residents into digital darkness and isolating them from the outside world. According to Palestinian digital rights organization 7amleh, the latest outage began on June 11, 2025, and lasted three days before partial service was restored on June 14. As of today, reports from inside Gaza suggest that access has been cut off again in central and southern Gaza. 

Blackouts like these affect internet and phone communications across Gaza, leaving journalists, emergency responders, and civilians unable to communicate, document, or call for help.

Cutting off telecommunications during an active military campaign is not only a violation of basic human rights—it is a direct attack on the ability of civilians to survive, seek safety, and report abuses. Access to information and the ability to communicate are core to the exercise of freedom of expression, press freedom, and the right to life itself.

The threat of recurring outages looms large. Palestinian digital rights groups warn of a complete collapse of Gaza’s telecommunications infrastructure, which has already been weakened by years of blockade, lack of spare parts, and now sustained bombardment.

These blackouts systematically silence the people of Gaza amidst a humanitarian crisis. They prevent the documentation of war crimes, hide the extent of humanitarian crises, and obstruct the global community’s ability to witness and respond.

EFF has long maintained that governments and occupying powers must not disrupt internet or telecom access, especially during times of conflict. The blackout in Gaza is not just a local or regional issue—it’s a global human rights emergency.

As part of the campaign led by 7amleh to #ReconnectGaza, we call on all actors, including governments, telecommunications regulators, and civil society, to demand an end to telecommunications blackouts in Gaza and everywhere. Connectivity is a lifeline, not a luxury. 

Jillian C. York

Google’s Advanced Protection Arrives on Android: Should You Use It?

4 days 16 hours ago

With this week’s release of Android 16, Google added a new security feature to Android called Advanced Protection. At-risk people—like journalists, activists, or politicians—should consider turning it on. Here’s what it does, and how to decide if it’s a good fit for your security needs.

To get some confusing naming schemes clarified at the start: Advanced Protection is an extension of Google’s Advanced Protection Program, which protects your Google account from phishing and harmful downloads, and is not to be confused with Apple’s Advanced Data Protection, which enables end-to-end encryption for most data in iCloud. Instead, Google's Advanced Protection is more comparable to the iPhone’s Lockdown Mode, Apple’s solution to protecting high risk people from specific types of digital threats on Apple devices.

Advanced Protection for Android is meant to provide stronger security by enabling certain features that aren’t on by default, disabling the ability to turn off features that are enabled by default, and adding new security features. Put together, this suite of features is designed to isolate data where possible and reduce the chances of interacting with insecure websites and unknown individuals.

For example, when it comes to enabling existing features, Advanced Protection turns on Android’s “theft detection” features (designed to protect against in-person thefts), forces Chrome to use HTTPS for all website connections (a feature we’d like to see expand to everything on the phone), enables scam and spam protection features in Google Messages, and disables 2G (which helps prevent your phone from connecting to some Cell Site Simulators). You could go in and enable each of these individually in the Settings app, but having everything turned on with one tap is much easier to do.

Advanced Protection also prevents you from disabling certain core security features that are enabled by default, like Google Play Protect (Android’s built-in malware protection) and Android Safe Browsing (which safeguards against malicious websites).

But Advanced Protection also adds some new features. Once it’s turned on, the “Inactivity reboot” feature restarts your device if it stays locked for 72 hours. A device that has been on and unlocked for a while holds data in a more accessible state; forcing a reboot returns everything to being fully encrypted and behind PIN or biometric access. Advanced Protection also turns on “USB Protection,” which makes it so any new USB connection can only be used for charging while the device is locked, and it prevents your device from auto-reconnecting to unsecured Wi-Fi networks.
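To make the inactivity-reboot mechanic concrete, here is a minimal sketch of the logic in Python. It is purely illustrative: Android implements this inside the operating system, not through a public API, and the function and constant names here are our own.

```python
import time

# 72 hours: the threshold Advanced Protection uses for its inactivity reboot.
LOCKED_THRESHOLD_SECONDS = 72 * 60 * 60

def should_reboot(last_unlock: float, now: float, is_locked: bool) -> bool:
    """Illustrative only: reboot once the device has sat locked for 72 hours.
    After a reboot, user data is back at rest (fully encrypted) and stays
    that way until the PIN or passcode is entered."""
    return is_locked and (now - last_unlock) >= LOCKED_THRESHOLD_SECONDS

# A device that has been locked for four straight days qualifies for a reboot.
print(should_reboot(last_unlock=time.time() - 4 * 24 * 3600,
                    now=time.time(), is_locked=True))  # True
```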

As with all things Android, some of these features are limited to select devices or phones made by certain manufacturers. Memory Tagging Extension (MTE), which attempts to mitigate memory vulnerabilities by blocking unauthorized access, debuted on Pixel 8 devices in 2023 and is only now showing up on other phones. This segmentation of features makes it a little difficult to know exactly what your device is protecting against if you’re not using a Pixel phone.

Some of the new features, like the ability to generate security logs that you can then share with security professionals in case your device is ever compromised, along with the aforementioned insecure network reconnect and USB protection features, won’t launch until later this year.

It’s also worth considering that enabling Advanced Protection may change how you use your device. For example, Advanced Protection disables Chrome’s JavaScript optimizer, which may break some websites, and since Advanced Protection blocks unknown apps, you won’t be able to side-load apps. There’s also the chance that some of the call screening and scam detection features may misfire and flag legitimate calls.

How to Turn on Advanced Protection

Advanced Protection is easy to turn on and off, so there’s no harm in giving it a try. Advanced Protection was introduced with Android 16, so you may need to update your phone, or wait a little longer for your device manufacturer to support the update if it doesn’t already. Once you’re updated, to turn it on:

  • Open the Settings app.
  • Tap Security and Privacy > Advanced Protection, and enable the option next to “Device Protection.” 
  • If you haven’t already done so, now is a good time to consider enabling Advanced Protection for your Google account as well, though you will need to enroll a security key or a passkey to use this feature.

We welcome these features on Android, as well as the simplicity of its approach to enabling several pre-existing security and privacy features all at once. While there is no panacea for every security threat, this is a baseline that improves security on Android for at-risk individuals without drastically altering day-to-day use, which is a win for everyone. We hope to see Google continue to push new improvements to this feature, and to see different phone manufacturers support Advanced Protection where they don’t already.

Thorin Klosowski

EFF to NJ Supreme Court: Prosecutors Must Disclose Details Regarding FRT Used to Identify Defendant

4 days 17 hours ago

This post was written by EFF legal intern Alexa Chavara.

Black box technology has no place in the criminal legal system. That’s why we’ve once again filed an amicus brief arguing that both the defendant and the public have a right to information regarding face recognition technology (FRT) that was used during an investigation to identify a criminal defendant.

Back in June 2023, we filed an amicus brief along with Electronic Privacy Information Center (EPIC) and the National Association of Criminal Defense Lawyers (NACDL) in State of New Jersey v. Arteaga. We argued that information regarding the face recognition technology used to identify the defendant should be disclosed due to the fraught process of a face recognition search and the many ways that inaccuracies manifest in the use of the technology. The New Jersey appellate court agreed, holding that state prosecutors must turn over detailed information to the defendant about the FRT used, including how it works, its source code, and its error rate. The court held that this ensures the defendant’s due process rights with the ability to examine the information, scrutinize its reliability, and build a defense.

Last month, partnering with the same organizations, we filed another amicus brief in favor of transparency regarding FRT in the criminal system, this time in the New Jersey Supreme Court in State of New Jersey v. Miles.

In Miles, New Jersey law enforcement used FRT to identify Mr. Miles as a suspect in a criminal investigation. The defendant, represented by the same public defender as in Arteaga, moved for discovery of information about the FRT used, relying on Arteaga. The trial court granted this request for discovery, and the appellate court affirmed. The State then appealed to the New Jersey Supreme Court, where the issue is before the Court for the first time.

As explained in our amicus brief, disclosure is necessary to ensure criminal prosecutions are based on accurate evidence. Every search using face recognition technology presents a unique risk of error depending on various factors, including the specific FRT system used, the databases searched, the quality of the photograph, and the demographics of the individual. Study after study shows that facial recognition algorithms are not always reliable, and that error rates spike significantly for faces of people of color, especially Black women, as well as trans and nonbinary people.
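To illustrate why those factors matter, here is a minimal, hypothetical sketch of the core operation behind any face recognition search: comparing a numeric "embedding" of the probe photo against a gallery and returning the closest candidates above a similarity threshold. The names and the 0.6 threshold are our assumptions, not any vendor's. The point is that a blurry probe photo, or an embedding model trained on unrepresentative faces, shifts every score, so the candidate list can confidently include the wrong person.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(probe: np.ndarray, gallery: dict[str, np.ndarray],
           threshold: float = 0.6) -> list[tuple[str, float]]:
    """Rank gallery identities whose similarity to the probe clears the
    threshold, best first. Nothing here verifies the top hit is correct;
    it is only the most similar entry in whatever database was searched."""
    scores = {name: cosine_similarity(probe, emb) for name, emb in gallery.items()}
    return sorted(((n, s) for n, s in scores.items() if s >= threshold),
                  key=lambda item: item[1], reverse=True)
```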

Moreover, these searches often determine the course of investigation, reinforcing errors and resulting in numerous wrongful arrests, most often of Black folks. Discovery is the last chance to correct harm from misidentification and to allow the defendant to understand the evidence against them.

Furthermore, the public, including independent experts, has the right to examine the technology used in criminal proceedings. Under the First Amendment and its more expansive New Jersey Constitution corollary, the public’s right to access criminal judicial proceedings includes filings in pretrial proceedings, like the information being sought here. That access provides the public meaningful oversight of the criminal justice system and increases confidence in judicial outcomes, which is especially significant considering the documented risks and shortcomings of FRT.

Hannah Zhao

Protecting Minors Online Must Not Come at the Cost of Privacy and Free Expression

4 days 21 hours ago

The European Commission has taken an important step toward protecting minors online by releasing draft guidelines under Article 28 of the Digital Services Act (DSA). EFF recently submitted feedback to the Commission’s Targeted Consultation, emphasizing a critical point: Online safety for young people must not come at the expense of privacy, free expression, and equitable access to digital spaces.

We support the Commission’s commitment to proportionality, rights-based protections, and its efforts to include young voices in shaping these guidelines. But we remain deeply concerned by the growing reliance on invasive age assurance and verification technologies—tools that too often lead to surveillance, discrimination, and censorship.

Age verification systems typically depend on government-issued ID or biometric data, posing significant risks to privacy and shutting out millions of people without formal documentation. Age estimation methods fare no better: they’re inaccurate, especially for marginalized groups, and often rely on sensitive behavioral or biometric data. Meanwhile, vague mandates to protect against “unrealistic beauty standards” or “potentially risky content” threaten to overblock legitimate expression, disproportionately harming vulnerable users, including LGBTQ+ youth.

By placing a disproportionate emphasis on age assurance as a necessary tool to safeguard minors, the guidelines do not address the root causes of risks encountered by all users, including minors, and instead merely focus on treating their symptoms.

Safety matters—but so do privacy, access to information, and the fundamental rights of all users. We urge the Commission to avoid endorsing disproportionate, one-size-fits-all technical solutions. Instead, we recommend user-empowering approaches: Strong default privacy settings, transparency in recommender systems, and robust user control over the content they see and share.

The DSA presents an opportunity to protect minors while upholding digital rights. We hope the final guidelines reflect that balance.

Read more about digital identity and the future of age verification in Europe here.

Jillian C. York

A New Digital Dawn for Syrian Tech Users

1 week 1 day ago

U.S. sanctions on Syria have for several decades not only restricted trade and financial transactions, but also severely limited Syrians’ access to digital technology. From software development tools to basic cloud services, Syrians were locked out of the global internet economy—stifling innovation, education, and entrepreneurship.

EFF has for many years pushed for sanctions exemptions for technology in Syria, as well as in Sudan, Iran, and Cuba. While civil society had early wins in securing general licenses for Iran and Sudan allowing the export of communications technologies, the conflict in Syria that began in 2011 made loosening of sanctions a pipe dream.

But recent changes to U.S. policy could mark the beginning of a shift. In a quiet yet significant move, the U.S. government has eased sanctions on Syria. On May 23, the Treasury Department issued General License 25, effectively allowing technology companies to provide services to Syrians. This decision could have an immediate and positive impact on the lives of millions of Syrian internet users—especially those working in the tech and education sectors.

A Legacy of Digital Isolation

For years, Syrians have found themselves barred from accessing even the most basic tools. U.S. sanctions meant that companies like Google, Apple, Microsoft, and Amazon—either by law or by cautious decisions taken to avoid potential penalties—restricted access to many of their services. Developers couldn’t access GitHub repositories or use Google Cloud; students couldn’t download software for virtual classrooms; and entrepreneurs struggled to build startups without access to payment gateways or secure infrastructure.

Such restrictions can put users in harm’s way; for instance, not being able to access the Google Play store from inside the country means that Syrians can’t easily download secure versions of everyday tools like Signal or WhatsApp, thus potentially subjecting their communications to surveillance.

These restrictions also compounded the difficulties of war, economic collapse, and internal censorship. Even when Syrian tech workers could connect with global communities, their participation was hampered by legal gray zones and technical blocks.

What the Sanctions Relief Changes

Under General License 25, companies will now be able to provide services to Syria that have never officially been available. While it may take time for companies to catch up with any regulatory changes, it is our hope that Syrians will soon be able to access and make use of technologies that will enable them to more freely communicate and rebuild.

For Syrian developers, the impact could be transformative. Restored access to platforms like GitHub, AWS, and Google Cloud means the ability to build, test, and deploy apps without the need for VPNs or workarounds. It opens the door to participation in global hackathons, remote work, and open-source communities—channels that are often lifelines for those in conflict zones. Students and educators stand to benefit, too. With sanctions eased, educational tools and platforms that were previously unavailable could soon be accessible. Entrepreneurs may also finally gain access to secure communications, e-commerce platforms, and the broader digital infrastructure needed to start and scale businesses. These developments could help jumpstart local economies.

Despite the good news, challenges remain. Major tech companies have historically been slow to respond to sanctions relief, often erring on the side of over-compliance to avoid liability. Many of the financial and logistical barriers—such as payment processing, unreliable internet, and ongoing conflict—will not disappear overnight.

Moreover, the lifting of sanctions is not a blanket permission slip; it’s a cautious opening. Any future geopolitical shifts or changes in U.S. foreign policy could once again cut off access, creating an uncertain digital future for Syrians.

Nevertheless, by removing barriers imposed by sanctions, the U.S. is taking a step toward recognizing that access to technology is not a luxury, but a necessity—even in sanctioned or conflict-ridden countries.

For Syrian users, the lifting of tech sanctions is more than a bureaucratic change—it’s a door, long closed, beginning to open. And for the international tech community, it’s an opportunity to re-engage, responsibly and thoughtfully, with a population that has been cut off from essential services for too long.

Jillian C. York

EFFecting Change: Pride in Digital Freedom

1 week 2 days ago

Join us for our next EFFecting Change livestream this Thursday! We're talking about emerging laws and platform policies that affect the digital privacy and free expression rights of the LGBT+ community, and how this echoes the experience of marginalized people across the world.

EFFecting Change Livestream Series:
Pride in Digital Freedom
Thursday, June 12th
4:00 PM - 5:00 PM Pacific - Check Local Time
This event is LIVE and FREE!

Join our panel featuring EFF Senior Staff Technologist Daly Barnett, EFF Legislative Activist Rindala Alajaji, Chosen Family Law Center Senior Legal Director Andy Izenson, and Woodhull Freedom Foundation Chief Operations Officer Mandy Salley while they discuss what is happening and what should change to protect digital freedom.


We hope you and your friends can join us live! Be sure to spread the word, and share our past livestreams. Please note that all events will be recorded for later viewing on our YouTube page.

Want to make sure you don’t miss our next livestream? Here’s a link to sign up for updates about this series: eff.org/ECUpdates.

Melissa Srago

Congress Can Act Now to Protect Reproductive Health Data

1 week 2 days ago

State, federal, and international regulators are increasingly concerned about the harms they believe the internet and new technology are causing to users of all categories. Lawmakers are currently considering many proposals that are intended to provide protections to the most vulnerable among us. Too often, however, those proposals do not carefully consider the likely unintended consequences or even whether the law will actually reduce the harms it’s supposed to target. That’s why EFF supports Rep. Sara Jacobs’ newly reintroduced “My Body, My Data” Act, which will protect the privacy and safety of people seeking reproductive health care, while maintaining important constitutional protections and avoiding any erosion of end-to-end encryption. 

Take Action

Tell Congress to Protect Reproductive Health Data

Privacy fears should never stand in the way of healthcare. That's why this common-sense bill will require businesses and non-governmental organizations to act responsibly with personal information concerning reproductive health care. Specifically, it restricts them from collecting, using, retaining, or disclosing reproductive health information that isn't essential to providing the service someone requests.

The bill would protect people who use fertility or period-tracking apps or are seeking information about reproductive health services.

These restrictions apply to companies that collect personal information related to a person’s reproductive or sexual health. That includes data related to pregnancy, menstruation, surgery, termination of pregnancy, contraception, basal body temperature or diagnoses. The bill would protect people who, for example, use fertility or period-tracking apps or are seeking information about reproductive health services. 

We are proud to join Planned Parenthood Federation of America, Reproductive Freedom for All, Physicians for Reproductive Health, National Partnership for Women & Families, National Women’s Law Center, Center for Democracy and Technology, Electronic Privacy Information Center, National Abortion Federation, Catholics for Choice, National Council for Jewish Women, Power to Decide, United for Reproductive & Gender Equity, Indivisible, Guttmacher, National Network of Abortion Funds, and All* Above All in support of this bill. 

In addition to the restrictions on company data processing, this bill also provides people with necessary rights to access and delete their reproductive health information. Companies must also publish a privacy policy, so that everyone can understand what information companies process and why. It also ensures that companies are held to public promises they make about data protection and gives the Federal Trade Commission the authority to hold them to account if they break those promises. 

The bill also lets people take on companies that violate their privacy with a strong private right of action. Empowering people to bring their own lawsuits not only places more control in the individual's hands, but also ensures that companies will not take these regulations lightly. 

Finally, while Rep. Jacobs' bill establishes an important national privacy foundation for everyone, it also leaves room for states to pass stronger or complementary laws to protect the data privacy of those seeking reproductive health care. 

We thank Rep. Jacobs and Sens. Mazie Hirono and Ron Wyden for taking up this important bill, H.R. 3916, and using it as an opportunity not only to protect those seeking reproductive health care, but also to highlight why data privacy is an important element of reproductive justice. 

Take Action

Tell Congress to Protect Reproductive Health Data

India McKinney

Oppose STOP CSAM: Protecting Kids Shouldn’t Mean Breaking the Tools That Keep Us Safe

1 week 3 days ago

A Senate bill re-introduced this week threatens security and free speech on the internet. EFF urges Congress to reject the STOP CSAM Act of 2025 (S. 1829), which would undermine services offering end-to-end encryption and force internet companies to take down lawful user content.   

TAKE ACTION

Tell Congress Not to Outlaw Encrypted Apps

As in the version introduced last Congress, S. 1829 purports to limit the online spread of child sexual abuse material (CSAM), also known as child pornography. CSAM is already highly illegal. Existing law already requires online service providers who have actual knowledge of “apparent” CSAM on their platforms to report that content to the National Center for Missing and Exploited Children (NCMEC). NCMEC then forwards actionable reports to law enforcement agencies for investigation. 

S. 1829 goes much further than current law and threatens to punish any service that works to keep its users secure, including those that do their best to eliminate and report CSAM. The bill applies to “interactive computer services,” which broadly includes private messaging and email apps, social media platforms, cloud storage providers, and many other internet intermediaries and online service providers. 

The Bill Threatens End-to-End Encryption

The bill makes it a crime to intentionally “host or store child pornography” or knowingly “promote or facilitate” the sexual exploitation of children. The bill also opens the door for civil lawsuits against providers for the intentional, knowing, or even reckless “promotion or facilitation” of conduct relating to child exploitation, the “hosting or storing of child pornography,” or for “making child pornography available to any person.”  

The terms “promote” and “facilitate” are broad, and civil liability may be imposed based on a low recklessness state-of-mind standard. This means a court could find an app or website liable for hosting CSAM even if it did not know it was hosting CSAM, including because the provider employed end-to-end encryption and could not view the content uploaded by users.

Creating new criminal and civil claims against providers based on broad terms and low standards will undermine digital security for all internet users. Because the law already prohibits the distribution of CSAM, the bill’s broad terms could be interpreted as reaching more passive conduct, like merely providing an encrypted app.  

Due to the nature of their services, encrypted communications providers who receive a notice of CSAM may be deemed to have “knowledge” under the criminal law even if they cannot verify and act on that notice. And there is little doubt that plaintiffs’ lawyers will (wrongly) argue that merely providing an encrypted service that can be used to store any image—not necessarily CSAM—recklessly facilitates the sharing of illegal content.  

Affirmative Defense Is Expensive and Insufficient 

While the bill includes an affirmative defense that a provider can raise if it is “technologically impossible” to remove the CSAM without “compromising encryption,” it is not sufficient to protect our security. Online services that offer encryption shouldn’t have to face the impossible task of proving a negative in order to avoid lawsuits over content they can’t see or control. 

First, by making this protection an affirmative defense, providers must still defend against litigation, with significant costs to their business. Not every platform will have the resources to fight these threats in court, especially newcomers that compete with entrenched giants like Meta and Google. Encrypted platforms should not have to rely on prosecutorial discretion or favorable court rulings after protracted litigation. Instead, specific exemptions for encrypted providers should be addressed in the text of the bill.  

Second, although technologies like client-side scanning break encryption, members of Congress have misleadingly claimed otherwise. Plaintiffs are likely to argue that providers who do not use these techniques are acting recklessly, leading many apps and websites to scan all of the content on their platforms and remove any content that a state court could find, even wrongfully, is CSAM.

TAKE ACTION

Tell Congress Not to Outlaw Encrypted Apps

The Bill Threatens Free Speech by Creating a New Exception to Section 230 

The bill allows a new type of lawsuit to be filed against internet platforms, accusing them of “facilitating” child sexual exploitation based on the speech of others. It does this by creating an exception to Section 230, the foundational law of the internet and online speech. Section 230 provides partial immunity to internet intermediaries when sued over content posted by their users. Without that protection, platforms are much more likely to aggressively monitor and censor users.

Section 230 creates the legal breathing room for internet intermediaries to create online spaces for people to freely communicate around the world, with low barriers to entry. However, creating a new exception that exposes providers to more lawsuits will cause them to limit that legal exposure. Online services will censor more and more user content and accounts, with minimal regard as to whether that content is in fact legal. Some platforms may even be forced to shut down or may not even get off the ground in the first place, for fear of being swept up in a flood of litigation and claims around alleged CSAM. On balance, this harms all internet users who rely on intermediaries to connect with their communities and the world at large. 

India McKinney

Despite Changes, A.B. 412 Still Harms Small Developers

1 week 3 days ago

California lawmakers are continuing to promote a bill that will reinforce the power of giant AI companies by burying small AI companies and non-commercial developers in red tape, copyright demands, and, potentially, lawsuits. After several amendments, the bill hasn’t improved much, and in some ways has actually gotten worse. If A.B. 412 is passed, it will make California’s economy less innovative and less competitive. 

The Bill Threatens Small Tech Companies

A.B. 412 masquerades as a transparency bill, but it’s actually a government-mandated “reading list” that will allow rights holders to file a new type of lawsuit in state court, even as the federal courts continue to assess whether and how federal copyright law applies to the development of generative AI technologies. 

The bill would require developers—even two-person startups—to keep lists of training materials that are “registered, pre-registered or indexed” with the U.S. Copyright Office, and help rights holders create digital ‘fingerprints’ of those works—a technical task with no established standards and no realistic path for small teams to follow. Even if it were limited to registered copyrighted material, that’s a monumental task, as we explained in March when we examined the earlier text of A.B. 412. 
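To see how open-ended the "fingerprint" requirement is, consider the simplest thing a small developer could plausibly build in the absence of any standard: a cryptographic hash of each file. This sketch is ours, not anything the bill defines. A hash only matches byte-identical copies, so a resized image or re-encoded song yields a completely different fingerprint; anything more robust, like perceptual hashing, has competing formats and no registry to match against, which is the "no established standards" problem in practice.

```python
import hashlib
from pathlib import Path

def naive_fingerprint(path: Path) -> str:
    """SHA-256 of the raw bytes: the simplest conceivable 'fingerprint.'
    Re-encoding, cropping, or excerpting a work changes every bit of the
    digest, so this fails to match derivative copies, the case rights
    holders care about most."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()
```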

The bill’s amendments have made compliance even harder, since it now requires technologists to go beyond registered copyrighted material and somehow identify “pre-registered” copyrights. The amended bill also adds new requirements that technologists document and keep track of when they use works that aren’t copyrighted but are subject to exclusive rights, such as pre-1972 sound recordings—rights that, not coincidentally, are primarily controlled by large entertainment companies. 

The penalties for noncompliance are steep—up to $1,000 per day per violation—putting small developers at enormous financial risk even for accidental lapses.

The goal of this list is clear: for big content companies to more easily file lawsuits against software developers, big and small. And for most AI developers, the burden will be crushing. Under A.B. 412, a two-person startup building an open-source chatbot, or an indie developer fine-tuning a language model for disability access, would face the same compliance burdens as Google or Meta. 

Reading and Analyzing The Open Web Is Not a Crime 

It’s critical to remember that AI training is very likely protected by fair use under U.S. copyright law—a point that’s still being worked out in the courts. The idea that we should preempt that process with sweeping state regulation is not just premature; it’s dangerous.

It’s also worth noting that copyright is governed by federal law. Federal courts are already working to define the boundaries of fair use and copyright in the AI context—the California legislature should let them do their job. A.B. 412 tries to create a state-level regulatory scheme in an area that belongs in federal hands—a risky legal overreach that could further complicate an already unsettled policy space.

A.B. 412 is a solution in search of a problem. The courthouse doors are far from closed to content owners who want to dispute the use of their copyrighted works. Multiple high-profile lawsuits over the copyright status of AI training are working their way through trial and appeals courts right now. 

Scope Creep

Rather than narrowing its focus to make compliance more realistic, the latest amendments to A.B. 412 actually expand the scope of covered works. The bill now demands documentation of obscure categories of content like pre-1972 sound recordings. These recordings have rights that are often murky, and largely controlled by major media companies.

The bill also adds “preregistered” and indexed works to its coverage. Preregistration, designed to help entertainment companies punish unauthorized copying even before commercial release, expands the universe of content that developers must track—without offering any meaningful help to small creators. 

A Moat Serving Big Tech

Ironically, the companies that will benefit most from A.B. 412 are the very same large tech firms that lawmakers often claim they want to regulate. Big companies can hire teams of lawyers and compliance officers to handle these requirements. Small developers? They’re more likely to shut down, sell out, or never enter the field in the first place.

This bill doesn’t create a fairer marketplace. It builds a regulatory moat around the incumbents, locking out new competitors and ensuring that only a handful of companies have the resources to develop advanced AI systems. Truly innovative technology often comes from unknown or small companies, but A.B. 412 threatens to turn California—and anyone who does business there—into a fortress where only the biggest players survive.

A Lopsided Bill 

A.B. 412 is becoming an increasingly extreme and one-sided piece of legislation. It’s a maximalist wishlist for legacy rights-holders, delivered at the expense of small developers and the public. The result will be less competition, less innovation, and fewer choices for consumers—not more protection for creators.

This new version does close a few loopholes, and expands the period for AI developers to respond to copyright demands from 7 days to 30 days. But it seriously fails to close others: for instance, the exemption for noncommercial development applies only to work done “exclusively for noncommercial academic or governmental” institutions. That still leaves a huge window to sue hobbyists and independent researchers who don’t have university or government jobs. 

While the bill nominally exempts developers who use only public or developer-owned data, that’s a carve-out with no practical value. Like a search engine, nearly every meaningful AI system relies on mixed sources—and developers can’t realistically track the copyright status of them all.

At its core, A.B. 412 is a flawed bill that would harm the whole U.S. tech ecosystem. Lawmakers should be advancing policies that protect privacy, promote competition, and ensure that innovation benefits the public—not just a handful of entrenched interests.

If you’re a California resident, now is the time to speak out. Tell your legislators that A.B. 412 will hurt small companies, help big tech, and lock California’s economy in the past.

Joe Mullin

35 Years for Your Freedom Online

1 week 4 days ago

Once upon a time we were promised flying cars and jetpacks. Yet we've arrived at a more complicated timeline where rights advocates can find themselves defending our hard-earned freedoms more often than shooting for the moon. In tough times, it's important to remember that your vision for the future can be just as valuable as the work you do now.

Thirty-five years ago, a small group of folks saw the coming digital future and banded together to ensure that technology would empower people, not oppress them—and EFF was born. While the dangers of corporate and state forces grew alongside the internet, EFF and supporters like you faithfully rose to the occasion. Will you help celebrate EFF’s 35th anniversary and donate in support of digital freedom?

Give today

Protect Online Privacy & Free Expression

Together we’ve won many fights for encryption, free speech, innovation, and privacy online. Yet it’s plain to see that we must keep advocating for technology users whether that’s in the courts, before lawmakers, educating the public, or creating privacy-enhancing tools. EFF members make it possible—you can lend a hand and get some great perks!

Summer Swag Is Here

We love making stuff for EFF’s members each year. It’s our way of saying thanks for supporting the mission for your rights online, and I hope it’s your way of starting a conversation about internet freedom with people in your life.


Celebrate EFF's 35th Anniversary in the digital rights movement with this EFF35 Cityscape member t-shirt by Hugh D’Andrade! EFF has a not-so-secret weapon that keeps us in the fight even when the odds are against us: we never lose sight of our vision for a better future. Choose a roomy Classic Fit Crewneck or a soft Slim Fit V-Neck.


And enjoy Lovelace-Klimtian vibes on EFF’s new Motherboard Hooded Sweatshirt by Shirin Mori. Gold details and orange poppies pop on lush forest green. Don't lose the forest for the trees—keep fighting for a world where tech supports people irl.

Join the Sustaining Donor Challenge (it’s easy)

You'll get a numbered EFF35 Challenge Coin when you become a monthly or annual Sustaining Donor by July 10. It’s that simple.

If you're already a Sustaining Donor—THANKS! You too can get an EFF 35th Anniversary Challenge Coin when you upgrade your donation. Just increase your monthly or annual gift and let us know by emailing upgrade@eff.org. Get started at eff.org/recurring. If you used PayPal, just cancel your current recurring donation and then go to eff.org to start a new upgraded recurring donation.


Support internet freedom with a no-fuss automated recurring donation! Over 30% of EFF members have joined as Sustaining Donors to defend digital rights (and get some great swag every year). Challenge coins follow a long tradition of offering a symbol of kinship and respect for great achievements—and EFF owes its strength to technology creators and users like you.

With your help, EFF is here to stay.

Join EFF

Protect Online Privacy & Free Expression

Aaron Jue

NYC Lets AI Gamble with Child Welfare

1 week 4 days ago

The Markup revealed in its reporting last month that New York City’s Administration for Children’s Services (ACS) has been quietly deploying an algorithmic tool to categorize families as “high risk.” Using a grab-bag of factors like neighborhood and mother’s age, this AI tool can put families under intensified scrutiny without proper justification or oversight.

ACS knocking on your door is a nightmare for any parent, given the risk that any mistake can break up your family and send your children into the foster care system. Putting a family under such scrutiny shouldn’t be taken lightly, and it shouldn’t be a testing ground for automated decision-making by the government.

 This “AI” tool, developed internally by ACS’s Office of Research Analytics, scores families for “risk” using 279 variables and subjects those deemed highest-risk to intensified scrutiny. The lack of transparency, accountability, or due process protections demonstrates that ACS has learned nothing from the failures of similar products in the realm of child services.

The algorithm operates in complete secrecy, and the harms from this opaque “AI theater” are not theoretical. The 279 variables are derived only from cases back in 2013 and 2014 where children were seriously harmed. However, it is unclear how many cases were analyzed; what kind of auditing and testing, if any, was conducted; and whether including data from other years would have altered the scoring.
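For readers unfamiliar with how such tools are typically built, here is a deliberately simplified, hypothetical sketch of the standard recipe: fit a model on a narrow window of past "substantiated harm" cases, then score new families. None of these variables or numbers come from ACS, whose system is not public. The sketch shows why the choice of training years matters: the labels encode past agency decisions, not ground truth, so whatever enforcement bias existed in 2013-2014 is reproduced in every new score.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training set: 1,000 past cases, 4 proxy variables
# (e.g., neighborhood poverty rate, caregiver age). Labels reflect
# which cases the agency previously flagged, not actual risk.
X_train = rng.normal(size=(1000, 4))
y_train = (X_train[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

def risk_score(family_features: np.ndarray) -> float:
    """Probability-style score used to rank families for extra scrutiny.
    If feature 0 proxies for race or poverty, the score inherits the
    historical over-investigation baked into the training labels."""
    return float(model.predict_proba(family_features.reshape(1, -1))[0, 1])
```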

What we do know is disturbing: Black families in NYC face ACS investigations at seven times the rate of white families and ACS staff has admitted that the agency is more punitive towards Black families, with parents and advocates calling its practices “predatory.” It is likely that the algorithm effectively automates and amplifies this discrimination.

Despite the disturbing lack of transparency and accountability, ACS has subjected the families this system ranks as “highest risk” to additional scrutiny, including possible home visits, calls to teachers and family, or consultations with outside experts. But those families, their attorneys, and even caseworkers don't know when or why the system flags a case, making it difficult to challenge the circumstances or process that leads to this intensified scrutiny.

This is not the only instance in which the use of AI tools in the child services system has run into systemic bias. Back in 2022, the Associated Press reported that Carnegie Mellon researchers found that from August 2016 to May 2018, Allegheny County in Pennsylvania used an algorithmic tool that flagged 32.5% of Black children for “mandatory” investigation compared to just 20.8% of white children, all while social workers disagreed with the algorithm's risk scores about one-third of the time.

The Allegheny system operates with the same toxic combination of secrecy and bias now plaguing NYC. Families and their attorneys can never know their algorithmic scores, making it impossible to challenge decisions that could destroy their lives. When a judge asked to see a family’s score in court, the county resisted, claiming it didn't want to influence legal proceedings with algorithmic numbers, which suggests that the scores are too unreliable for judicial scrutiny yet acceptable for targeting families.

Elsewhere, these biased systems have been successfully challenged. The developers of the Allegheny tool had already had their product rejected in New Zealand, where researchers correctly identified that the tool would likely result in more Māori families being tagged for investigation. Meanwhile, California spent $195,273 developing a similar tool before abandoning it in 2019, due in part to concerns about racial equity.

Governmental deployment of automated and algorithmic decision-making not only perpetuates social inequalities, but also removes mechanisms for accountability when agencies make mistakes. The state should not be using these tools for rights-determining decisions, and any other uses must be subject to vigorous scrutiny and independent auditing to maintain the public’s trust in the government’s actions.

Hannah Zhao

Criminalizing Masks at Protests is Wrong

1 week 4 days ago

A growing number of states have attempted to criminalize wearing face coverings at protests. Now the President has demanded, in the context of ongoing protests in Los Angeles: “ARREST THE PEOPLE IN FACE MASKS, NOW!”

But the truth is: whether you fear catching an airborne illness from your fellow protesters or worry about reprisals from police or others for expressing your political opinions in public, you should have the right to wear a mask. Attempts to criminalize masks at protests fly in the face of the right to privacy.

Anonymity is a fundamental human right.

In terms of public health, wearing a mask while in a crowd can be a valuable tool to prevent the spread of communicable illnesses. This can be essential for people with compromised immune systems who still want to exercise their First Amendment-protected right to protest.

Moreover, wearing a mask is a perfectly legitimate surveillance self-defense practice during a protest. Surveillance camera networks, face recognition technology, and databases of personal information have proliferated massively. Law enforcement also has a long history of harassing and surveilling people for publicly criticizing or opposing police practices and other government policies. What’s more, non-governmental actors may try to identify protesters in order to retaliate against them, for example by limiting their employment opportunities.

All of this may chill our willingness to speak publicly or attend a protest for a cause we believe in. Many people would be less willing to attend a rally or march if they knew that a camera-equipped drone or helicopter would make repeated passes over the crowd, and that police would later use face recognition to scan everyone’s faces and compile a list of attendees, exposing them to surveillance and harassment from law enforcement.

Anonymity is a fundamental human right. EFF has long advocated for anonymity online. We’ve also supported low-tech methods to protect our anonymity from high-tech snooping in public places: for example, we’ve supported legislation allowing car owners to use license plate covers while parked, to reduce their exposure to automated license plate readers (ALPRs).

A word of caution: no surveillance self-defense technique is perfect. Technology companies are working on ways to use face recognition to identify people even when they are wearing masks. But if somebody hides their face to try to avoid government scrutiny, the government should not punish them for it.

While members of the public have a right to wear a mask when they protest, law enforcement officials should not wear masks when they arrest protesters and others. An elementary principle of police accountability is that uniformed officers identify themselves to the public; this discourages misconduct and facilitates accountability when an officer violates the law. That is one reason EFF has long supported the First Amendment right to record on-duty police, including ICE officers.

For these reasons, EFF believes it is wrong for state legislatures, and now federal law enforcement, to try to criminalize or punish mask wearing at protests. It is especially wrong in moments like the present, when the government is taking extreme measures to crack down on the civil liberties of protesters.

Matthew Guariglia