Voting Against the Surveillance State | EFFector 36.2


EFF is here to keep you up-to-date with the latest news about your digital rights! EFFector 36.2 is out now and covers a ton of the latest news, including: a victory, as Amazon's Ring will no longer facilitate warrantless footage requests from police; an analysis of Apple's announcement that it will support RCS on iPhones; and a call for San Francisco voters to vote no on Proposition E on the March 5, 2024 ballot.

You can read the full newsletter here, or subscribe to get the next issue in your inbox automatically! You can also listen to the audio version of the newsletter on the Internet Archive, or by clicking the button below:

LISTEN ON YouTube

EFFector 36.2 | Voting Against the Surveillance State

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

EFF Helps News Organizations Push Back Against Legal Bullying from Cyber Mercenary Group


Cyber mercenaries present a grave threat to human rights and freedom of expression. They have been implicated in surveillance, torture, and even murder of human rights defenders, political candidates, and journalists. One of the most effective ways that the human rights community pushes back against the threat of targeted surveillance and cyber mercenaries is to investigate and expose these companies and their owners and customers. 

But over the last several months, a campaign of bullying and censorship has emerged, seeking to wipe out stories about the mercenary hacking campaigns of a less well-known company, Appin Technology, in general, and the company’s cofounder, Rajat Khare, in particular. These efforts follow a familiar pattern: obtain a court order in a friendly international jurisdiction, then misrepresent the force and substance of that order to bully publishers around the world into removing their stories.

We are helping to push back on that effort, which seeks to transform a very limited and preliminary Indian court ruling into a global takedown order. We are representing Techdirt and MuckRock Foundation, two of the news entities asked to remove Appin-related content from their sites. On their behalf, we challenged the assertions that the Indian court either found the Reuters reporting to be inaccurate or that the order requires any entities other than Reuters and Google to do anything. We requested a response – so far, we have received nothing.

Background

If you worked in cybersecurity in the early 2010s, chances are that you remember Appin Technology, an Indian company offering information security education and training with a sideline in (at least according to many technical reports) hacking-for-hire. 

On November 16th, 2023, Reuters published an extensively-researched story titled “How an Indian Startup Hacked the World” about Appin Technology and its cofounder Rajat Khare. The story detailed hacking operations carried out by Appin against private and government targets all over the world while Khare was still involved with the company. The story was well-sourced, based on over 70 original documents and interviews with primary sources from inside Appin. But within just days of publication, the story—and many others covering the issue—disappeared from most of the web.

On December 4th, an Indian court preliminarily ordered Reuters to take down its story about Appin Technology and Khare while a case filed against them remains pending in the court. Reuters complied with the order and took the story offline. Since then, dozens of other journalists have written about the original story and about the takedown that followed. 

At the time of this writing, more than 20 of those stories have been taken down by their respective publications, many at the request of an entity called “Association of Appin Training Centers (AOATC).” Khare’s lawyers have also sent letters to news sites in multiple countries demanding they remove his name from investigative reports. Khare’s lawyers also succeeded in getting Swiss courts to issue an injunction against reporting from Swiss public television, forcing them to remove his name from a story about Qatar hiring hackers to spy on FIFA officials in preparation for the World Cup. Original stories, cybersecurity reports naming Appin, stories about the Reuters story, and even stories about the takedown have all been taken down. Even the archived version of the Reuters story was taken down from archive.org in response to letters sent by the Association of Appin Training Centers.

One of the letters sent by AOATC to Ron Deibert, the founder and director of Citizen Lab, reads:

Ron Deibert had the following response:

Not everyone has been as confident as Ron Deibert. Some of the stories that were taken down have been replaced with a note explaining the takedown, while others were redacted into illegibility, such as the story from Lawfare:

It is not clear who is behind the Association of Appin Training Centers, but according to documents surfaced by Reuters, the organization didn’t exist until after the lawsuit was filed against Reuters in Indian court. Khare’s lawyers have denied any connection between Khare and the training center organization. Even if this is true, it is clear that the goals of both parties are fundamentally aligned in silencing any negative press covering Appin or Rajat Khare.  

Regardless of who is behind the Association of Appin Training Centers, the links between Khare and Appin Technology are extensive and clear. Khare continues to claim that he left Appin in 2013, before any hacking-for-hire took place. However, Indian corporate records demonstrate that he stayed involved with Appin long after that time. 

Khare has also been the subject of multiple criminal investigations. Reuters published a sworn 2016 affidavit by Israeli private investigator Aviram Halevi in which he admits hiring Appin to steal emails from a Korean businessman. It also published a 2012 Dominican prosecutor’s filing which described Khare as part of an alleged hacker’s “international criminal network.” A publicly available criminal complaint filed with India’s Central Bureau of Investigation shows that Khare is accused, with others, of embezzling nearly $100 million from an Indian education technology company. A Times of India story from 2013 notes that Appin was investigated by an unnamed Indian intelligence agency over alleged “wrongdoings.”

Response to AOATC

EFF is helping two news organizations stand up to the Association of Appin Training Centers’ bullying—Techdirt and MuckRock Foundation. 

Techdirt received a request similar to the one Ron Deibert received after it published an article about the Reuters takedown, but it also received the following emails:

Dear Sir/Madam,

I am writing to you on behalf of Association of Appin Training Centers in regards to the removal of a defamatory article running on https://www.techdirt.com/ that refers to Reuters story, titled: “How An Indian Startup Hacked The World” published on 16th November 2023.

As you must be aware, Reuters has withdrawn the story, respecting the order of a Delhi court. The article made allegations without providing substantive evidence and was based solely on interviews conducted with several people.

In light of the same, we request you to kindly remove the story as it is damaging to us.

Please find the URL mentioned below.

https://www.techdirt.com/2023/12/07/indian-court-orders-reuters-to-take-down-investigative-report-regarding-a-hack-for-hire-company/

Thanks & Regards

Association of Appin Training Centers

Techdirt also received the following email twice, roughly two weeks apart:

Hi Sir/Madam

This mail is regarding an article published on your website,

URL : https://www.techdirt.com/2023/12/07/indian-court-orders-reuters-to-take-down-investigative-report-regarding-a-hack-for-hire-company/

dated on 7th Dec. 23 .

As you have stated in your article, the Reuters story was declared defamatory by the Indian Court which was subsequently removed from their website.

However, It is pertinent to mention here that you extracted a portion of your article from the same defamatory article which itself is a violation of an Indian Court Order, thereby making you also liable under Contempt of Courts Act, 1971.

You are advised to remove this article from your website with immediate effect.


Thanks & Regards

Association of Appin Training Centers

On behalf of Techdirt and MuckRock Foundation, we responded to the “requests for assistance” AOATC sent them, challenging AOATC’s assertions about the substance and effect of the Indian court’s interim order. We pointed out that the Indian court order is only interim, not a final judgment that Reuters’ reporting was false, and that it requires only Reuters and Google to act. Furthermore, we explained that even if the court order applied to MuckRock and Techdirt, the order is inconsistent with the First Amendment and would be unenforceable in US courts pursuant to the SPEECH Act:

To the Association of Appin Training Centers:

We represent and write on behalf of Techdirt and MuckRock Foundation (which runs the DocumentCloud hosting services), each of which received correspondence from you making certain assertions about the legal significance of an interim court order in the matter of Vinay Pandey v. Raphael Satter & Ors. Please direct any future correspondence about this matter to me.

We are concerned with two issues you raise in your correspondence.

First, you refer to the Reuters article as containing defamatory materials as determined by the court. However, the court’s order by its very terms is an interim order, that indicates that the defendants’ evidence has not yet been considered, and that a final determination of the defamatory character of the article has not been made. The order itself states “this is only a prima-facie opinion and the defendants shall have sufficient opportunity to express their views through reply, contest in the main suit etc. and the final decision shall be taken subsequently.”

Second, you assert that reporting by others of the disputed statements made in the Reuters article “itself is a violation of an Indian Court Order, thereby making you also liable under Contempt of Courts Act, 1971.” But, again by its plain terms, the court’s interim order applies only to Reuters and to Google. The order does not require any other person or entity to depublish their articles or other pertinent materials. And the order does not address its effect on those outside the jurisdiction of Indian courts. The order is in no way the global takedown order your correspondence represents it to be. Moreover, both Techdirt and MuckRock Foundation are U.S. entities. Thus, even if the court’s order could apply beyond the parties named within it, it will be unenforceable in U.S. courts to the extent it and Indian defamation law is inconsistent with the First Amendment to the U.S. Constitution and 47 U.S.C. § 230, pursuant to the SPEECH Act, 28 U.S.C. § 4102. Since the First Amendment would not permit an interim depublication order in a defamation case, the Pandey order is unenforceable.

If you disagree, please provide us with legal authority so we can assess those arguments. Unless we hear from you otherwise, we will assume that you concede that the order binds only Reuters and Google and that you will cease asserting otherwise to our clients or to anyone else.

We have not yet received any response from AOATC. We hope that others who have received takedown requests and demands from AOATC will examine their assertions with a critical eye.  

If a relatively obscure company like AOATC or an oligarch like Rajat Khare can succeed in keeping their name out of the public discourse with strategic lawsuits, it sets a dangerous precedent for other larger, better-resourced, and more well-known companies such as Dark Matter or NSO Group to do the same. This would be a disaster for civil society, a disaster for security research, and a disaster for freedom of expression.

Cooper Quintin

Protect Good Faith Security Research Globally in Proposed UN Cybercrime Treaty


Statement submitted to the UN Ad Hoc Committee Secretariat by the Electronic Frontier Foundation, accredited under operative paragraph No. 9 of UN General Assembly Resolution 75/282, on behalf of 124 signatories.

We, the undersigned, representing a broad spectrum of the global security research community, write to express our serious concerns about the UN Cybercrime Treaty drafts released during the sixth session and the most recent one. These drafts pose substantial risks to global cybersecurity and significantly impact the rights and activities of good faith cybersecurity researchers.

Our community, which includes good faith security researchers in academia and cybersecurity companies, as well as those working independently, plays a critical role in safeguarding information technology systems. We identify vulnerabilities that, if left unchecked, can spread malware, cause data breaches, and give criminals access to sensitive information of millions of people. We rely on the freedom to openly discuss, analyze, and test these systems, free of legal threats.

The nature of our work is to research, discover, and report vulnerabilities in networks, operating systems, devices, firmware, and software. However, several provisions in the draft treaty risk hindering our work by categorizing much of it as criminal activity. If adopted in its current form, the proposed treaty would increase the risk that good faith security researchers could face prosecution, even when our goal is to enhance technological safety and educate the public on cybersecurity matters. It is critical that legal frameworks support our efforts to find and disclose technological weaknesses to make everyone more secure, rather than penalize us, and chill the very research and disclosure needed to keep us safe. This support is essential to improving the security and safety of technology for everyone across the world.

Equally important is our ability to differentiate our legitimate security research activities from malicious exploitation of security flaws. Current laws focusing on “unauthorized access” can be misapplied to good faith security researchers, leading to unnecessary legal challenges. In addressing this, we must consider two potential obstacles to our vital work. Broad, undefined rules for prior authorization risk deterring good faith security researchers, as they may not understand when or under what circumstances they need permission. This lack of clarity could ultimately weaken everyone's online safety and security. Moreover, our work often involves uncovering unknown vulnerabilities. These are security weaknesses that no one, including the system's owners, knows about until we discover them. We cannot be certain what vulnerabilities we might find. Therefore, requiring us to obtain prior authorization for each potential discovery is impractical and overlooks the essence of our work.

The unique strength of the security research community lies in its global focus, which prioritizes safeguarding infrastructure and protecting users worldwide, often putting aside geopolitical interests. Our work, particularly the open publication of research, minimizes and prevents harm that could impact people globally, transcending particular jurisdictions. The proposed treaty’s failure to exempt good faith security research from the expansive scope of its cybercrime prohibitions and to make the safeguards and limitations in Articles 6-10 mandatory leaves the door wide open for states to suppress or control the flow of security-related information. This would undermine the universal benefit of openly shared cybersecurity knowledge, and ultimately the safety and security of the digital environment.

We urge states to recognize the vital role the security research community plays in defending our digital ecosystem against cybercriminals, and call on delegations to ensure that the treaty supports, rather than hinders, our efforts to enhance global cybersecurity and prevent cybercrime. Specifically:

Article 6 (Illegal Access): This article risks criminalizing essential activities in security research, particularly where researchers access systems without prior authorization, to identify vulnerabilities. A clearer distinction is needed between malicious unauthorized access “without right” and “good faith” security research activities; safeguards for legitimate activities should be mandatory. A malicious intent requirement—including an intent to cause damage, defraud, or harm—is needed to avoid criminal liability for accidental or unintended access to a computer system, as well as for good faith security testing.

Article 6 should not use the ambiguous term “without right” as a basis for establishing criminal liability for unauthorized access. Apart from potentially criminalizing security research, similar provisions have also been misconstrued to attach criminal liability to minor violations committed deliberately or accidentally by authorized users. For example, violation of private terms of service (TOS)–a minor infraction ordinarily considered a civil issue–could be elevated into a criminal offense category via this treaty on a global scale.

Additionally, the treaty currently gives states the option to define unauthorized access in national law as the bypassing of security measures. This should not be optional, but rather a mandatory safeguard, to avoid criminalizing routine behavior such as changing one’s IP address, inspecting website code, and accessing unpublished URLs. Furthermore, it is crucial to specify that the bypassed security measures must be actually "effective." This distinction is important because it ensures that criminalization is precise and scoped to activities that cause harm. For instance, bypassing basic measures like geoblocking–which can be done innocently simply by changing location–should not be treated the same as overcoming robust security barriers with the intention to cause harm.

By adopting this safeguard and ensuring that security measures are indeed effective, the proposed treaty would shield researchers from arbitrary criminal sanctions for good faith security research.

These changes would clarify unauthorized access, more clearly differentiating malicious hacking from legitimate cybersecurity practices like security research and vulnerability testing. Adopting these amendments would enhance protection for cybersecurity efforts and more effectively address concerns about harmful or fraudulent unauthorized intrusions.

Article 7 (Illegal Interception): Analysis of network traffic is also a common practice in cybersecurity; this article currently risks criminalizing such analysis and should similarly be narrowed to require criminal intent (mens rea) to harm or defraud.

Article 8 (Interference with Data) and Article 9 (Interference with Computer Systems): These articles may inadvertently criminalize acts of security research, which often involve testing the robustness of systems by simulating attacks through interferences. As with prior articles, criminal intent to cause harm or defraud is not mandated, and a requirement that the activity cause serious harm is absent from Article 9 and optional in Article 8. These safeguards should be mandatory.

Article 10 (Misuse of Devices): The broad scope of this article could criminalize the legitimate use of tools employed in cybersecurity research, thereby affecting the development and use of these tools. Under the current draft, Article 10(2) specifically addresses the misuse of cybersecurity tools. It criminalizes obtaining, producing, or distributing these tools only if they are intended for committing cybercrimes as defined in Articles 6 to 9 (which cover illegal access, interception, data interference, and system interference). However, this also raises a concern. If Articles 6 to 9 do not explicitly protect activities like security testing, Article 10(2) may inadvertently criminalize security researchers. These researchers often use similar tools for legitimate purposes, like testing and enhancing system security. Without a narrow scope and clear safeguards in Articles 6-9, these well-intentioned activities could fall under legal scrutiny, despite not being aligned with the criminal malicious intent (mens rea) targeted by Article 10(2).

Article 22 (Jurisdiction): In combination with other provisions about measures that may be inappropriately used to punish or deter good-faith security researchers, the overly broad jurisdictional scope outlined in Article 22 also raises significant concerns. Under the article's provisions, security researchers discovering or disclosing vulnerabilities to keep the digital ecosystem secure could be subject to criminal prosecution simultaneously across multiple jurisdictions. This would have a chilling effect on essential security research globally and hinder researchers' ability to contribute to global cybersecurity. To mitigate this, we suggest revising Article 22(5) to prioritize “determining the most appropriate jurisdiction for prosecution” rather than “coordinating actions.” This shift could prevent the redundant prosecution of security researchers. Additionally, deleting Article 17 and limiting the scope of procedural and international cooperation measures to crimes defined in Articles 6 to 16 would further clarify and protect against overreach.

Article 28(4): This article is gravely concerning from a cybersecurity perspective. It empowers authorities to compel “any individual” with knowledge of computer systems to provide any “necessary information” for conducting searches and seizures of computer systems. This provision can be abused to force security experts, software engineers and/or tech employees to expose sensitive or proprietary information. It could also encourage authorities to bypass normal channels within companies and coerce individual employees, under the threat of criminal prosecution, to provide assistance in subverting technical access controls such as credentials, encryption, and just-in-time approvals without their employers’ knowledge. This dangerous paragraph must be removed in favor of the general duty for custodians of information to comply with lawful orders to the extent of their ability.

Security researchers—whether within organizations or independent—discover, report, and assist in fixing tens of thousands of critical vulnerabilities, as reflected in the Common Vulnerabilities and Exposures (CVE) entries recorded over the lifetime of the National Vulnerability Database. Our work is a crucial part of the security landscape, yet it often faces serious legal risk from overbroad cybercrime legislation.

While the proposed UN Cybercrime Treaty's core cybercrime provisions closely mirror the Council of Europe’s Budapest Convention, the impact of cybercrime regimes and security research has evolved considerably in the two decades since that treaty was adopted in 2001. In that time, good faith cybersecurity researchers have faced significant repercussions for responsibly identifying security flaws. Concurrently, a number of countries have enacted legislative or other measures to protect the critical line of defense this type of research provides. The UN Treaty should learn from these past experiences by explicitly exempting good faith cybersecurity research from the scope of the treaty. It should also make existing safeguards and limitations mandatory. This change is essential to protect the crucial work of good faith security researchers and ensure the treaty remains effective against current and future cybersecurity challenges.

Since these negotiations began, we had hoped that governments would adopt a treaty that strengthens global computer security and enhances our ability to combat cybercrime. Unfortunately, the draft text, as written, would have the opposite effect. The current text would weaken cybersecurity and make it easier for malicious actors to create or exploit weaknesses in the digital ecosystem by subjecting us to criminal prosecution for good faith work that keeps us all safer. Such an outcome would undermine the very purpose of the treaty: to protect individuals and our institutions from cybercrime.


Individual Signatories
Jobert Abma, Co-Founder, HackerOne (United States)
Martin Albrecht, Chair of Cryptography, King's College London (Global)
Nicholas Allegra (United States)
Ross Anderson, Universities of Edinburgh and Cambridge (United Kingdom)
Diego F. Aranha, Associate Professor, Aarhus University (Denmark)
Kevin Beaumont, Security Researcher (Global)
Steven Becker (Global)
Janik Besendorf, Security Researcher (Global)
Wietse Boonstra (Global)
Juan Brodersen, Cybersecurity Reporter, Clarin (Argentina)
Sven Bugiel, Faculty, CISPA Helmholtz Center for Information Security (Germany)
Jon Callas, Founder and Distinguished Engineer, Zatik Security (Global)
Lorenzo Cavallaro, Professor of Computer Science, University College London (Global)
Joel Cardella, Cybersecurity Researcher (Global)
Inti De Ceukelaire (Belgium)
Enrique Chaparro, Information Security Researcher (Global)
David Choffnes, Associate Professor and Executive Director of the Cybersecurity and Privacy Institute at Northeastern University (United States/Global)
Gabriella Coleman, Full Professor Harvard University (United States/Europe)
Cas Cremers, Professor and Faculty, CISPA Helmholtz Center for Information Security (Global)
Daniel Cuthbert (Europe, Middle East, Africa)
Ron Deibert, Professor and Director, the Citizen Lab at the University of Toronto's Munk School (Canada)
Domingo, Security Incident Handler, Access Now (Global)
Stephane Duguin, CEO, CyberPeace Institute (Global)
Zakir Durumeric, Assistant Professor of Computer Science, Stanford University; Chief Scientist, Censys (United States)
James Eaton-Lee, CISO, NetHope (Global)
Serge Egelman, University of California, Berkeley; Co-Founder and Chief Scientist, AppCensus (United States/Global)
Jen Ellis, Founder, NextJenSecurity (United Kingdom/Global)
Chris Evans, Chief Hacking Officer @ HackerOne; Founder @ Google Project Zero (United States)
Dra. Johanna Caterina Faliero, PhD; Professor, Faculty of Law, University of Buenos Aires; Professor, University of National Defence (Argentina/Global)
Dr. Ali Farooq, University of Strathclyde, United Kingdom (Global)
Victor Gevers, co-founder of the Dutch Institute for Vulnerability Disclosure (Netherlands)
Abir Ghattas (Global)
Ian Goldberg, Professor and Canada Research Chair in Privacy Enhancing Technologies, University of Waterloo (Canada)
Matthew D. Green, Associate Professor, Johns Hopkins University (United States)
Harry Grobbelaar, Chief Customer Officer, Intigriti (Global)
Juan Andrés Guerrero-Saade, Associate Vice President of Research, SentinelOne (United States/Global)
Mudit Gupta, Chief Information Security Officer, Polygon (Global)
Hamed Haddadi, Professor of Human-Centred Systems at Imperial College London; Chief Scientist at Brave Software (Global)
J. Alex Halderman, Professor of Computer Science & Engineering and Director of the Center for Computer Security & Society, University of Michigan (United States)
Joseph Lorenzo Hall, PhD, Distinguished Technologist, The Internet Society
Dr. Ryan Henry, Assistant Professor and Director of Masters of Information Security and Privacy Program, University of Calgary (Canada)
Thorsten Holz, Professor and Faculty, CISPA Helmholtz Center for Information Security, Germany (Global)
Joran Honig, Security Researcher (Global)
Wouter Honselaar, MSc student security; hosting engineer & volunteer, Dutch Institute for Vulnerability Disclosure (DIVD)(Netherlands)
Prof. Dr. Jaap-Henk Hoepman (Europe)
Christian “fukami” Horchert (Germany / Global)
Andrew 'bunnie' Huang, Researcher (Global)
Dr. Rodrigo Iglesias, Information Security, Lawyer (Argentina)
Hudson Jameson, Co-Founder - Security Alliance (SEAL)(Global)
Stijn Jans, CEO of Intigriti (Global)
Gerard Janssen, Dutch Institute for Vulnerability Disclosure (DIVD)(Netherlands)
JoyCfTw, Hacktivist (United States/Argentina/Global)
Doña Keating, President and CEO, Professional Options LLC (Global)
Olaf Kolkman, Principal, Internet Society (Global)
Federico Kirschbaum, Co-Founder & CEO of Faraday Security, Co-Founder of Ekoparty Security Conference (Argentina/Global)
Xavier Knol, Cybersecurity Analyst and Researcher (Global)
Micah Lee, Director of Information Security, The Intercept (United States)
Jan Los (Europe/Global)
Matthias Marx, Hacker (Global)
Keane Matthews, CISSP (United States)
René Mayrhofer, Full Professor and Head of Institute of Networks and Security, Johannes Kepler University Linz, Austria (Austria/Global)
Ron Mélotte (Netherlands)
Hans Meuris (Global)
Marten Mickos, CEO, HackerOne (United States)
Adam Molnar, Assistant Professor, Sociology and Legal Studies, University of Waterloo (Canada/Global)
Jeff Moss, Founder of the information security conferences DEF CON and Black Hat (United States)
Katie Moussouris, Founder and CEO of Luta Security; coauthor of ISO standards on vulnerability disclosure and handling processes (Global)
Alec Muffett, Security Researcher (United Kingdom)
Kurt Opsahl, Associate General Counsel for Cybersecurity and Civil Liberties Policy, Filecoin Foundation; President, Security Researcher Legal Defense Fund (Global)
Ivan "HacKan" Barrera Oro (Argentina)
Chris Palmer, Security Engineer (Global)
Yanna Papadodimitraki, University of Cambridge (United Kingdom/European Union/Global)
Sunoo Park, New York University (United States)
Mathias Payer, Associate Professor, École Polytechnique Fédérale de Lausanne (EPFL)(Global)
Giancarlo Pellegrino, Faculty, CISPA Helmholtz Center for Information Security, Germany (Global)
Fabio Pierazzi, King’s College London (Global)
Bart Preneel, full professor, University of Leuven, Belgium (Global)
Michiel Prins, Founder @ HackerOne (United States)
Joel Reardon, Professor of Computer Science, University of Calgary, Canada; Co-Founder of AppCensus (Global)
Alex Rice, Co-Founder & CTO, HackerOne (United States)
René Rehme, rehme.infosec (Germany)
Tyler Robinson, Offensive Security Researcher (United States)
Michael Roland, Security Researcher and Lecturer, Institute of Networks and Security, Johannes Kepler University Linz; Member, SIGFLAG - Verein zur (Austria/Europe/Global)
Christian Rossow, Professor and Faculty, CISPA Helmholtz Center for Information Security, Germany (Global)
Pilar Sáenz, Coordinator Digital Security and Privacy Lab, Fundación Karisma (Colombia)
Runa Sandvik, Founder, Granitt (United States/Global)
Koen Schagen (Netherlands)
Sebastian Schinzel, Professor at University of Applied Sciences Münster and Fraunhofer SIT (Germany)
Bruce Schneier, Fellow and Lecturer, Harvard Kennedy School (United States)
HFJ Schokkenbroek (hp197), IFCAT board member (Netherlands)
Javier Smaldone, Security Researcher (Argentina)
Guillermo Suarez-Tangil, Assistant Professor, IMDEA Networks Institute (Global)
Juan Tapiador, Universidad Carlos III de Madrid, Spain (Global)
Dr Daniel R. Thomas, University of Strathclyde, StrathCyber, Computer & Information Sciences (United Kingdom)
Cris Thomas (Space Rogue), IBM X-Force (United States/Global)
Carmela Troncoso, Assistant Professor, École Polytechnique Fédérale de Lausanne (EPFL) (Global)
Narseo Vallina-Rodriguez, Research Professor at IMDEA Networks/Co-founder AppCensus Inc (Global)
Jeroen van der Broek, IT Security Engineer (Netherlands)
Jeroen van der Ham-de Vos, Associate Professor, University of Twente, The Netherlands (Global)
Charl van der Walt, Head of Security Research, Orange Cyberdefense (a division of Orange Networks) (South Africa/France/Global)
Chris van 't Hof, Managing Director DIVD, Dutch Institute for Vulnerability Disclosure (Global)
Dimitri Verhoeven (Global)
Tarah Wheeler, CEO Red Queen Dynamics & Senior Fellow Global Cyber Policy, Council on Foreign Relations (United States)
Dominic White, Ethical Hacking Director, Orange Cyberdefense (a division of Orange Networks) (South Africa/Europe)
Eddy Willems, Security Evangelist (Global)
Christo Wilson, Associate Professor, Northeastern University (United States)
Robin Wilton, IT Consultant (Global)
Tom Wolters (Netherlands)
Mehdi Zerouali, Co-founder & Director, Sigma Prime (Australia/Global)

Organizational Signatories
Dutch Institute for Vulnerability Disclosure (DIVD) (Netherlands)
Fundación Via Libre (Argentina)
Good Faith Cybersecurity Researchers Coalition (European Union)
Access Now (Global)
Chaos Computer Club (CCC) (Europe)
HackerOne (Global)
Hacking Policy Council (United States)
HINAC (Hacking is not a Crime) (United States/Argentina/Global)
Intigriti (Global)
Jolo Secure (Latin America)
K+LAB, Digital security and privacy Lab, Fundación Karisma (Colombia)
Luta Security (Global)
OpenZeppelin (United States)
Professional Options LLC (Global)
Stichting International Festivals for Creative Application of Technology Foundation

Karen Gullo

Draft UN Cybercrime Treaty Could Make Security Research a Crime, Leading 124 Experts to Call on UN Delegates to Fix Flawed Provisions that Weaken Everyone’s Security

2 months 3 weeks ago

Security researchers’ work discovering and reporting vulnerabilities in software, firmware, networks, and devices protects people, businesses, and governments around the world from malware, theft of critical data, and other cyberattacks. The internet and the digital ecosystem are safer because of their work.

The UN Cybercrime Treaty, which is in the final stages of drafting in New York this week, risks criminalizing this vitally important work. This is appalling and wrong, and must be fixed.

One hundred and twenty-four prominent security researchers and cybersecurity organizations from around the world voiced their concern today about the draft and called on UN delegates to modify flawed language in the text that would hinder researchers’ efforts to enhance global security and prevent the actual criminal activity the treaty is meant to rein in.

Time is running out—the final negotiations over the treaty end Feb. 9. The talks are the culmination of two years of negotiations; EFF and its international partners have raised concerns over the treaty’s flaws since the beginning. If approved as is, the treaty will substantially impact criminal laws around the world and grant expansive new police powers for both domestic and international criminal investigations.

Experts who work globally to find and fix vulnerabilities before real criminals can exploit them said in a statement today that vague language and overbroad provisions in the draft increase the risk that researchers could face prosecution. The draft fails to protect the good faith work of security researchers who may bypass security measures and gain access to computer systems in identifying vulnerabilities, the letter says.

The draft threatens security researchers because it doesn’t specify that access to computer systems with no malicious intent to cause harm, steal, or infect with malware should not be subject to prosecution. If left unchanged, the treaty would be a major blow to cybersecurity around the world.

Specifically, security researchers seek changes to Article 6, which risks criminalizing essential activities, including accessing systems without prior authorization to identify vulnerabilities. The current text also includes the ambiguous term “without right” as a basis for establishing criminal liability for unauthorized access. Clarification of this vague language, as well as a requirement that unauthorized access be done with malicious intent, is needed to protect security research.

The signers also called out Article 28(4), which empowers States to force “any individual” with knowledge of computer systems to turn over any information necessary to conduct searches and seizures of computer systems. This dangerous paragraph must be removed and replaced with language specifying that custodians must only comply with lawful orders to the extent of their ability.

There are many other problems with the draft treaty—it lacks human rights safeguards, gives States powers to reach across borders to surveil and collect personal information of people in other States, and forces tech companies to collude with law enforcement in alleged cybercrime investigations.

EFF and its international partners have been pressing hard for human rights safeguards and other fixes to ensure that the fight against cybercrime does not require sacrificing fundamental rights. We stand with security researchers in demanding amendments to ensure the treaty is not used as a tool to threaten, intimidate, or prosecute them, software engineers, security teams, and developers.

 For the statement:
https://www.eff.org/deeplinks/2024/02/protect-good-faith-security-research-globally-proposed-un-cybercrime-treaty

For more on the treaty:
https://ahc.derechosdigitales.org/en/

Karen Gullo

What is Proposition E and Why Should San Francisco Voters Oppose It?

3 months ago

If you live in San Francisco, there is an election on March 5, 2024 during which voters will decide a number of specific local ballot measures—including Proposition E. Proponents of Proposition E have raised over $1 million, but what does the measure actually do? This post will break down what the initiative actually does, why it is dangerous for San Franciscans, and why you should oppose it.

What Does Proposition E Do?

Proposition E is a “kitchen sink” approach to public safety that capitalizes on residents’ fear of crime in an attempt to gut common-sense democratic oversight of the San Francisco Police Department (SFPD). In addition to removing certain police oversight authority from the Police Commission and expanding the circumstances under which police may conduct high-speed vehicle chases, Proposition E would also amend existing laws passed in 2019 to protect San Franciscans from invasive, untested, or biased police technologies.

Currently, if police want to acquire a new technology, they have to go through a procedure known as CCOPS—Community Control Over Police Surveillance. This means that police need to explain why they need a new piece of technology and provide a detailed use policy to the democratically-elected Board of Supervisors, who then vote on it. The process also allows for public comment so people can voice their support for, concerns about, or opposition to the new technology. This process is in no way designed to universally deny police new technologies. Instead, it ensures that when police want new technology that may have significant impacts on communities, those voices have an opportunity to be heard and considered. San Francisco police have used this procedure to get new technological capabilities as recently as Fall 2022 in a way that stimulated discussion, garnered community involvement and opposition (including from EFF), and still passed.

Proposition E guts these common-sense protective measures designed to bring communities into the conversation about public safety. If Proposition E passes on March 5, then the SFPD can use any technology they want for a full year without publishing an official policy about how they’d use the technology or allowing community members to voice their concerns—or really allowing for any accountability or transparency at all.

Why is Proposition E Dangerous and Unnecessary?

Across the country, police often buy and deploy surveillance equipment without residents of their towns even knowing what police are using or how they’re using it. This means that dangerous technologies—technologies other cities have even banned—are being used without any transparency or accountability. San Franciscans advocated for and overwhelmingly supported a law that provides them with more knowledge of, and a voice in, what technologies the police use. Under the current law, if the SFPD wanted to use racist predictive policing algorithms that U.S. Senators are currently advising the Department of Justice to stop funding, or if the SFPD wanted to buy up geolocation data harvested from people’s cell phones and sold on the advertising data broker market, it would have to let the public know and put the proposal to a vote before the city’s democratically-elected governing body first. Proposition E would gut any meaningful democratic check on police’s acquisition and use of surveillance technologies.

It’s not just that these technologies could potentially harm San Franciscans by, for instance, directing armed police at them due to reliance on a faulty algorithm or putting already-marginalized communities at further risk of overpolicing and surveillance—it’s also important to note that studies find that these technologies just don’t work. Police often look to technology as a silver bullet to fight crime, despite evidence suggesting otherwise. Oversight over what technology the SFPD uses doesn’t just allow for scrutiny of discriminatory and biased policing; it also introduces a much-needed dose of reality. If police want to spend hundreds of thousands of dollars a year on software that has a success rate of 0.6% at predicting crime, they should have to go through a public process before they fork over taxpayer dollars.

What Technology Would Proposition E Allow the Police to Use?

That’s the thing—we don’t know, and if Proposition E passes, we may never know. Today, if police decide to use a piece of surveillance technology, there is a process for sharing that information with the public. With Proposition E, that process won’t happen until the technology has been in use for a full year. And if police abandon use of a technology before a year, we may never find out what technology police tried out and how they used it. Even though we don’t know what technologies the SFPD is eyeing, we do know what technologies other police departments have been buying in cities around the country: AI-based “predictive policing” and social media scanning tools are just two examples. And according to the City Attorney, Proposition E would even enable the SFPD to outfit surveillance tools such as drones and surveillance cameras with face recognition technology.

Why You Should Vote No on Proposition E

San Francisco, like many other cities, has its problems, but none of those problems will be solved by removing oversight over what technologies police spend our public money on and deploy in our neighborhoods—especially when so much police technology is known to be racially biased, invasive, or faulty. Voters should think about what San Francisco actually needs and how Proposition E is more likely to exacerbate the problems of police violence than it is to magically erase crime in the city. This is why we are urging a NO vote on Proposition E on the March 5 ballot.

Matthew Guariglia

San Francisco Police’s Live Surveillance Yields Almost 200 Hours of Spying–Including of Music Festivals

3 months ago

A new report reveals that in just three months, from July 1 to September 30, 2023, the San Francisco Police Department (SFPD) racked up 193 hours and 19 minutes of live access to non-city surveillance cameras. That means that for the equivalent of 8 days, police sat behind a desk and tapped into hundreds of cameras, including San Francisco’s extensive semi-private security camera networks, to watch city residents, workers, and visitors live. An article by the San Francisco Chronicle analyzing the report also uncovered that the SFPD tapped into these cameras to watch 42 hours of live footage during the Outside Lands music festival.

The city’s Board of Supervisors granted police permission to get live access to these cameras in September 2022 as part of a 15-month pilot program to see if allowing police to conduct widespread, live surveillance would create more safety for all people. However, even before this legislation’s passage, the SFPD covertly used non-city security cameras to monitor protests and other public events. In fact, police and the rich man who funded large networks of semi-private surveillance cameras both claimed publicly that the police department could easily access historic footage of incidents after the fact to help build cases, but could not peer through the cameras live. This claim was debunked by EFF and other investigators, who revealed that police requested live access to semi-private cameras to monitor protests, parades, and public events—despite the fact that such activities are protected by the First Amendment.

When the Board of Supervisors passed this ordinance, which allowed police live access to non-city cameras for criminal investigations (for up to 24 hours after an incident) and for large-scale events, we warned that police would use this newfound power to put huge swaths of the city under surveillance—and we were unfortunately correct.

The most egregious example from the report is the 42 hours of live surveillance conducted during the Outside Lands music festival, which yielded five arrests for theft, pickpocketing, and resisting arrest—and only one of which resulted in the District Attorney’s office filing charges. Despite proponents’ arguments that live surveillance would promote efficiency in policing, in this case, it resulted in a massive use of police resources with little to show for it.

There still remain many unanswered questions about how the police are using these cameras. As the Chronicle article recognized:

…nearly a year into the experiment, it remains unclear just how effective the strategy of using private cameras is in fighting crime in San Francisco, in part because the Police Department’s disclosures don’t provide information on how live footage was used, how it led to arrests and whether police could have used other methods to make those arrests.

The need for greater transparency—and at minimum, for the police to follow all reporting requirements mandated by the non-city surveillance camera ordinance—is crucial to truly evaluate the impact that access to live surveillance has had on policing. In particular, the SFPD’s data fails to make clear how live surveillance helps police prevent or solve crimes in a way that footage after the fact does not. 

Nonetheless, surveillance proponents tout this report as showing that real-time access to non-city surveillance cameras is effective in fighting crime. Many are using this to push for a measure on the March 5, 2024 ballot, Proposition E, which would roll back police accountability measures and grant even more surveillance powers to the SFPD. In particular, Prop E would allow the SFPD a one-year pilot period to test out any new surveillance technology, without any use policy or oversight by the Board of Supervisors. As we’ve stated before, this initiative is bad all around—for policing, for civil liberties, and for all San Franciscans.

Police in San Francisco still don’t get it. They can continue to heap more time, money, and resources into fighting oversight and amassing all sorts of surveillance technology—but at the end of the day, this still won’t help combat the societal issues the city faces. Technologies touted as being useful in extreme cases will just end up as an oversized tool for policing misdemeanors and petty infractions, and will undoubtedly put already-marginalized communities further under the microscope. Just as it’s time to continue asking questions about what live surveillance helps the SFPD accomplish, it’s also time to oppose the erosion of existing oversight by voting NO on Proposition E on March 5. 

Saira Hussain

Worried About AI Voice Clone Scams? Create a Family Password

3 months ago

Your grandfather receives a call late at night from a person pretending to be you. The caller says that you are in jail or have been kidnapped and that they need money urgently to get you out of trouble. Perhaps they then bring on a fake police officer or kidnapper to heighten the tension. The money, of course, should be wired right away to an unfamiliar account at an unfamiliar bank. 

It’s a classic and common scam, and like many scams it relies on a scary, urgent scenario to override the victim’s common sense and make them more likely to send money. Now, scammers are reportedly experimenting with a way to further heighten that panic by playing a simulated recording of “your” voice. Fortunately, there’s an easy and old-school trick you can use to preempt the scammers: creating a shared verbal password with your family.

The ability to create audio deepfakes of people’s voices using machine learning and just minutes of them speaking has become relatively cheap and easy to acquire. There are myriad websites that will let you make voice clones. Some will let you use a variety of celebrity voices to say anything you want, while others will let you upload a recording of a new person’s voice to create a voice clone of anyone. Scammers have figured out that they can use this to clone the voices of regular people. Suddenly your relative isn’t talking to someone who sounds like a complete stranger; they are hearing your own voice. This makes the scam much more concerning.

Voice generation scams aren’t widespread yet, but they do seem to be happening. There have been news stories and even congressional testimony from people who have been the targets of voice impersonation scams. Voice cloning is also being used in political disinformation campaigns. It’s impossible for us to know what kind of technology these scammers used, or whether these were just really good impersonations. But it is likely that such scams will grow more prevalent as the technology gets cheaper and more ubiquitous. For now, the novelty of these scams, and the use of machine learning and deepfakes, technologies which are raising concerns across many sectors of society, seems to be driving a lot of the coverage.

The family password is a decades-old, low tech solution to this modern high tech problem. 

The first step is to agree with your family on a password you can all remember and use. The most important thing is that it should be easy to remember in a panic, hard to forget, and not public information. You could use the name of a well-known person or object in your family, an inside joke, a family meme, or any word that you can all remember easily. Despite the name, this doesn’t need to be limited to your family; it can be a chosen family, workplace, anarchist witch coven, etc. Any group of people with whom you associate can benefit from having a password.

Then, when someone calls (or emails or texts) you or someone who trusts you with an urgent request for money (or iTunes gift cards), you simply ask them for the password. If they can’t tell it to you, then they might be a fake. You could of course further verify this with other questions, like “what is my cat’s name?” or “when was the last time we saw each other?” These sorts of questions work even if you haven’t previously set up a passphrase in your family or friend group. But keep in mind that people tend to forget basic things when they have experienced trauma or are in a panic. It might be helpful, especially for people with less robust memories, to write down the password in case you forget it. After all, it’s not likely that the scammer will break into your house to find the family password.
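The check itself is nothing more than comparing what a caller says against the agreed-upon secret. As a purely illustrative sketch (the phrase and function name below are invented for this example; a real family password lives in people's heads, not in code), the logic looks like this in Python:

```python
import hmac

# Hypothetical shared secret: whatever phrase your group agreed on.
FAMILY_PASSWORD = "purple-stapler"

def caller_is_verified(spoken_password: str) -> bool:
    """Return True only if the caller produces the agreed-upon phrase.

    Normalizing case and whitespace mirrors how a spoken password is
    checked in practice. hmac.compare_digest does a constant-time
    comparison; that hardly matters for a spoken phrase, but it is a
    good habit in any secret-checking code.
    """
    return hmac.compare_digest(
        spoken_password.strip().lower(),
        FAMILY_PASSWORD.strip().lower(),
    )

# A caller who knows the phrase passes; a scammer with an urgent
# story but no password does not.
print(caller_is_verified("Purple-Stapler"))        # the real caller
print(caller_is_verified("wire the money now!"))   # the scammer
```

The point of the sketch is the fallback behavior: anything other than the exact agreed phrase fails, which is exactly how you should treat an urgent caller who can't answer.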

These techniques can be useful against other scams which haven’t been invented yet, but which may come around as deepfakes become more prevalent, such as machine-generated video or photo avatars for “proof.” Or should you ever find yourself in a hackneyed sci-fi situation where there are two identical copies of your friend and you aren’t sure which one is the evil clone and which one is the original. 

Spider-man hopes The Avengers haven't forgotten their secret password!

The added benefit of this technique is that it gives you a minute to step back, breathe, and engage in some critical thinking. Many scams of this nature rely on panic and keeping you in your lower brain; asking for the passphrase also buys you a minute to think. Is your kid really in Mexico right now? Can you call them back at their phone number to be sure it’s them?

So, go make a family password and a friend password to keep your family and friends from getting scammed by AI impostors (or evil clones).

Cooper Quintin

What Apple's Promise to Support RCS Means for Text Messaging

3 months ago

You may have heard recently that Apple is planning to implement Rich Communication Services (RCS) on iPhones, once again igniting the green versus blue bubble debate. RCS will thankfully bring a number of long-missing features to those green bubble conversations in Messages, but Apple's proposed implementation has a murkier future when it comes to security. 

The RCS standard will replace SMS, the protocol behind basic everyday text messages, and MMS, the protocol for sending pictures in text messages. RCS has a number of improvements over SMS, including longer messages, high-quality pictures, read receipts, typing indicators, GIFs, location sharing, the ability to send and receive messages over Wi-Fi, and improved group messaging. Basically, it’s a modern messaging standard with features people have grown to expect.

The RCS standard is maintained by the same standards body (GSMA) that wrote the standard for SMS and many other core mobile functions. It has been in the works since 2007 and supported by Google since 2019. Apple had previously said it wouldn’t support RCS, but recently came around and declared that it will support sending and receiving RCS messages starting sometime in 2024. This is a win for user experience and interoperability, since iPhone and Android users will now be able to send each other rich modern text messages using their phones’ default messaging apps.

But is it a win for security? 

On its own, the core RCS protocol is currently no more secure than SMS. The protocol is not encrypted by default, meaning that anyone at your phone company, or any law enforcement agent (ordinarily with a warrant), will be able to see the contents and metadata of your RCS messages. The RCS protocol by itself does not specify or recommend any type of end-to-end encryption. The only encryption applied to messages is the incidental transport encryption between your phone and the cell tower. This is the same way it works for SMS.

But what’s exciting about RCS is its native support for extensions. Google has taken advantage of this ability to implement its own plan for encryption on top of RCS using a version of the Signal protocol. As of now, this only works when both users are using Google’s default messaging app (Google Messages) and both phone companies support RCS messaging (the big three in the U.S. all do, as do a majority around the world). If either side doesn’t support encryption, the conversation falls back to the default unencrypted version. A phone company could also actively block encrypted RCS in a specific region, for a specific user, or for a specific pair of users by pretending it doesn’t support RCS. In that case the user will be given the option of resending the messages unencrypted, but can choose not to send the message over the unencrypted channel.

Google’s implementation of encrypted RCS also doesn’t hide any metadata about your messages, so law enforcement could still get a record of who you conversed with, how many messages were sent, at what times, and how big the messages were. It’s a significant security improvement over SMS, but people with heightened risk profiles should still consider apps that leak less metadata, like Signal. Despite those caveats, this is a good step by Google towards a fully encrypted text messaging future.
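The difference between transport encryption and end-to-end encryption comes down to who holds the keys. The toy Python model below makes that concrete (XOR with a random pad stands in for a real cipher, and the key sharing is assumed to have happened out of band; real RCS and Signal use proper key agreement, not anything like this):

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a cipher -- NOT real cryptography. It exists
    # only so the two trust models below are easy to compare.
    return bytes(b ^ k for b, k in zip(data, key))

msg = b"meet at noon"

# --- Transport encryption (SMS / plain RCS model) ---
# The radio link to the tower is encrypted, but the carrier holds
# the key, so it decrypts and sees the plaintext before relaying.
link_key = secrets.token_bytes(len(msg))
on_the_air = xor(msg, link_key)       # encrypted phone -> tower
at_carrier = xor(on_the_air, link_key)  # carrier decrypts
assert at_carrier == msg              # the carrier can read it

# --- End-to-end encryption (Signal-style model) ---
# Only the sender and recipient hold the key; the carrier relays
# ciphertext it cannot decrypt.
e2e_key = secrets.token_bytes(len(msg))
relayed = xor(msg, e2e_key)           # all the carrier ever sees
received = xor(relayed, e2e_key)      # recipient decrypts
assert received == msg                # only the endpoints read it
```

In both models the message is encrypted "on the wire"; the security difference is entirely about whether an intermediary can decrypt it, which is why transport encryption alone leaves RCS messages readable by phone companies and, with a warrant, law enforcement.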

Apple stated it will not use any type of proprietary end-to-end encryption (presumably referring to Google’s approach), but did say it would work to make end-to-end encryption part of the RCS standard. Avoiding a discordant ecosystem with a different encryption protocol for each company is a desirable goal. Ideally, Apple and Google will work together on standardizing end-to-end encryption in RCS so that the solution is guaranteed to work with both companies’ products from the outset. Hopefully encryption will be part of the RCS standard by the time Apple officially releases support for it; otherwise users will be left with the status quo of having to use third-party apps for interoperable encrypted messaging.

We hope that the GSMA members will agree on a standard soon, that any standard will use modern cryptographic techniques, and that the standard will do more to protect metadata and guard against downgrade attacks than the current implementation of encrypted RCS does. We urge Google and Apple to work with the GSMA to finalize and adopt such a standard quickly. Interoperable, encrypted text messaging by default can’t come soon enough.

Cooper Quintin

Dozens of Rogue California Police Agencies Still Sharing Driver Locations with Anti-Abortion States

3 months ago

Civil Liberties Groups Urge Attorney General Bonta to Enforce California's Automated License Plate Reader Laws

SAN FRANCISCO—California Attorney General Rob Bonta should crack down on police agencies that still violate Californians’ privacy by sharing automated license plate reader information with out-of-state government agencies, putting abortion seekers and providers at particular risk, the Electronic Frontier Foundation (EFF) and the state’s American Civil Liberties Union (ACLU) affiliates urged in a letter to Bonta today. 

In October 2023, Bonta issued a legal interpretation and guidance clarifying that a 2016 state law, SB 34, prohibits California’s local and state police from sharing information collected from automated license plate readers (ALPR) with out-of-state or federal agencies. However, despite the Attorney General’s definitive stance, dozens of law enforcement agencies have signaled their intent to continue defying the law. 

The EFF and ACLU letter lists 35 specific police agencies that either have informed the civil liberties organizations of their plans to keep sharing ALPR information with out-of-state law enforcement, or have failed to confirm their compliance with the law in response to the organizations’ inquiries.

“We urge your office to explore all potential avenues to ensure that state and local law enforcement agencies immediately comply,” the letter said. “We are deeply concerned that the information could be shared with agencies that do not respect California’s commitment to civil rights and liberties and are not covered by California’s privacy protections.” 

ALPR systems collect and store location information about drivers, including dates, times, and locations. This sensitive information can reveal where individuals work, live, associate, worship, or seek reproductive health services and other medical care. Sharing any ALPR information with out-of-state or federal law enforcement agencies has been forbidden by the California Civil Code since enactment of SB 34 in 2016.  

And sharing this data with law enforcement in states that criminalize abortion also undermines California’s extensive efforts to protect reproductive health privacy, especially a 2022 law (AB 1242) prohibiting state and local agencies from providing abortion-related information to out-of-state agencies. The UCLA Center on Reproductive Health, Law and Policy estimates that between 8,000 and 16,100 people will travel to California each year for reproductive care. 

An EFF investigation involving hundreds of public records requests uncovered that many California police departments continued sharing records containing residents’ detailed driving profiles with out-of-state agencies. EFF and the ACLUs of Northern and Southern California in March 2023 wrote to more than 70 such agencies to demand they comply with state law. While many complied, many others have not. 

“We appreciate your office’s statement on SB 34 and your efforts to protect the privacy and civil rights of everyone in California,” today’s letter said. “Nevertheless, it is clear that many law enforcement agencies continue to ignore your interpretation of the law by continuing to share ALPR information with out-of-state and federal agencies. This violation of SB 34 will continue to imperil marginalized communities across the country, and abortion seekers, providers, and facilitators will be at greater risk of undue criminalization and prosecution.” 

For the letter to Bonta: https://www.eff.org/document/01-31-2024-letter-california-ag-rob-bonta-re-enforcing-sb34-alprs 

For the letters sent last year to noncompliant California police agencies: https://www.eff.org/press/releases/civil-liberties-groups-demand-california-police-stop-sharing-drivers-location-data 

For information on how ALPRs threaten abortion access: https://www.eff.org/deeplinks/2022/09/automated-license-plate-readers-threaten-abortion-access-heres-how-policymakers 

For general information about ALPRs: https://sls.eff.org/technologies/automated-license-plate-readers-alprs

Contact: Jennifer Pinsof, Staff Attorney, jpinsof@eff.org; Adam Schwartz, Privacy Litigation Director, adam@eff.org
Josh Richman

EFF and Access Now's Submission to U.N. Expert on Anti-LGBTQ+ Repression 

3 months ago

As part of the United Nations (U.N.) Independent Expert on protection against violence and discrimination based on sexual orientation and gender identity (IE SOGI) report to the U.N. Human Rights Council, EFF and Access Now have submitted information addressing digital rights and SOGI issues across the globe. 

The submission addresses the trends, challenges, and problems that people and civil society organizations face based on their real and perceived sexual orientation, gender identity, and gender expression. Our examples underscore the extensive impact of such legislation on the LGBTQ+ community, and the urgent need for legislative reform at the domestic level.

Read the full submission here.

Paige Collings

In Final Talks on Proposed UN Cybercrime Treaty, EFF Calls on Delegates to Incorporate Protections Against Spying and Restrict Overcriminalization or Reject Convention

3 months ago

Update: Delegates at the concluding negotiating session failed to reach consensus on human rights protections, government surveillance, and other key issues. The session was suspended Feb. 8 without a final draft text. Delegates will resume talks at a later date with a view to concluding their work and providing a draft convention to the UN General Assembly at its 78th session later this year.

UN Member States are meeting in New York this week to conclude negotiations over the final text of the UN Cybercrime Treaty, which—despite warnings from hundreds of civil society organizations across the globe, security researchers, media rights defenders, and the world’s largest tech companies—will, in its present form, endanger human rights and make the cyber ecosystem less secure for everyone.

EFF and its international partners are going into this last session with a unified message: without meaningful changes to limit surveillance powers for electronic evidence gathering across borders and to add robust minimum human rights safeguards that apply across borders, the convention should be rejected by state delegations and not advance to the UN General Assembly in February for adoption.

EFF and its partners have for months warned that enforcement of such a treaty would have dire consequences for human rights. On a practical level, it will impede free expression and endanger activists, journalists, dissenters, and everyday people.

Under the draft treaty's current provisions on accessing personal data for criminal investigations across borders, each country is allowed to define what constitutes a "serious crime." Such definitions can be excessively broad and violate international human rights standards. States where it’s a crime to  criticize political leaders (Thailand), upload videos of yourself dancing (Iran), or wave a rainbow flag in support of LGBTQ+ rights (Egypt), can, under this UN-sanctioned treaty, require one country to conduct surveillance to aid another, in accordance with the data disclosure standards of the requesting country. This includes surveilling individuals under investigation for these offenses, with the expectation that technology companies will assist. Such assistance involves turning over personal information, location data, and private communications secretly, without any guardrails, in jurisdictions lacking robust legal protections.

The final 10-day negotiating session in New York will conclude a series of talks that started in 2022 to create a treaty to prevent and combat core computer-enabled crimes, like distribution of malware, data interception and theft, and money laundering. From the beginning, Member States failed to reach consensus on the treaty’s scope, the inclusion of human rights safeguards, and even the definition of “cybercrime.” The treaty’s scope was too broad from the outset; Member States eventually dropped some of these offenses, limiting the scope of the criminalization section, but not the evidence-gathering provisions that hand States dangerous surveillance powers. What was supposed to be an international accord to combat core cybercrime morphed into a global surveillance agreement covering any and all crimes conceived by Member States.

The latest draft, released last November, blatantly disregards our calls to narrow the scope, strengthen human rights safeguards, and tighten loopholes enabling countries to assist each other in spying on people. It also retains a controversial provision allowing states to compel engineers or tech employees to undermine security measures, posing a threat to encryption. Absent from the draft are protections for good-faith cybersecurity researchers and others acting in the public interest.

This is unacceptable. In a Jan. 23 joint statement to delegates participating in this final session, EFF and 110 organizations outlined non-negotiable redlines for the draft that will emerge from this session, which ends Feb. 8. These include:

  • Narrowing the scope of the entire Convention to cyber-dependent crimes specifically defined within its text.
  • Including provisions to ensure that security researchers, whistleblowers, journalists, and human rights defenders are not prosecuted for their legitimate activities and that other public interest activities are protected. 
  • Guaranteeing that explicit data protection and human rights standards, such as legitimate purpose, nondiscrimination, prior judicial authorization, and necessity and proportionality, apply to the entire Convention.
  • Mainstreaming gender across the Convention as a whole and throughout each article in efforts to prevent and combat cybercrime.

It’s been a long fight pushing for a treaty that combats cybercrime without undermining basic human rights. Without these improvements, the risks of this treaty far outweigh its potential benefits. States must stand firm and reject the treaty if our redlines can’t be met. We cannot and will not support or recommend a draft that will make everyone less, instead of more, secure.

Karen Gullo

More Than a Decade Later, Site-Blocking Is Still Censorship

3 months 1 week ago

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.

As Copyright Week comes to a close, it’s worth remembering why we have it in January. Twelve years ago, a diverse coalition of internet users, websites, and public interest activists took to the internet to protest SOPA/PIPA, proposed laws that would have, among other things, blocked access to websites if they were alleged to be used for copyright infringement. More than a decade on, there still is no way to do this without causing irreparable harm to legal online expression.

A lot has changed in twelve years. Among those changes is a major shift in how we, and legislators, view technology companies. What once were new innovations have become behemoths. And what once were underdogs are now the establishment.

What has not changed, however, is the fact that much of what internet platforms are used for is legal, protected, expression. Moreover, the typical users of those platforms are those without access to the megaphones of major studios, record labels, or publishers. Any attempt to resurrect SOPA/PIPA—no matter what it is rebranded as—remains a threat to that expression.

Site-blocking, sometimes called a “no-fault injunction,” functionally allows a rightsholder to prevent access to an entire website based on accusations of copyright infringement. Not just access to the alleged infringement, but the entire website. It is like using a chainsaw to trim your nails.

We are all so used to the Digital Millennium Copyright Act (DMCA) and the safe harbor it provides that we sometimes forget how extraordinary the relief it provides really is. Instead of providing proof of their claims to a judge or jury, rightsholders merely have to contact a website with their honest belief that their copyright is being infringed, and the allegedly infringing material will be taken down almost immediately. That is a vast difference from traditional methods of shutting down expression.

Site-blocking would go even further, bypassing the website and getting internet service providers to deny their customers access to a website. This clearly imperils the expression of those not even accused of infringement, and it’s far too blunt an instrument for the problem it’s meant to solve. We remain opposed to any attempts to do this. We have a long memory, and twelve years isn’t even that long.

Katharine Trendacosta

Save Your Twitter Account

3 months 1 week ago

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.

Amid reports that X—the site formerly known as Twitter—is dropping in value, hindering how people use the site, and engaging in controversial account removals, it has never been more precarious to rely on the site as a historical record. So, it’s important for individuals to act now and save what they can. While your tweets may feel ephemeral or inconsequential, they are part of a greater history in danger of being wiped out.

Any centralized communication platform, particularly one operated for profit, is vulnerable to being co-opted by the powerful. This might mean exploiting users to maximize short-term profits or changing moderation rules to silence marginalized people and promote hate speech. The past year has seen unprecedented numbers of users fleeing X, Reddit, and other platforms over changes in policy.

But leaving these platforms, whether in protest, disgust, or boredom, leaves behind an important digital record of how communities come together and grow.

Archiving tweets isn’t just for Dril and former presidents. In its heyday, Twitter was an essential platform for activists, organizers, journalists, and other everyday people around the world to speak truth to power and fight for social justice. Its importance for movements and community building was noted by oppressive governments, forcing the site to ward off data requests and authoritarian speech suppression.

A prominent example in the U.S. is the movement for Black Lives, where activists built momentum on the site and found effective strategies to bring global attention to their protests. Already, though, #BlackLivesMatter tweets from 2014 are vanishing from X, and the site seems to be blocking and disabling the tools archivists use to preserve this history.

In documenting social movements we must remember that social media is not an archive: platforms will only store (and gatekeep) user work, and make it accessible to the public, insofar as doing so is profitable. But when platforms fail, with them goes the history of everyday voices speaking to power, the very voices organizations like EFF fought to protect. The voice of power, in contrast, remains well documented.

In the battleground of history, archival work is cultural defense. Luckily, digital media can be quickly and cheaply duplicated and shared. In just a few minutes of your time, the following easy steps will help preserve not just your history, but the history of your community and the voices you supported.

1. Request Your Archive

Despite the many new restrictions on Twitter access, the site still allows users to backup their entire profile in just a few clicks.

  • First, in your browser or the X app, navigate to Settings. This will look like three dots, and may say "More" on the sidebar.

  • Select Settings and Privacy, then Your Account, if it is not already open.

  • Here, click Download an archive of your data

  • You'll be prompted to sign into your account again, and X will need to send a verification code to your email or text message. Verifying with email may be more reliable, particularly for users outside of the US.

  • Select Request archive

  • Finally—wait. This process can take a few days, but once your archive is ready you will receive an email. Follow the link in that email while logged in and download the ZIP file.
2. Optionally, Share with a Library or Archive.

There are many libraries, archives, and community groups who would be interested in preserving these archives. You may want to reach out to a librarian to help find one curating a collection specific to your community.

You can also request that your archive be preserved by the Internet Archive's Wayback Machine; the following steps are specific to the Internet Archive. We recommend using a desktop computer or laptop, rather than a mobile device.

  • Unpack the ZIP file you downloaded in the previous section.
  • In the Data folder, select the tweets.js file. This is a JSON file containing just your tweets. JSON files are difficult to read directly, but you can convert the file to CSV and view it in a spreadsheet program like Excel or the free LibreOffice Calc.
  • With your account and tweets.js file ready, go to Save Page Now's Google Sheet interface and select "Archive all your Tweets with the Wayback Machine.”

  • Fill in your Twitter handle, select your "tweets.js" file from Step 2 and click "Upload."

  • After some processing, you will be able to download the CSV file.
  • Import this CSV to a new Google Sheet. All of this information is already public on Twitter, but if you notice very sensitive content, you can remove those lines. Otherwise it is best to leave the information untouched.
  • Then, use Save Page Now's Google Sheet Interface again to archive from the sheet made in the previous step.
  • It may take hours or days for this request to fully process, but once it is complete you will get an email with the results.
  • Finally, the Wayback Machine will give you the option to preserve your outlinks as well, archiving every website URL you shared on Twitter. This is an easy way to further preserve the messages you've promoted over the years.
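If you'd rather do the tweets.js-to-CSV conversion locally instead of using the Google Sheet interface, a short script can handle it. A minimal sketch, assuming the archive uses the usual `window.YTD.tweets.part0 = [...]` JavaScript wrapper and nests each record under a "tweet" key (details may vary between archive versions):

```python
import csv
import json

def tweets_js_to_csv(js_path, csv_path):
    """Convert a Twitter archive's tweets.js (a JavaScript file wrapping
    a JSON array) into a CSV readable by Excel or LibreOffice Calc."""
    with open(js_path, encoding="utf-8") as f:
        raw = f.read()
    # The file assigns the JSON array to a variable, e.g.
    # "window.YTD.tweets.part0 = [...]"; drop everything before the array.
    entries = json.loads(raw[raw.index("["):])
    with open(csv_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "created_at", "full_text"])
        for entry in entries:
            # Each record is typically nested under a "tweet" key.
            tweet = entry.get("tweet", entry)
            writer.writerow([tweet.get("id_str", ""),
                             tweet.get("created_at", ""),
                             tweet.get("full_text", "")])
```

The resulting CSV opens in any spreadsheet program, so you can review or prune rows before sharing the file anywhere.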
3. Personal Backup Plan

Now that you have a ZIP file with all of your Twitter data, including public and private information, you may want to have a security plan on how to handle this information. This plan will differ for everyone, but these are a few steps to consider.

If you only wish to preserve the public information, and you have already successfully shared it with an archive, you can delete your downloaded copy. For anything you would like to keep but that may be sensitive, you may want to use a tool to encrypt the file and keep it on a secure device.

Finally, even if this information is not sensitive, you'll want to be sure you have a solid backup plan. If you are still using Twitter, this means deciding on a schedule to repeat this process so your archive is up to date. Otherwise, you'll want to keep a few copies of the file across several devices. If you already have a plan for backing up your PC, this may not be necessary.
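When keeping copies across several devices, it also helps to verify that each copy is still intact and identical to the original. One lightweight way to do that is to compare file checksums; a sketch using Python's standard library (file names here are hypothetical):

```python
import hashlib

def sha256sum(path, chunk_size=1 << 20):
    """Hash a file in chunks so large archives don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Example: compare the copy on a backup drive against the original.
# sha256sum("twitter-archive.zip") == sha256sum("/backup/twitter-archive.zip")
```

If the two hashes ever differ, one of the copies has been corrupted or modified, and you can replace it from a good copy.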

4. Closing Your Account

Finally, you'll want to consider what to do with your current Twitter account now that all your data is backed up and secure.

(If you are planning on leaving X, make sure to follow EFF on Mastodon, Bluesky or another platform.)

Since you have a backup, it may be a good idea to request data be deleted on the site. You can try to delete just the most sensitive information, like your account DMs, but there's no guarantee Twitter will honor these requests—or that it's even capable of honoring such requests. Even EU citizens covered by the GDPR will need to request the deletion of their entire account.

If you aren’t concerned about Twitter keeping this information, however, there is some value in keeping your old account up. Holding the username can prevent impersonators, and listing your new social media account will help people on the site find you elsewhere. In our guide for joining Mastodon we recommended sharing your new account in several places. However, adding the new account to your Twitter name will give it the best visibility across search engines, screenshots, and alternative front ends like Nitter.

Rory Mir

Tell the FTC: It's Time to Act on the Right to Repair

3 months 1 week ago

Update: The FTC  is no longer accepting comments for this rulemaking. More than 1,600 comments were filed in the proceeding, with many of you sharing your personal stories about why you support the right to repair. Thank you for taking action!

Do you care about being able to fix and modify your stuff? Then it's time to speak up and tell the Federal Trade Commission that you care about your right to repair.

As we have said before, you own what you buy—and you should be able to do what you want with it. That should be the end of the story, whether we’re talking about a car, a tractor, a smartphone, or a computer. If something breaks, you should be able to fix it yourself, or choose who you want to take care of it for you.

The Federal Trade Commission has just opened a 30-day comment period on the right to repair, and it needs to hear from you. If you have a few minutes to share why the right to repair is important to you, or a story about something you own that you haven't been able to fix the way you want, click here and tell the agency what it needs to hear.

Take Action

Tell the FTC: Stand up for our Right to Repair

If you’re not sure what to say, there are three topics that matter most for this petition. The FTC should:

  • Make repair easy
  • Make repair parts available and reasonably priced
  • Label products with ease of repairability

If you have a personal story of why right to repair matters to you, let them know!

This is a great moment to ask for the FTC to step up. We have won some huge victories in state legislatures across the country in the past several years, with good right-to-repair bills passing in California, Minnesota, Colorado, and Massachusetts. Apple, long a critic, has come out in favor of right to repair.

With the wind at our backs, it's time for the FTC to consider nationwide solutions, such as making parts and resources more available to everyday people and independent repair shops.

EFF has worked for years with our friends at organizations including U.S. PIRG (Public Interest Research Group) and iFixit to make it easier to tinker with your stuff. We're proud to support their call to the FTC to work on right to repair, and hope you'll add your voice to the chorus.

Join the (currently) 700 people making their voice heard. 

Take Action

Tell the FTC: Stand up for our Right to Repair


Hayley Tsukayama

San Francisco: Vote No on Proposition E to Stop Police from Testing Dangerous Surveillance Technology on You

3 months 1 week ago

San Francisco voters will confront a looming threat to their privacy and civil liberties on the March 5, 2024 ballot. If Proposition E passes, we can expect the San Francisco Police Department (SFPD) will use untested and potentially dangerous technology on the public, any time they want, for a full year without oversight. How do we know this? Because the text of the proposition explicitly permits this, and because a city government proponent of the measure has publicly said as much.

[Embedded video: https://www.youtube.com/embed/XbC3QOmVz3M] Privacy info. This embed will serve content from youtube.com

While discussing Proposition E at a November 13, 2023 Board of Supervisors meeting, the city employee said the new rule “authorizes the department to have a one-year pilot period to experiment, to work through new technology to see how they work.” Just watch the video above if you want to witness it being said for yourself.


Any privacy or civil liberties proponent should find this statement appalling. Police should know how technologies work (or if they work) before they deploy them on city streets. They also should know how these technologies will impact communities, rather than taking a deploy-first and ask-questions-later approach—which all but guarantees civil rights violations.

This ballot measure would erode San Francisco’s landmark 2019 surveillance ordinance, which requires city agencies, including the police department, to seek approval from the democratically elected Board of Supervisors before acquiring or deploying new surveillance technologies. Agencies also must provide a report to the public about exactly how the technology would be used. This is not just an important way of making sure people who live or work in the city have a say in surveillance technologies that could be used to police their communities; it’s also by any measure a commonsense and reasonable provision.

However, the new ballot initiative attempts to gut the 2019 surveillance ordinance. The measure says “...the Police Department may acquire and/or use a Surveillance Technology so long as it submits a Surveillance Technology Policy to the Board of Supervisors for approval by ordinance within one year of the use or acquisition, and may continue to use that Surveillance Technology after the end of that year unless the Board adopts an ordinance that disapproves the Policy…” In other words, police would be able to deploy virtually any new surveillance technology they wished for a full year without any oversight, accountability, transparency, or semblance of democratic control.


This ballot measure would turn San Francisco into a laboratory where police are given free rein to use the most unproven, dangerous technologies on residents and visitors without regard for criticism or objection. That’s one year of police having the ability to take orders from faulty and racist algorithms. One year during which police could potentially contract with companies that buy up geolocation data from millions of cellphones and sift through the data.

Trashing important oversight mechanisms that keep police from acting without democratic checks and balances will not make the city safer. With all of the mind-boggling, dangerous, nearly-science fiction surveillance technologies currently available to local police, we must ensure that the medicine doesn’t end up doing more damage to the patient. But that’s exactly what will happen if Proposition E passes and police are able to expose already marginalized and over-surveilled communities to a new and less accountable generation of surveillance technologies. 

So, tell your friends. Tell your family. Shout it from the rooftops. Talk about it with strangers when you ride MUNI or BART. We have to get organized so we can, as a community, vote NO on Proposition E on the March 5, 2024 ballot. 

Matthew Guariglia

What Home Videotaping Can Tell Us About Generative AI

3 months 1 week ago

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.


It’s 1975. Earth, Wind and Fire rule the airwaves, Jaws is on every theater screen, All In the Family is must-see TV, and Bill Gates and Paul Allen are selling software for the first personal computer, the Altair 8800.

But for copyright lawyers, and eventually the public, something even more significant is about to happen: Sony starts selling the first videotape recorder, or VTR. Suddenly, people have the power to store TV programs and watch them later. Does work get in the way of watching your daytime soap operas? No problem, record them and watch when you get home. Want to watch the game but hate to miss your favorite show? No problem. Or, as an ad Sony sent to Universal Studios put it, “Now you don’t have to miss Kojak because you’re watching Columbo (or vice versa).”

What does all of this have to do with Generative AI? For one thing, the reaction to the VTR was very similar to today’s AI anxieties. Copyright industry associations ran to Congress, claiming that the VTR "is to the American film producer and the American public as the Boston strangler is to the woman home alone" – rhetoric that isn’t far from some of what we’ve heard in Congress on AI lately. And then, as now, rightsholders also ran to court, claiming Sony was facilitating mass copyright infringement. The crux of the argument was a new legal theory: that a machine manufacturer could be held liable under copyright law (and thus potentially subject to ruinous statutory damages) for how others used that machine.

The case eventually worked its way up to the Supreme Court, and in 1984 the Court rejected the copyright industry’s rhetoric and ruled in Sony’s favor. Forty years later, at least two aspects of that ruling are likely to get special attention.

First, the Court observed that where copyright law has not kept up with technological innovation, courts should be careful not to expand copyright protections on their own. As the decision reads:

Congress has the constitutional authority and the institutional ability to accommodate fully the varied permutations of competing interests that are inevitably implicated by such new technology. In a case like this, in which Congress has not plainly marked our course, we must be circumspect in construing the scope of rights created by a legislative enactment which never contemplated such a calculus of interests.

Second, the Court borrowed from patent law the concept of “substantial noninfringing uses.” In order to hold Sony liable for how its customers used their VTRs, rightsholders had to show that the VTR was simply a tool for infringement. If, instead, the VTR was “capable of substantial noninfringing uses,” then Sony was off the hook. The Court held that the VTR fell in the latter category because it was used for private, noncommercial time-shifting, and that time-shifting was a lawful fair use. The Court even quoted Fred Rogers, who testified that home-taping of children’s programs served an important function for many families.

That rule helped unleash decades of technological innovation. If Sony had lost, Hollywood would have been able to legally veto any tool that could be used for infringing as well as non-infringing purposes. With Congress’ help, it has found ways to effectively do so anyway, such as Section 1201 of the DMCA. Nonetheless, Sony remains a crucial judicial protection for new creativity.

Generative AI may test the enduring strength of that protection. Rightsholders argue that generative AI toolmakers directly infringe when they use copyrighted works as training data. That use is very likely to be found lawful. The more interesting question is whether toolmakers are liable if customers use the tools to generate infringing works. To be clear, the users themselves may well be liable – but they are less likely to have the kind of deep pockets that make litigation worthwhile. Under Sony, however, the key question for the toolmakers will be whether their tools are capable of substantial noninfringing uses. The answer to that question is surely yes, which should preclude most of the copyright claims.

But there’s risk here as well – if any of these cases reaches its doors, the Supreme Court could overturn Sony. Hollywood certainly hoped it would do so when it considered the legality of peer-to-peer file-sharing in MGM v. Grokster. EFF and many others argued hard for the opposite result. Instead, the Court side-stepped Sony altogether in favor of creating a new form of secondary liability for “inducement.”

The current spate of litigation may end with multiple settlements, or Congress may decide to step in. If not, the Supreme Court (and a lot of lawyers) may get to party like it’s 1975. Let’s hope the justices choose once again to ensure that copyright maximalists don’t get to control our technological future.

Related Cases: MGM v. Grokster
Corynne McSherry

Victory! Ring Announces It Will No Longer Facilitate Police Requests for Footage from Users

3 months 1 week ago

Amazon’s Ring has announced that it will no longer facilitate police's warrantless requests for footage from Ring users. This is a victory in a long fight, not just against blanket police surveillance, but also against a culture in which private, for-profit companies build special tools to allow law enforcement to more easily access companies’ users and their data—all of which ultimately undermine their customers’ trust.

This announcement will also not stop police from trying to get Ring footage directly from device owners without a warrant. Ring users should also know that when police knock on their door, they have the right to—and should—request that police get a warrant before handing over footage.

Years ago, after public outcry and a lot of criticism from EFF and other organizations, Ring ended its practice of allowing police to automatically send requests for footage to a user’s email inbox, opting instead for a system where police had to publicly post requests on Ring’s Neighbors app. Now, Ring will hopefully be out of the business of platforming casual and warrantless police requests for footage altogether. This is a step in the right direction, but it comes after years of cozy relationships with police and irresponsible handling of data (for which Ring reached a settlement with the FTC). We also helped push Ring to implement end-to-end encryption. Ring has been forced to make some important concessions—but we still believe the company must do more. Ring can make end-to-end encryption the default on its devices and turn off default audio collection, which reports have shown captures audio at greater distances than initially assumed. We also remain deeply skeptical about law enforcement’s and Ring’s ability to determine what is, or is not, an emergency that requires the company to hand over footage without a warrant or user consent.

Despite this victory, the fight for privacy and to end Ring’s historic ill effects on society isn’t over. The mass existence of doorbell cameras, whether subsidized and organized into registries by cities or connected and centralized through technologies like Fusus, will continue to threaten civil liberties and exacerbate racial discrimination. Many other companies have also learned from Ring’s early marketing tactics and have sought to create a new generation of police-advertisers who promote the purchase and adoption of their technologies.

Matthew Guariglia

Fragging: The Subscription Model Comes for Gamers

3 months 1 week ago

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.

The video game industry is undergoing the same concerning changes we’ve seen before with film and TV, and it underscores the need for meaningful digital ownership.

Twenty years ago you owned DVDs. Ten years ago you probably had a Netflix subscription with a seemingly endless library. Now, you probably have two to three subscription services, and regularly hear about shows and movies you can no longer access, either because they’ve moved to yet another subscription service, or because platforms are delisting them altogether.

The video game industry is getting the same treatment. While it is still common for people to purchase physical or digital copies of games, albeit often from within walled gardens like Steam or Epic Games, game subscriptions are becoming more and more common. Like the early days of movie streaming, services like Microsoft Game Pass or PlayStation Plus seem to offer a good deal. For a flat monthly fee, you have access to seemingly unlimited game choices. That is, for now.

In a recent announcement from game developer Ubisoft, their director of subscriptions said plainly that a goal of their subscription service’s rebranding is to get players “comfortable” with not owning their games. Notably, this is from a company that developed five non-mobile games last year, hoping users will access them and older games through a $17.99 per month subscription; that is, $215.88 per year. And after a year, how many games does the end user actually own? None.

This fragmentation of the video game subscription market isn't driven by greed alone; it also answers a real frustration among users that the industry itself has created. Gamers could once easily buy and return games, rent games they were merely curious about, and even recoup costs by reselling their games. With the proliferation of DRM and walled-garden game vendors, those ownership rights have been eroded. Reselling or giving away a copy of your game, or leaving it to your next of kin, is no longer permitted. The closest thing to a rental now available is a game demo (if one exists) or playing a game within the time frame needed to get a refund (if a storefront offers one). Even these purchases are put at risk, as games are sometimes still incomplete after that refund window has passed. Developers such as Ubisoft will also shut down online services, which can severely impact the features of these games or even render them unplayable.

DRM and tightly controlled gaming platforms also make it harder to mod or tweak games in ways the platform doesn’t choose to support. Mods are a thriving medium for extending the functionalities, messages, and experiences facilitated by a base game, one where passion has driven contributors to design amazing things with a low barrier to entry. Mods depend on users who have the necessary access to a work to understand how to mod it and to deploy mods when running the program. A model wherein the player can only access these aspects of the game in the ways the manufacturer supports undermines the creative rights of owners as well.

This shift should raise alarms for both users and creators alike. With publishers serving as intermediaries, game developers are left either struggling to reach their audience, or settling for a fraction of the revenue they could receive from traditional sales. 

We need to preserve digital ownership before we see video games fall into the same cycles as film and TV, with users stuck paying more and receiving not robust ownership, but fragile access on the platform’s terms.

Rory Mir

FTC Bars X-Mode from Selling Sensitive Location Data

3 months 1 week ago

Update, January 23, 2024: Another week, another win! The FTC announced a successful enforcement action against another location data broker, InMarket.

Phone app location data brokers are a growing menace to our privacy and safety. All you did was click a box while downloading an app. Now the app tracks your every move and sends it to a broker, which then sells your location data to the highest bidder, from advertisers to police.

So it is welcome news that the Federal Trade Commission has brought a successful enforcement action against X-Mode Social (and its successor Outlogic).

The FTC’s complaint illustrates the dangers created by this industry. The company collects our location data through software development kits (SDKs) incorporated into third-party apps, through the company’s own apps, and through buying data from other brokers. The complaint alleged that the company then sells this raw location data, which can easily be correlated to specific individuals. The company’s customers include marketers and government contractors.
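To illustrate why raw location data is so easy to tie back to a specific person, here is a minimal, hypothetical sketch (the data, coordinates, and function are invented for illustration and are not X-Mode's actual data or methods): a device's most frequent overnight location is usually its owner's home, so even "anonymous" pings keyed to an advertising ID can re-identify someone.

```python
# Hypothetical sketch: re-identifying a person from raw location pings.
# The insight is simple: cluster pings by coarse location, and the place
# a device sits during overnight hours is very likely the owner's home.
from collections import Counter
from datetime import datetime

# Invented (timestamp, latitude, longitude) pings for one advertising ID
pings = [
    ("2024-01-10T02:14:00", 37.7689, -122.4330),  # overnight
    ("2024-01-10T03:40:00", 37.7689, -122.4330),  # overnight
    ("2024-01-10T14:05:00", 37.7793, -122.4193),  # daytime (workplace?)
    ("2024-01-11T01:55:00", 37.7689, -122.4330),  # overnight
]

def likely_home(pings, night_hours=range(0, 6)):
    """Return the most frequent coarse location seen during night hours."""
    overnight = Counter()
    for ts, lat, lon in pings:
        if datetime.fromisoformat(ts).hour in night_hours:
            # Round to 3 decimal places (~100 m) so nearby pings cluster
            overnight[(round(lat, 3), round(lon, 3))] += 1
    return overnight.most_common(1)[0][0] if overnight else None

print(likely_home(pings))  # → (37.769, -122.433)
```

A few dozen pings are typically enough for this kind of inference, which is why the FTC's order treats even "raw" location data, not just named records, as a privacy risk.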

The FTC’s proposed order contains a strong set of rules to protect the public from this company.

General rules for all location data:

  • X-Mode cannot collect, use, maintain, or disclose a person’s location data absent their opt-in consent. This includes location data the company collected in the past.
  • The order defines “location data” as any data that may reveal the precise location of a person or their mobile device, including from GPS, cell towers, WiFi, and Bluetooth.
  • X-Mode must adopt policies and technical measures to prevent recipients of its data from using it to locate a political demonstration, an LGBTQ+ institution, or a person’s home.
  • X-Mode must, on request of a person, delete their location data, and inform them of every entity that received their location data.

Heightened rules for sensitive location data:

  • X-Mode cannot sell, disclose, or use any “sensitive” location data.
  • The order defines “sensitive” locations to include medical facilities (such as family planning centers), religious institutions, union offices, schools, shelters for domestic violence survivors, and immigrant services.
  • To implement this rule, the company must develop a comprehensive list of sensitive locations.
  • However, X-Mode can use sensitive location data if it has a direct relationship with a person related to that data, the person provides opt-in consent, and the company uses the data to provide a service the person directly requested.

As the FTC Chair and Commissioners explain in a statement accompanying this order’s announcement:

The explosion of business models that monetize people’s personal information has resulted in routine trafficking and marketing of Americans’ location data. As the FTC has stated, openly selling a person’s location data to the highest bidder can expose people to harassment, stigma, discrimination, or even physical violence. And, as a federal court recently recognized, an invasion of privacy alone can constitute “substantial injury” in violation of the law, even if that privacy invasion does not lead to further or secondary harm.

X-Mode has disputed the implications of the FTC’s statements regarding the settlement, and asserted that the FTC did not find an instance of data misuse.

The FTC Act bans “unfair or deceptive acts or practices in or affecting commerce.” Under the Act, a practice is “unfair” if: (1) the practice “is likely to cause substantial injury to consumers”; (2) the practice “is not reasonably avoidable by consumers themselves”; and (3) the injury is “not outweighed by countervailing benefits to consumers or to competition.” The FTC has laid out a powerful case that X-Mode’s brokering of location data is unfair and thus unlawful.

The FTC’s enforcement action against X-Mode sends a strong signal that other location data brokers should take a hard look at their own business model or risk similar legal consequences.

The FTC has recently taken many other welcome actions to protect data privacy from corporate surveillance. In 2023, the agency limited Rite Aid’s use of face recognition, and fined Amazon’s Ring for failing to secure its customers’ data. In 2022, the agency brought an unfair business practices claim against another location data broker, Kochava, and began exploring issuance of new rules against commercial data surveillance.

Adam Schwartz

EFF and More Than 100 NGOs Set Non-Negotiable Redlines Ahead of UN Cybercrime Treaty Negotiations

3 months 1 week ago

EFF joined forces with 110 NGOs today in a joint statement delivered to the United Nations Ad Hoc Committee, clearly outlining civil society's non-negotiable redlines for the proposed UN Cybercrime Treaty and asserting that states should reject the treaty if these essential changes are not made.

The latest draft, published on November 6, 2023, does not adequately ensure adherence to human rights law and standards. Initially focused on cybercrime, the proposed treaty has alarmingly evolved into an expansive surveillance tool.

Katitza Rodriguez, EFF Policy Director for Global Privacy, asserts ahead of the upcoming concluding negotiations:

The proposed treaty needs more than just minor adjustments; it requires a more focused, narrowly defined approach to tackle cybercrime. This change is essential to prevent the treaty from becoming a global surveillance pact rather than a tool for effectively combating core cybercrimes. With its wide-reaching scope and invasive surveillance powers, the current version raises serious concerns about cross-border repression and potential police overreach. Above all, human rights must be the treaty's cornerstone, not an afterthought. If states can't unite on these key points, they must outright reject the treaty.

Historically, cybercrime legislation has been exploited to target journalists and security researchers, suppress dissent and whistleblowers, endanger human rights defenders, limit free expression, and justify unnecessary and disproportionate state surveillance measures. We are concerned that the proposed treaty, as it stands now, will exacerbate these problems. The treaty's concluding session will be held at UN Headquarters in New York from January 29 to February 10. EFF will be attending in person.

The joint statement specifically calls on states to narrow the scope of criminalization provisions to well-defined cyber-dependent crimes; shield security researchers, whistleblowers, activists, and journalists from prosecution for their legitimate activities; explicitly include language on international human rights law, data protection, and gender mainstreaming; limit the scope of domestic criminal procedural measures and international cooperation to the core cybercrimes established in the criminalization chapter; and address concerns that the current draft could weaken cybersecurity and encryption. It also stresses the need for specific safeguards, such as the principles of prior judicial authorization, necessity, legitimate aim, and proportionality.

George Wong