In Memoriam: John L. Young, Cryptome Co-Founder

John L. Young, who died March 28 at age 89 in New York City, was among the first people to see the need for an online library of official secrets, a place where the public could find out things that governments and corporations didn’t want them to know. He made real the idea – revolutionary in its time – that the internet could make more information available to more people than ever before.

In 1996, John and his wife, architect Deborah Natsios, founded Cryptome, an online library that collects and publishes data about freedom of expression, privacy, cryptography, dual-use technologies, national security, intelligence, and government secrecy. Its slogan: “The greatest threat to democracy is official secrecy which favors a few over the many.” And its invitation: “We welcome documents for publication that are prohibited by governments worldwide.”

Cryptome soon became known for publishing an encyclopedic array of government, court, and corporate documents. It assembled an indispensable, almost daily chronicle of the ‘crypto wars’ of the 1990s – when the first generation of internet lawyers and activists recognized the need to free encryption from government control and pursued litigation, public activism, and legislative efforts to do so. Cryptome became required reading for anyone looking for information about that early fight, as well as many others.

John and Cryptome were also among the early organizers and sponsors of WikiLeaks, though, like many others, he later broke with that organization over what he saw as its monetization. Cryptome later published WikiLeaks’ alleged internal emails. Transparency was the core of everything John stood for.

John was a West Texan by birth and an architect by training and trade. Even before he launched the website, his lifelong pursuit of not-for-profit, public-good ideals led him to seek access to documents about shadowy public development entities that seemed to ignore public safety, health, and welfare. As the digital age dawned, this expertise in and passion for exposing secrets evolved into Cryptome, with John as its chief information architect, designing and building a real-time archive of seminal debates shaping cyberspace’s evolving information infrastructures.

The FBI and Secret Service tried to chill his activities. Big Tech companies like Microsoft tried to bully him into pulling documents off the internet. But through it all, John remained a steadfast if iconoclastic librarian without fear or favor.

John served in the United States Army Corps of Engineers in Germany (1953–1956) and earned degrees in philosophy and architecture from Rice University (1957–1963) and his graduate degree in architecture from Columbia University in 1969. A self-identified radical, he became an activist and helped create the community service group Urban Deadline, where his fellow student-activists initially suspected him of being a police spy. Urban Deadline went on to receive citations from the Citizens Union of the City of New York and the New York City Council.

John was one of the early, under-recognized heroes of the digital age. He not only saw the promise of digital technology to help democratize access to information, he brought that idea into being and nurtured it for many years.  We will miss him and his unswerving commitment to the public’s right to know.

Cindy Cohn

The Kids Online Safety Act Will Make the Internet Worse for Everyone

The Kids Online Safety Act (KOSA) is back in the Senate. Sponsors are claiming—again—that the latest version won’t censor online content. It isn’t true. This bill still sets up a censorship regime disguised as a “duty of care,” and it will do what previous versions threatened: suppress lawful, important speech online, especially for young people.

TAKE ACTION

KOSA will silence kids and adults

KOSA Still Forces Platforms to Police Legal Speech

At the center of the bill is a requirement that platforms “exercise reasonable care” to prevent and mitigate a sweeping list of harms to minors, including depression, anxiety, eating disorders, substance use, bullying, and “compulsive usage.” The bill claims to bar lawsuits over “the viewpoint of users,” but that’s a smokescreen. Its core function is to let government agencies sue platforms, big or small, that don’t block or restrict content someone later claims contributed to one of these harms. 

This bill won’t bother big tech. Large companies will be able to manage this regulation, which is why Apple and X have agreed to support it. In fact, X helped negotiate the text of the last version of this bill we saw. Meanwhile, those companies’ smaller competitors will be left scrambling to comply. Under KOSA, a small platform hosting mental health discussion boards will be just as vulnerable as Meta or TikTok—but much less able to defend itself. 

To avoid liability, platforms will over-censor. That isn’t hypothetical; it’s what happens when speech becomes a legal risk. The list of harms in KOSA’s “duty of care” provision is so broad and vague that no platform will know whether any given piece of content exposes it to liability. Forums won’t be able to host posts with messages like “love your body,” “please don’t do drugs,” or “here’s how I got through depression” without fearing that an attorney general or FTC lawyer might later decide the content was harmful. Support groups and anti-harm communities, which can’t do their work without talking about difficult subjects like eating disorders, mental health, and drug abuse, will get caught in the dragnet.

When the safest legal option is to delete a forum, platforms will delete the forum.

There’s Still No Science Behind KOSA’s Core Claims

KOSA relies heavily on vague, subjective harms like “compulsive usage.” The bill defines it as repetitive online behavior that disrupts life activities like eating, sleeping, or socializing. But here’s the problem: there is no accepted clinical definition of “compulsive usage” of online services.

There’s no scientific consensus that online platforms cause mental health disorders, nor agreement on how to measure so-called “addictive” behavior online. The term sounds like settled medical science, but it’s legislative sleight-of-hand: an undefined concept given legal teeth, with major consequences for speech and access to information.

Carveouts Don’t Fix the First Amendment Problem

The bill says it can’t be enforced based on a user’s “viewpoint.” But the text of the bill itself favors certain viewpoints over others. Plus, liability under KOSA attaches to the platform, not the user. The only way for platforms to reduce risk in the world of KOSA is to monitor, filter, and restrict what users say.

If the FTC can sue a platform because minors saw a medical forum discussing anorexia, or posts about LGBTQ identity, or posts discussing how to help a friend who’s depressed, then that’s censorship. The bill’s stock language that “viewpoints are protected” won’t matter. The legal incentives guarantee that platforms will silence even remotely controversial speech to stay safe.

Lawmakers who support KOSA today are choosing to trust the current administration, and future administrations, to define what youth—and to some degree, all of us—should be allowed to read online. 

KOSA will not make kids safer. It will make the internet more dangerous for anyone who relies on it to learn, connect, or speak freely. Lawmakers should reject it, and fast. 

TAKE ACTION

TELL CONGRESS: OPPOSE KOSA

Joe Mullin

[Opinion] A “Joint Operations Command” for the Self-Defense Forces = Shigetake Maruyama

While public attention has been fixed on the “Trump tariffs,” the launch of a “Joint Operations Commander” overseeing Japan’s three Self-Defense Forces has pushed Japan-U.S. military integration under the alliance another step forward. The U.S. Forces Japan plan would place a joint force headquarters at Yokota and a warfighting command hub in Roppongi. A dangerous war footing that folds Japan into America’s strategy for global hegemony is steadily becoming reality. The worry here is that “military command” will run unchecked, recalling the prewar “independence of the supreme command.” In Okinawa, a plan has even surfaced to evacuate residents of the Sakishima Islands to Kyushu and elsewhere in the event of a Taiwan contingency. Open “preparations for war”...
JCJ

EFF to California Lawmakers: There’s a Better Way to Help Young People Online

We’ve covered a lot of federal and state proposals that badly miss the mark when attempting to grapple with protecting young people’s safety online. These include bills that threaten to cut young people off from vital information, infringe on their First Amendment rights to speak for themselves, subject them (and adults) to invasive and insecure age verification technology, and expose them to danger by sharing personal information with people they may not want to see it.

Several such bills are moving through the California legislature this year, continuing a troubling years-long trend of lawmakers pushing similarly problematic proposals. This week, EFF sent a letter to the California legislature expressing grave concerns with lawmakers’ approach to regulating young people’s ability to speak online.

We’re far from the only ones who have issues with this approach. Many of the laws California has passed in an attempt to address young people’s online safety have subsequently been challenged in court and blocked from going into effect.

Our letter outlines the legal, technical, and policy problems with proposed “solutions” including age verification mandates, age gating, mandatory parental controls, and proposals that will encourage platforms to take down speech that’s even remotely controversial.

We also note that the current approach completely ignores what we’ve heard from thousands of young people: the online platforms and communities they frequent can be among the safest spaces for them in the physical or digital world. These responses show the relationship between social media and young people’s mental health is far more nuanced than many lawmakers are willing to believe.

While our letter is addressed to California’s Assembly and Senate, they are not the only state lawmakers taking this approach. All lawmakers should listen to the people they’re trying to protect and find ways to help young people without hurting the spaces that are so important to them.

There are better paths that don’t hurt young people’s First Amendment rights and still help protect them against many of the harms that lawmakers have raised. In fact, elements of such approaches, such as data minimization, are already included in some of these otherwise problematic bills. A well-crafted privacy law that empowers everyone—children and adults—to control how their data is collected and used would be a crucial step in curbing many of these problems.

We recognize that many young people face real harms online, that families are grappling with how to deal with them, and that tech companies are not offering much help.

However, many of the California legislature’s proposals—this year, and for several years—miss the root of the problem. We call on lawmakers to work with us to enact better solutions.

Hayley Tsukayama

Keeping People Safe Online – Fundamental Rights Protective Alternatives to Age Checks

This is the final part of a three-part series about age verification in the European Union. In part one, we give an overview of the political debate around age verification and explore the age verification proposal introduced by the European Commission, based on digital identities. Part two takes a closer look at the European Commission’s age verification app, and part three explores measures to keep all users safe that do not require age checks. 

When thinking about the safety of young people online, it is helpful to remember that we can build on and learn from the decades of experience we already have thinking through risks that can stem from content online. Before mandating a “fix,” like age checks or age assurance obligations, we should take the time to reflect on what it is exactly we are trying to address, and whether the proposed solution is able to solve the problem.

The approach of analyzing, defining, and mitigating risks is helpful here because it lets us look at possible risks holistically: how likely a risk is to materialize, how severe it would be, and how it may affect different groups of people very differently.

In the context of child safety online, mandatory age checks are often presented as a solution to a number of risks potentially faced by minors online. The most common concerns to which policymakers refer in the context of age checks can be broken down into three categories of risks:

  • Content risks: The negative implications of exposure to online content that might be age-inappropriate, such as violent or sexually explicit content, or content that incites dangerous behavior like self-harm. 
  • Conduct risks: Behavior by children or teenagers that might be harmful to themselves or others, like cyberbullying, sharing intimate or personal information, or problematic overuse of a service.
  • Contact risks: Potential harms stemming from contact with people who might pose a risk to minors, including grooming or being coerced into exchanging sexually explicit material. 

Taking a closer look at these risk categories, we can see that mandatory age checks are an ineffective and disproportionate tool to mitigate many risks at the top of policymakers’ minds.

Mitigating risks stemming from contact between minors and adults usually means ensuring that adults are barred from spaces designated for children. Age checks, especially age verification depending on ID documents like the European Commission’s mini-ID wallet, are not a helpful tool in this regard as children routinely do not have access to the kind of documentation allowing them to prove their age. Adults with bad intentions, on the other hand, are much more likely to be able to circumvent any measures put in place to keep them out.

Conduct risks have little to do with how old a specific user is, and much more to do with social dynamics and the affordances and constraints of online services. Put differently: whether a platform knows a user’s age will not change how minor users themselves decide to behave and interact on the platform. Age verification won’t prevent users from choosing to engage in harmful or risky behavior, like freely posting personal information or spending too much time online. 

Finally, mitigating risks related to content deemed inappropriate is often thought of as shutting minors out from accessing certain information. Age check mandates seek to limit access to services and content without much granularity. They don’t allow for a nuanced weighing of the ways in which accessing the internet and social media can be a net positive for young people, and the ways in which it can lead to harm. This is complicated by the fact that, although proponents of age checks claim the science on the relationship between the internet and young people is clear, the evidence on the effects of social media on minors is unsettled, and researchers have refuted claims that social media use is responsible for wellbeing crises among teenagers. This doesn’t mean that we shouldn’t consider the risks that may be associated with being young and online. 

But it’s clear that banning all access to certain information for an entire age cohort interferes with all users’ fundamental rights, and is therefore not a proportionate risk mitigation strategy. Under a mandatory age check regime, adults are also required to upload identifying documents just to access websites, interfering with their speech, privacy and security online. At the same time, age checks are not even effective at accomplishing what they’re intended to achieve. Assuming that all age check mandates can and will be circumvented, they seem to do little in the way of protecting children but rather undermine their fundamental rights to privacy, freedom of expression and access to information crucial for their development. 

At EFF, we have been firm in our advocacy against age verification mandates and often get asked what we think policymakers should do instead to protect users online. Our response is a nuanced one, recognizing that there is no easy technological fix for complex, societal challenges: Take a holistic approach to risk mitigation, strengthen user choice, and adopt a privacy-first approach to fighting online harms. 

Taking a Holistic Approach to Risk Mitigation 

In the European Union, the past years have seen the adoption of a number of landmark laws to regulate online services. With new rules such as the Digital Services Act or the AI Act, lawmakers are increasingly pivoting to risk-based approaches to regulate online services, attempting to square the circle by addressing known cases of harm while also providing a framework for dealing with possible future risks. It remains to be seen how risk mitigation will work out in practice and whether enforcement will genuinely uphold fundamental rights without enabling overreach. 

Under the Digital Services Act, this framework also encompasses rights-protective moderation of content relevant to the risks young people face when using online services. Platforms may also come up with their own policies on how to moderate legal content that may be considered harmful, such as hate speech or violent content. Robust enforcement of their own community guidelines is one of the most important tools at platforms’ disposal, but it is unfortunately often lacking, including for categories of content harmful to children and teenagers, like pro-anorexia content.

To counterbalance potential negative implications on users’ rights to free expression, the DSA puts boundaries on platforms’ content moderation: Platforms must act objectively and proportionately and must take users’ fundamental rights into account when restricting access to content. Additionally, users have the right to appeal content moderation decisions and can ask platforms to review content moderation decisions they disagree with. Users can also seek resolution through out-of-court dispute settlement bodies, at no cost, and can ask nonprofits to represent them in the platform’s internal dispute resolution process, in out-of-court dispute settlements and in court. Platforms must also publish detailed transparency reports, and give researchers and non-profits access to data to study the impacts of online platforms on society. 

Beyond these specific obligations on platforms regarding content moderation, the protection of user rights, and improving transparency, the DSA obliges online platforms to take appropriate and proportionate measures to protect the privacy, security and safety of minors. Upcoming guidelines will hopefully provide more clarity on what this means in practice, but it is clear that there are a host of measures platforms can adopt before resorting to approaches as disproportionate as age verification.

The DSA also imposes obligations on the largest platforms and search engines – so-called Very Large Online Platforms (VLOPs) and Very Large Search Engines (VLOSEs) that have more than 45 million monthly users in the EU – to analyze and mitigate so-called systemic risks posed by their services. This includes analyzing and mitigating risks to the protection of minors and the rights of the child, including freedom of expression and access to information. While we have some critiques of the DSA’s systemic risk governance approach, it is helpful for thinking through the actual risks for young people that may be associated with different categories of content, platforms, and their functionalities.

However, it is crucial that such risk assessments are not treated as mere regulatory compliance exercises, but put fundamental rights – and the impact of platforms and their features on those rights – front and center, especially in relation to the rights of children. Platforms would be well-advised to fold these risk assessments into their regular product and policy reviews when mitigating risks stemming from content, design choices, or features like recommender systems, ways of engaging with content and users, or online ads. Especially when it comes to the possible negative and positive effects of these features on children and teenagers, such assessments should be frequent and granular, expanding the evidence base available to both platforms and regulators. Additionally, platforms should allow external researchers to challenge and validate their assumptions and should provide extensive access to research data, as mandated by the DSA. 

The regulatory framework to deal with potentially harmful content and protect minors in the EU is a new and complex one, and enforcement is still in its early days. We believe that its robust, rights-respecting enforcement should be prioritized before eyeing new rules and legal mandates. 

Strengthening Users’ Choice 

Many online platforms also deploy their own tools to help families navigate their services, including parental control settings and apps, specific offers tailored to the needs of children and teens, or features like reminders to take a break. While these tools are certainly far from perfect, and should not be seen as a sufficient measure to address all concerns, they do offer families an opportunity to set boundaries that work for them. 

Academic and civil society research underlines that better and more granular user controls can also be an effective tool to minimize content and contact risks: allowing users to integrate third-party content moderation systems or recommendation algorithms would enable families to tailor their children’s online experiences to their needs. 

The DSA takes a first helpful step in this direction by mandating that online platforms give users transparency about the main parameters used to recommend content to users, and to allow users to easily choose between different recommendation systems when multiple options are available. The DSA also obliges VLOPs that use recommender systems to offer at least one option that is not based on profiling users, thereby giving users of large platforms the choice to protect themselves from the often privacy-invasive personalization of their feeds. However, forgoing all personalization will likely not be attractive to most users, and platforms should give users the choice to use third-party recommender systems that better mirror their privacy preferences.
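
To make the choice between recommender systems more concrete, here is a minimal, purely illustrative sketch in TypeScript. It is not any platform’s actual API: every type and function name below is hypothetical, the “personalized” ranker is deliberately left as a stub, and the non-profiling option is modeled as a simple chronological feed.

    // Purely illustrative sketch; all names are hypothetical, not any real platform's API.
    type FeedItem = { id: string; authorId: string; postedAt: Date };

    interface Recommender {
      usesProfiling: boolean; // does ranking rely on behavioral data about the user?
      rank(candidates: FeedItem[], userId: string): FeedItem[];
    }

    // Non-profiling option (as the DSA requires VLOPs to offer): ranks purely by recency.
    const chronological: Recommender = {
      usesProfiling: false,
      rank: (candidates) =>
        [...candidates].sort((a, b) => b.postedAt.getTime() - a.postedAt.getTime()),
    };

    // Profiling-based option, stubbed out: a real system would score items here
    // using behavioral signals.
    const personalized: Recommender = {
      usesProfiling: true,
      rank: (candidates, _userId) => candidates,
    };

    // The user's explicit choice, not the platform's default, decides which ranker runs;
    // a third-party recommender could implement the same interface.
    function buildFeed(
      candidates: FeedItem[],
      userId: string,
      choice: "chronological" | "personalized",
    ): FeedItem[] {
      const recommender = choice === "personalized" ? personalized : chronological;
      return recommender.rank(candidates, userId);
    }

The point of the sketch is the interface, not the ranking logic: once recommenders are interchangeable behind a common interface, offering a non-profiling option, or plugging in a third-party one, becomes a configuration choice rather than a redesign.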

Giving users more control over which accounts can interact with them, and in which ways, can also help protect children and teenagers against unwanted interactions. Strengthening users’ choice also includes prohibiting companies from implementing user interfaces that have the intent or substantial effect of impairing autonomy and choice. This so-called “deceptive design” can take many forms, from tricking people into giving consent to the collection of their personal data, to encouraging the use of certain features. The DSA takes steps to ban dark patterns, but European consumer protection law must make sure that this prohibition is strictly enforced and that no loopholes remain. 

A Privacy-First Approach to Addressing Online Harms 

While rights-respecting content moderation and tools to strengthen parents’ and children’s self-determination online are part of the answer, we have long advocated for a privacy-focused approach to fighting online harms. 

We follow this approach for two reasons: On the one hand, privacy risks are complex, and young people cannot be expected to predict risks that may materialize in the future. On the other hand, many of the ways in which children and teenagers can be harmed online are directly linked to the accumulation and exploitation of their personal data. 

Online services collect enormous amounts of personal data and use it to personalize and target their services – the ads they display and the content they recommend. While the systems that target ads and the systems that curate online content are distinct, both are based on the surveillance and profiling of users. In addition to allowing users to choose a recommender system, settings for all users should by default turn off recommender systems based on behavioral data. To protect all users’ privacy and data protection rights, platforms should have to ask for users’ informed, specific, voluntary, opt-in consent before collecting their data to personalize recommender systems. Privacy settings should be easily accessible and allow users to enable additional protections. 
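
The privacy-by-default principle described above can be sketched in the same illustrative way. Again, this is a hypothetical example rather than a real implementation: behavioral personalization starts switched off and is enabled only after an explicit, informed opt-in.

    // Hypothetical sketch of privacy-by-default settings; not a real API.
    interface PrivacySettings {
      behavioralPersonalization: boolean; // off unless the user explicitly opts in
      consentRecordedAt?: Date;           // set only when informed consent is given
    }

    // New accounts start with behavioral personalization switched off.
    function defaultPrivacySettings(): PrivacySettings {
      return { behavioralPersonalization: false };
    }

    // Consent must be an explicit, informed, opt-in action; anything else
    // leaves the protective default untouched.
    function recordOptIn(
      settings: PrivacySettings,
      informedConsentGiven: boolean,
    ): PrivacySettings {
      if (!informedConsentGiven) return settings;
      return { behavioralPersonalization: true, consentRecordedAt: new Date() };
    }

    // The recommender may only draw on behavioral data when the flag is on;
    // otherwise it falls back to a non-profiling ranking such as a chronological feed.
    function mayUseBehavioralData(settings: PrivacySettings): boolean {
      return settings.behavioralPersonalization;
    }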

Data collection in the context of online ads is even more opaque. Due to the large number of ad tech actors and data brokers involved, it is practically impossible for users to give informed consent for the processing of their personal data. This data is used by ad tech companies and data brokers to profile users to draw inferences about what they like, what kind of person they are (including demographics like age and gender), and what they might be interested in buying, seeing, or engaging with. This information is then used by ad tech companies to target advertisements, including for children. Beyond undermining children’s privacy and autonomy, the online behavioral ad system teaches users from a young age that data collection, tracking, and profiling are evils that come with using the web, thereby normalizing being tracked, profiled, and surveilled. 

This is why we have long advocated for a ban on online behavioral advertising. Banning behavioral ads would remove a major incentive for companies to collect as much data as they do. The DSA already bans targeting minors with behavioral ads, but this protection should be extended to everyone. Banning behavioral advertising will be the most effective path to disincentivize the collection and processing of personal data and end the surveillance of all users, including children, online. 

Similarly, pay-for-privacy schemes should be banned, and we welcome the recent decision by the European Commission to fine Meta for breaching the Digital Markets Act by offering its users a binary choice between paying for privacy or having their personal data used for ads targeting. Especially in the face of recent political pressure from the Trump administration to not enforce European tech laws, we applaud the European Commission for taking a clear stance and confirming that the protection of privacy online should never be a luxury or privilege. And especially vulnerable users like children should not be confronted with the choice between paying extra (something that many children will not be able to do) or being surveilled.

Svea Windwehr