[B] Middle School Begins! ~ Ciao! Italy Newsletter

1 week 5 days ago
In September, our twins started middle school. Their five years of elementary school felt both long and short. Middle school lasts only three years, so it will be over before you know it, a classmate's mother with older children told me. (Sato Noriko, living in Italy)
Nikkan Berita

Gate Crashing: An Interview Series

1 week 5 days ago

There is a lot of bad on the internet and it seems to only be getting worse. But one of the things the internet did well, and is worth preserving, is nontraditional paths for creativity, journalism, and criticism. As governments and major corporations throw up more barriers to expression—and more and more gatekeepers try to control the internet—it’s important to learn how to crash through those gates. 

In EFF's interview series, Gate Crashing, we talk to people who have used the internet to take nontraditional paths to the very traditional worlds of journalism, creativity, and criticism. We hope it's both inspiring to see these people and enlightening for anyone trying to find voices they like online.  

Our mini-series will drop an episode each month, closing out 2025 in style.

  • Episode 1: Fanfiction Becomes Mainstream – Launching October 1*
  • Episode 2: From DIY to Publishing – Launching November 1
  • Episode 3: A New Path for Journalism – Launching December 1

Be sure to mark your calendar or check our socials on drop dates. If you have a friend or colleague who might be interested in watching our series, please forward this link: eff.org/gatecrashing

Check Out Episode 1

For over 35 years, EFF members have empowered attorneys, activists, and technologists to defend civil liberties and human rights online for everyone.

Tech should be a tool for the people, and we need you in this fight.

Donate Today


* This interview was originally published in December 2024. No changes have been made.

Katharine Trendacosta

Weekly Report: Alert Regarding Multiple Vulnerabilities in Cisco ASA and FTD (CVE-2025-20333, CVE-2025-20362)

1 week 5 days ago
Cisco Adaptive Security Appliance (ASA) and Firewall Threat Defense (FTD) contain multiple vulnerabilities. The developer has confirmed that these vulnerabilities are being actively exploited. The issue is resolved by updating the affected products to a fixed version. For details, refer to the information provided by the developer.

Wave of Phony News Quotes Affects Everyone—Including EFF

1 week 5 days ago

Whether due to generative AI hallucinations or human sloppiness, the internet is increasingly rife with bogus news content—and you can count EFF among the victims. 

WinBuzzer published a story June 26 with the headline, “Microsoft Is Getting Sued over Using Nearly 200,000 Pirated Books for AI Training,” containing this passage:

[Screenshot: winbuzzer_june_26.png]

That quotation from EFF’s Corynne McSherry was cited again in two subsequent, related stories by the same journalist—one published July 27, the other August 27.

But the link in that original June 26 post was fake. Corynne McSherry never wrote such an article, and the quote was bogus. 

Interestingly, we noted a similar issue with a June 13 post by the same journalist, in which he cited work by EFF Director of Cybersecurity Eva Galperin; this quote included the phrase “get-out-of-jail-free card” too. 

[Screenshot: winbuzzer_june_13.png]

Again, the link he inserted leads nowhere because Eva Galperin never wrote such a blog or white paper.  

When EFF reached out, the journalist—WinBuzzer founder and editor-in-chief Markus Kasanmascheff—acknowledged via email that the quotes were bogus. 

“This indeed must be a case of AI slop. We are using AI tools for research/source analysis/citations. I sincerely apologize for that and this is not the content quality we are aiming for,” he wrote. “I myself have noticed that in the particular case of the EFF for whatever reason non-existing quotes are manufactured. This usually does not happen and I have taken the necessary measures to avoid this in the future. Every single citation and source mention must always be double checked. I have been doing this already but obviously not to the required level. 

“I am actually manually editing each article and using AI for some helping tasks. I must have relied too much on it,” he added. 

AI slop abounds 

It’s not an isolated incident. Media companies large and small are using AI to generate news content because it’s cheaper than paying journalists’ salaries, but those savings can come at the cost of the outlets’ reputations.

The U.K.’s Press Gazette reported last month that Wired and Business Insider had to remove news features written by one freelance journalist amid concerns that the articles were likely AI-generated works of fiction: “Most of the published stories contained case studies of named people whose details Press Gazette was unable to verify online, casting doubt on whether any of the quotes or facts contained in the articles are real.”

And back in May, the Chicago Sun-Times had to apologize after publishing an AI-generated list of books that would make good summer reads—with 10 of the 15 recommended book descriptions and titles found to be “false, or invented out of whole cloth.” 

As journalist Peter Sterne wrote for Nieman Lab in 2022: 

Another potential risk of relying on large language models to write news articles is the potential for the AI to insert fake quotes. Since the AI is not bound by the same ethical standards as a human journalist, it may include quotes from sources that do not actually exist, or even attribute fake quotes to real people. This could lead to false or misleading reporting, which could damage the credibility of the news organization. It will be important for journalists and newsrooms to carefully fact check any articles written with the help of AI, to ensure the accuracy and integrity of their reporting. 

(Or did he write that? Sterne disclosed in that article that he used OpenAI’s ChatGPT-3 to generate that paragraph, ironically enough.) 

The Radio Television Digital News Association issued guidelines a few years ago for the use of AI in journalism, and the Associated Press is among many outlets that have developed guidelines of their own. The Poynter Institute offers a template for developing such policies.  

Nonetheless, some journalists or media outlets have been caught using AI to generate stories including fake quotes; for example, the Associated Press reported last year that a Wyoming newspaper reporter had filed at least seven stories that included AI-generated quotations from six people.  

WinBuzzer wasn’t the only outlet to falsely quote EFF this year. An April 19 article in Wander contained another bogus quotation from Eva Galperin: 

April 19 Wander clipping with fake quote from Eva Galperin

An email to the outlet demanding the article’s retraction went unanswered. 

In another case, WebProNews published a July 24 article quoting Eva Galperin under the headline “Risika Data Breach Exposes 100M Swedish Records to Fraud Risks,” but Eva confirmed she’d never spoken with them or given that quotation to anyone. The article no longer seems to exist on the outlet’s own website, but it was captured by the Internet Archive’s Wayback Machine.

[Screenshot: 07-24-2025_webpronews_screenshot.png]

A request for comment made through WebProNews’ “Contact Us” page went unanswered, and then they did it again on September 2, this time misattributing a statement to Corynne McSherry: 

[Screenshot: 09-02-2025_webpronews_corynne_mcsherry.png]
No such article in The Verge seems to exist, and the statement is not at all in line with EFF’s stance. 

Our most egregious example 

The top prize for audacious falsity goes to a June 18 article in the Arabian Post, since removed from the site after we flagged it to an editor. The Arabian Post is part of the Hyphen Digital Network, which describes itself as being “at the forefront of AI innovation” and offering “software solutions that streamline workflows to focus on what matters most: insightful storytelling.” The article in question included this passage:

Privacy advocate Linh Nguyen from the Electronic Frontier Foundation remarked that community monitoring tools are playing a civic role, though she warned of the potential for misinformation. “Crowdsourced neighbourhood policing walks a thin line—useful in forcing transparency, but also vulnerable to misidentification and fear-mongering,” she noted in a discussion on digital civil rights. 

[Screenshot: muck_rack_june_19_-_arabian_post.png]

Nobody at EFF recalls anyone named Linh Nguyen ever having worked here, nor have we been able to find anyone by that name who works in the digital privacy sector. So not only was the quotation fake, but apparently the purported source was, too.  

Now, EFF is all about having our words spread far and wide. Per our copyright policy, any and all original material on the EFF website may be freely distributed at will under the Creative Commons Attribution 4.0 International License (CC-BY), unless otherwise noted. 

But we don't want AI and/or disreputable media outlets making up words for us. False quotations that misstate our positions damage the trust that the public and more reputable media outlets have in us. 

If you're worried about this (and rightfully so), the best thing a news consumer can do is invest a little time and energy to learn how to discern the real from the fake. It’s unfortunate that it's the public’s burden to put in this much effort, but while we're adjusting to new tools and a new normal, a little effort now can go a long way.  
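One concrete check anyone can apply is the one that would have caught every example above: open the cited source and confirm the quoted words actually appear there. A minimal sketch in Python (a hypothetical helper, not a tool EFF or any outlet mentioned here uses; it assumes you have already fetched the cited page's text or HTML):

```python
import html
import re

def normalize(text: str) -> str:
    """Lowercase, unify curly quotes and dashes, and collapse whitespace."""
    text = html.unescape(text)
    for fancy, plain in {"\u201c": '"', "\u201d": '"', "\u2018": "'",
                         "\u2019": "'", "\u2014": "-", "\u2013": "-"}.items():
        text = text.replace(fancy, plain)
    return re.sub(r"\s+", " ", text).strip().lower()

def quote_appears(page_text: str, quote: str) -> bool:
    """True only if the quoted passage actually occurs in the cited page."""
    return normalize(quote) in normalize(page_text)

# A fabricated quote fails the check even against a superficially similar page.
page = "<p>Eva Galperin wrote: \u201cCheck every citation before publishing.\u201d</p>"
print(quote_appears(page, "Check every citation before publishing."))  # True
print(quote_appears(page, "get-out-of-jail-free card"))                # False
```

Normalizing quotes and whitespace matters because CMSes often convert straight quotes to typographic ones, which would otherwise cause real quotes to be flagged as missing.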

As we’ve noted before in the context of election misinformation, the nonprofit journalism organization ProPublica has published a handy guide about how to tell if what you’re reading is accurate or “fake news.” And the International Federation of Library Associations and Institutions infographic on How to Spot Fake News is a quick and easy-to-read reference you can share with friends: 

[Infographic: how_to_spot_fake_news.jpg]

Josh Richman

Decoding Meta's Advertising Policies for Abortion Content

1 week 5 days ago

This is the seventh installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here. 

For users hoping to promote or boost an abortion-related post on Meta platforms, the Community Standards are just step one. While the Community Standards apply to all posts, paid posts and advertisements must also comply with Meta's Advertising Standards. It’s easy to understand why Meta places extra requirements on paid content. In fact, its “advertising policy principles” outline several important and laudable goals, including promoting transparency and protecting users from scams, fraud, and unsafe and discriminatory practices.

But additional standards bring additional content moderation, and with that comes increased potential for user confusion and moderation errors. Meta’s ad policies, like its enforcement policies, are vague on a number of important questions. Because of this, it’s no surprise that Meta's ad policies repeatedly came up as we reviewed our Stop Censoring Abortion submissions. 

There are two important things to understand about these ad policies. First, the ad policies do indeed impose stricter rules on content about abortion—and specifically medication abortion—than Meta’s Community Standards do. To help users better understand what is and isn’t allowed, we took a closer look at the policies and what Meta has said about them. 

Second, despite these requirements, the ad policies do not categorically block abortion-related posts from being promoted as ads. In other words, while Meta’s ad policies introduce extra hurdles, they should not, in theory, be a complete barrier to promoting abortion-related posts as boosted content. Still, our analysis revealed that Meta is falling short in several areas. 

What’s Allowed Under the Drugs and Pharmaceuticals Policy? 

When EFF asked Meta about potential ad policy violations, the company first pointed to its Drugs and Pharmaceuticals policy. In the abortion care context, this policy applies to paid content specifically about medication abortion and use of abortion pills. Ads promoting these and other prescription drugs are permitted, but there are additional requirements: 

  • To reduce risks to consumers, Meta requires advertisers to prove they’re appropriately licensed and get prior authorization from Meta.  
  • Authorization is limited to online pharmacies, telehealth providers, and pharmaceutical manufacturers.  
  • The ads also must only target people 18 and older, and only in the countries in which the user is licensed.  

Understanding what counts as “promoting prescription drugs” is where things get murky. Crucially, the written policy states that advertisers do not need authorization to run ads that “educate, advocate or give public service announcements related to prescription drugs” or that “promote telehealth services generally.” This should, in theory, leave a critical opening for abortion advocates focused on education and advocacy rather than direct prescription drug sales. 

But Meta told EFF that advertisers “must obtain authorization to post ads discussing medical efficacy, legality, accessibility, affordability, and scientific merits and restrict these ads to adults aged 18 or older.” Yet many of these topics—medical efficacy, legality, accessibility—are precisely what educational content and advocacy often address. Where’s the line? This vagueness makes it difficult for abortion pill advocates to understand what’s actually permitted. 

What’s Allowed Under the Social Issues Policy?  

Meta also told EFF that its Ads about Social Issues, Elections or Politics policy may apply to a range of abortion-related content. Under this policy, advertisers within certain countries—including the U.S.—must meet several requirements before running ads about certain “social issues.” Requirements include: 

  • Completing Meta’s social issues authorization process; 
  • Including a verified "Paid for by" disclaimer on the ad; and 
  • Complying with all applicable laws and regulations. 

While certain news publishers are exempt from the policy, it otherwise applies to a wide range of accounts, including activists, brands, non-profit groups, and political organizations. 

Meta defines “social issues” as “sensitive topics that are heavily debated, may influence the outcome of an election or result in/relate to existing or proposed legislation.” What falls under this definition differs by country, and Meta provides country-specific topics lists and examples. In the U.S. and several other countries, ads that include “discussion, debate, or advocacy for or against...abortion services and pro-choice/pro-life advocacy” qualify as social issues ads under the “Civil and Social Rights” category.

Confusingly, Meta differentiates this from ads that primarily sell a product or promote a service, which do not require authorization or disclaimers, even if the ad secondarily includes advocacy for an issue. For instance, according to Meta's examples, an ad that says, “How can we address systemic racism?” counts as a social issues ad and requires authorization and disclaimers. On the other hand, an ad that says, “We have over 100 newly-published books about systemic racism and Black History now on sale” primarily promotes a product, and would not require authorization and disclaimers. But even with Meta's examples, the line is still blurry. This vagueness invites confusion and content moderation errors.

What About the Health and Wellness Policy? 

Oddly, Meta never specifically identified its Health and Wellness ad policy to EFF, though the policy is directly relevant to abortion-related paid content. This policy addresses ads about reproductive health and family planning services, and requires ads regarding “abortion medical consultation and related services” to be targeted at users 18 and older. It also expressly states that for paid content involving “[r]eproductive health and wellness drugs or treatments that require prescription,” accounts must comply with both this policy and the Drugs and Pharmaceuticals policy. 

This means abortion advocates must navigate the Drugs and Pharmaceuticals policy, the Social Issues policy, and the Health and Wellness policy—each with its own requirements and authorization processes. That Meta didn’t mention this highly relevant policy when asked about abortion advertising underscores how confusingly dispersed these rules are. 

Like the Drugs policy, the Health and Wellness policy contains an important education exception for abortion advocates: The age-targeting requirements do not apply to “[e]ducational material or information about family planning services without any direct promotion or facilitation of the services.”  

When Content Moderation Makes Mistakes 

Meta's complex policies create fertile ground for automated moderation errors. Our Stop Censoring Abortion survey submissions revealed that Meta's systems repeatedly misidentified educational abortion content as Community Standards violations. The same over-moderation problems are also a risk in the advertising context.  

On top of that, content moderation errors even on unpaid posts can trigger advertising restrictions and penalties. Meta's advertising restrictions policy states that Community Standards violations can result in restricted advertising features or complete advertising bans. This creates a compounding problem when educational content about abortion is wrongly flagged. Abortion advocates could face a double penalty: first their content is removed, then their ability to advertise is restricted. 

This may be, in part, what happened to Red River Women's Clinic, a Minnesota abortion clinic we wrote about earlier in this series. When its account was incorrectly suspended for violating the “Community Standards on drugs,” the clinic appealed and eventually reached out to a contact at Meta. When Meta finally removed the incorrect flag and restored the account, Red River received a message informing them they were no longer out of compliance with the advertising restrictions policy. 

Screenshot submitted by Red River Women's Clinic to EFF

How Meta Can Improve 

Our review of the ad policies and survey submissions showed that there is room for improvement in how Meta handles abortion-related advertising. 

First, Meta should clarify what is permitted without prior authorization under the Drugs and Pharmaceuticals policy. As noted above, the policies say advertisers do not need authorization to “educate, advocate or give public service announcements,” but Meta told EFF authorization is needed to promote posts discussing “medical efficacy, legality, accessibility, affordability, and scientific merits.” Users should be able to more easily determine what content falls on each side of that line.  

Second, Meta should clarify when its Social Issues policy applies. Does discussing abortion at all trigger its application? Meta says the policy excludes posts primarily advertising a service, yet this is not what survey respondent Lynsey Bourke experienced. She runs the Instagram account Rouge Doulas, a global abortion support collective and doula training school. Rouge Doulas had a paid post removed under this very policy for advertising something that is clearly a service: its doula training program called “Rouge Abortion Doula School.” The policy’s current ambiguity makes it difficult for advocates to create compliant content with confidence.

Third, and as EFF has previously argued, Meta should ensure its automated system is not over-moderating. Meta must also provide a meaningful appeals process for when errors inevitably occur. Automated systems are blunt tools and are bound to make mistakes on complex topics like abortion. But simply using an image of a pill on an educational post shouldn’t automatically trigger takedowns. Improving automated moderation will help correct the cascading effect of incorrect Community Standards flags triggering advertising restrictions. 

With clearer policies, better moderation, and a commitment to transparency, Meta can make it easier for accounts to share and boost vital reproductive health information. 

This is the seventh post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more at https://www.eff.org/pages/stop-censoring-abortion   

Affected by unjust censorship? Share your story using the hashtag #StopCensoringAbortion. Amplify censored posts and accounts, share screenshots of removals and platform messages—together, we can demonstrate how these policies harm real people. 

Lisa Femia