Satellite Broadcasting Working Group (17th Meeting)
Consultation with the Information and Communications Council on "The Ideal Approach to a Smooth Transition of Fixed-Line Telephone Services"
Results of the G20 Digital Economy Ministers' Meeting and the AI Task Force Ministers' Meeting
328th Meeting of the Monitoring Committee for Public-Private Competitive Tendering (Meeting Notice)
Call for Comments on the Draft Report of the IP Network Facilities Committee
Labour Force Survey (Basic Tabulation), August 2025 (Reiwa 7)
Study Group on the Broadcasting System in the Digital Age (37th Meeting): Handout Materials
Tips to Protect Your Posts About Reproductive Health From Being Removed
This is the ninth installment in a blog series documenting EFF’s findings from the Stop Censoring Abortion campaign. You can read additional posts here.
Meta has been getting content moderation wrong for years, like most platforms that host user-generated content. Sometimes it’s a result of deliberate design choices—privacy rollbacks, opaque policies, features that prioritize growth over safety—made even when the company knows that those choices could negatively impact users. Other times, it’s simply the inevitable outcome of trying to govern billions of posts with a mix of algorithms and overstretched human reviewers. Importantly, users shouldn’t have to worry about their posts being deleted or their accounts getting banned when they share factual health information that doesn’t violate the platforms' policies. But knowing more about what the algorithmic moderation is likely to flag can help you to avoid its mistakes.
We analyzed the roughly one hundred survey submissions we received from social media users in response to our Stop Censoring Abortion campaign. Their stories revealed some clear patterns: certain words, images, and phrases seemed to trigger takedowns, even when posts didn’t come close to violating Meta’s rules.
For example, a post linking to information on how people are accessing abortion pills online is clearly not an offer to buy or sell pills, but an algorithm, or a human reviewer who can’t tell the difference, might wrongly flag it as violating Meta’s policies on promoting or selling “restricted goods.”
That doesn’t mean you’re powerless. For years, people have used “algospeak”—creative spelling, euphemisms, or indirection—to sidestep platform filters. Abortion rights advocates are now forced into similar strategies, even when their speech is perfectly legal. It’s not fair, but it might help you keep your content online. Here are some things we learned from our survey:
Practical Tips to Reduce the Risk of Takedowns
While traditional social media platforms can help people reach larger audiences, using them also generally means you have to hand over control of what you and others are able to see to the people who run the company. This is the deal that large platforms offer—and while most of us want platforms to moderate some content (even if that moderation is imperfect), current systems of moderation often reflect existing societal power imbalances and impact marginalized voices the most.
There are ways companies and governments could better balance the power between users and platforms. In the meantime, there are steps you can take right now to break the hold these platforms have:
- Images and keywords matter. Posts with pill images, or accounts with “pill” in their names, were flagged often—even when the posts weren’t offering to sell medication. Before posting, consider whether you need to include an image of a pill or the word “pill,” or whether there’s another way to communicate your message.
- Clarity beats vagueness. Saying “we can help you find what you need” or “contact me for more info” might sound innocuous, but to an algorithm, it can look like an offer to sell drugs. Spell out what kind of support you do and don’t provide—for example: “We can talk through options and point you toward trusted resources. We don’t provide medical services or medication.”
- Be careful with links. Direct links to organizations or services that provide abortion pills were often flagged, even if the organizations operate legally. Instead of linking, try spelling out the name of the site or account.
- Certain word combos are red flags. Posts that included words like “mifepristone,” “abortion,” and “mail” together were frequently removed. You may still want to use them—they’re accurate and important—but know they make your post more likely to be flagged. (A toy example of how this kind of keyword filter can misfire follows this list.)
- Ads are even stricter. Meta requires pharmaceutical advertisers to prove they’re licensed in the countries they target. If you boost posts, assume the more stringent advertising standards will be applied.
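To see why these word combinations get caught, it helps to picture how a crude filter might behave. The sketch below is a hypothetical, minimal example in Python, not Meta’s actual system; the keyword buckets and the threshold are invented purely for illustration, and real classifiers are far more complex.

```python
# Hypothetical illustration only -- not Meta's real moderation system.
# A naive filter that flags posts when words from several "risk" buckets
# co-occur, which is roughly why accurate, legal information can be
# swept up alongside actual offers to sell medication.

import re

# Made-up keyword buckets for illustration.
RISK_BUCKETS = {
    "medication": {"pill", "pills", "mifepristone", "misoprostol"},
    "topic": {"abortion"},
    "transaction": {"mail", "order", "buy", "ship"},
}

def buckets_hit(post: str) -> set[str]:
    """Return the names of keyword buckets that appear in the post."""
    words = set(re.findall(r"[a-z]+", post.lower()))
    return {name for name, keywords in RISK_BUCKETS.items() if words & keywords}

def is_flagged(post: str, threshold: int = 3) -> bool:
    """Flag a post when words from `threshold` or more buckets co-occur."""
    return len(buckets_hit(post)) >= threshold

# A factual, legal sentence trips the same wires as a sales pitch:
print(is_flagged("Studies show mifepristone is safe; some people receive abortion pills by mail."))  # True
print(is_flagged("Abortion is legal in my state."))  # False
```

The point is that a filter like this has no way to tell accurate, legal information from an offer to sell medication; it only sees which words appear together.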
Big platforms give you reach, but they also set the rules—and those rules usually favor corporate interests over human rights. You don’t have to accept that as the only way forward:
- Keep a backup. Export your data regularly so you’re not left empty-handed if your account disappears overnight.
- Build your own space. Hosting a website isn’t free, but it puts you in control.
- Explore other platforms. Newsletters, Discord, and other community tools offer more control than Facebook or Instagram. Decentralized platforms like Mastodon and Bluesky aren’t perfect, but they show what’s possible when moderation isn’t dictated from the top down. (Learn more about the differences between Mastodon, Bluesky, and Threads, and how these kinds of platforms help us build a better internet.) A short sketch of posting through Mastodon’s open API follows this list.
- Push for interoperability. Imagine being able to take your audience with you when you leave a platform. That’s the future we should be fighting for. (For more on interoperability and Meta, check out this video where Cory Doctorow explains what an interoperable Facebook would look like.)
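As one concrete illustration of that extra control: on a Mastodon server you run or trust, publishing happens over a small, documented HTTP API, so you can build your own posting and backup tools instead of depending on a single company’s opaque systems. The snippet below is a minimal sketch; the instance URL and access token are placeholders you would replace with your own.

```python
# Minimal sketch: publish a post ("status") to a Mastodon server you control
# or trust, using Mastodon's documented REST API. The instance URL and token
# below are placeholders -- replace them with your own server and an access
# token created in your account's Development settings.

import requests

INSTANCE = "https://mastodon.example"   # placeholder: your instance
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"      # placeholder: token with write scope

def post_status(text: str) -> dict:
    """Create a new post and return the API's JSON response."""
    response = requests.post(
        f"{INSTANCE}/api/v1/statuses",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        data={"status": text},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    post = post_status("Sharing accurate reproductive health resources.")
    print(post["url"])  # link to the post on your instance
```

The same API also lets you fetch and archive your own posts, which makes the “keep a backup” advice above much easier to follow on these platforms.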
If you’re working in abortion access—whether as a provider, activist, or volunteer—your privacy and security matter. The same is true for patients. Check out EFF’s Surveillance Self-Defense for tailored guides. Look at resources from groups like Digital Defense Fund and learn how location tracking tools can endanger abortion access. If you run an organization, consider some of the ways you can minimize what information you collect about patients, clients, or customers, in our guide to Online Privacy for Nonprofits.
Platforms like Meta insist they want to balance free expression and safety, but their blunt systems consistently end up reinforcing existing inequalities—silencing the very people who most need to be heard. Until they do better, it’s on us to protect ourselves, share our stories, and keep building the kind of internet that respects our rights.
This is the ninth post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more in the series: https://www.eff.org/pages/stop-censoring-abortion
Affected by unjust censorship? Share your story using the hashtag #StopCensoringAbortion. Amplify censored posts and accounts, share screenshots of removals and platform messages—together, we can demonstrate how these policies harm real people.
LocNet Catalytic Microgrant – 2025: Supporting co-creation and prototyping of community-driven digital solutions for climate resilience and advancing environmental justice in Kenya
[Monthly Media Review: Newspapers] The papers don’t give a “change of government” even a passing thought (by 白垣詔男)
Flock’s Gunshot Detection Microphones Will Start Listening for Human Voices
Flock Safety, the police technology company most notable for their extensive network of automated license plate readers spread throughout the United States, is rolling out a new and troubling product that may create headaches for the cities that adopt it: detection of “human distress” via audio. As part of their suite of technologies, Flock has been pushing Raven, their version of acoustic gunshot detection. These devices capture sounds in public places and use machine learning to try to identify gunshots and then alert police—but EFF has long warned that they are also high-powered microphones parked above densely populated city streets. Cities now have one more reason to follow the lead of many other municipalities and cancel their Flock contracts before this new feature harms residents’ civil liberties and creates fresh problems for local governments.
In marketing materials, Flock has been touting new features for its Raven product, including the ability to alert police based on sounds such as “distress.” The online ad for the product, which allows cities to apply for early access to the technology, shows an image of police getting an alert for “screaming.”
It’s unclear how this technology works. Acoustic gunshot detection generally listens for sounds that signify gunfire, though in practice these systems often mistake car backfires or fireworks for gunshots. Flock needs to come forward now with an explanation of exactly how its new feature functions. It is also unclear how these devices will interact with state “eavesdropping” laws that limit listening to or recording the private conversations that often take place in public.
Flock is no stranger to creating legal trouble for the cities and states that adopt its products. In Illinois, Flock was accused of violating state law by allowing Immigration and Customs Enforcement (ICE), a federal agency, access to license plate reader data taken within the state. That’s not all. In 2023, a North Carolina judge halted the installation of Flock cameras statewide because the company was operating in the state without a license. When the city of Evanston, Illinois recently canceled its contract with Flock, it ordered the company to take down their license plate readers, only for Flock to mysteriously reinstall them a few days later. The city has since sent Flock a cease-and-desist order and, in the meantime, has put black tape over the cameras. For some, the technology isn’t worth its mounting downsides. As one Illinois village trustee wrote while explaining his vote to cancel his village’s contract with Flock, “According to our own Civilian Police Oversight Commission, over 99% of Flock alerts do not result in any police action.”
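A figure like that is what simple base-rate arithmetic predicts: when real gunshots are rare among the loud, impulsive sounds a city produces, even a detector with decent accuracy generates alerts that are mostly false. The numbers in the sketch below are invented solely to illustrate the arithmetic; they are not Flock’s figures or anyone’s measurements.

```python
# Illustrative arithmetic only -- the numbers are invented, not Flock's.
# When true gunshots are rare among loud impulsive sounds, even a detector
# with high sensitivity and a low false-positive rate produces alerts that
# are mostly false, because false positives are drawn from a much larger
# pool of non-gunshot sounds.

gunshot_rate = 0.001        # assumed: 1 in 1,000 loud sounds is a real gunshot
sensitivity = 0.95          # assumed: detector catches 95% of real gunshots
false_positive_rate = 0.02  # assumed: 2% of other loud sounds trigger an alert

true_alerts = gunshot_rate * sensitivity
false_alerts = (1 - gunshot_rate) * false_positive_rate
precision = true_alerts / (true_alerts + false_alerts)

print(f"Share of alerts that are real gunshots: {precision:.1%}")
# -> roughly 4.5%, i.e. the overwhelming majority of alerts are false
```

If “distress” in human voices is an even fuzzier category than gunfire, the pool of potential false triggers only grows.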
Gunshot detection technology is dangerous enough as it is—police showing up to alerts they think are gunfire only to find children playing with fireworks is a recipe for innocent people to get hurt. This isn’t hypothetical: in Chicago, a child really was shot at by police who believed, because of a ShotSpotter alert, that they were responding to a shooting. Introducing a new feature that allows the Raven microphones already installed all over cities to begin listening for human voices in distress is likely to open up a whole new set of unforeseen legal, civil liberties, and even bodily safety consequences.