[B] “Two-State Coexistence” [Latest News from Western Sahara]  Itsuko Hirata

1 week 6 days ago
On July 25, 2025, Israeli Prime Minister Netanyahu brazenly and shamelessly told a <bold-faced lie>: “What a bold-faced lie. (We have) no Gaza starvation operation, and there is no starvation in Gaza.” U.S. President Trump, who had come to Scotland to promote his newly opened golf course, was asked about Netanyahu’s <lie> during his meeting with British Prime Minister Keir Starmer and at long last admitted, “Those children look very hungry... there is real starvation in Gaza.” Now that you have admitted the starvation in Gaza, Trump, Godfather, make Netanyahu end his blockade of Gaza at once!  Make Netanyahu stop his slaughter!! Doesn’t even the Mafia have the compassion it calls <chivalry>?
日刊ベリタ

No, the UK’s Online Safety Act Doesn’t Make Children Safer Online

1 week 6 days ago

Young people should be able to access information, speak to each other and to the world, play games, and express themselves online without the government making decisions about what speech is permissible. But in one of the latest misguided attempts to protect children online, internet users of all ages in the UK are being forced to prove their age before they can access millions of websites under the country’s Online Safety Act (OSA). 

The legislation attempts to make the UK “the safest place” in the world to be online by placing a duty of care on online platforms to protect their users from harmful content. It mandates that any site accessible in the UK—including social media, search engines, music sites, and adult content providers—enforce age checks to prevent children from seeing harmful content. Harmful content is defined in three categories, and failure to comply could result in fines of up to 10% of global revenue or in courts blocking services:

  1. Primary priority content that is harmful to children: 
    1. Pornographic content.
    2. Content which encourages, promotes or provides instructions for:
      1. suicide;
      2. self-harm; or 
      3. an eating disorder or behaviours associated with an eating disorder.
  2. Priority content that is harmful to children: 
    1. Content that is abusive on the basis of race, religion, sex, sexual orientation, disability or gender reassignment;
    2. Content that incites hatred against people on the basis of race, religion, sex, sexual orientation, disability or gender reassignment; 
    3. Content that encourages, promotes or provides instructions for serious violence against a person; 
    4. Bullying content;
    5. Content which depicts serious violence against, or graphically depicts serious injury to, a person or animal (whether real or fictional); 
    6. Content that encourages, promotes or provides instructions for stunts and challenges that are highly likely to result in serious injury; and 
    7. Content that encourages the self-administration of harmful substances.
  3. Non-designated content that is harmful to children (NDC): 
    1. Content is NDC if it presents a material risk of significant harm to an appreciable number of children in the UK, provided that the risk of harm does not flow from any of the following:
      1. the content’s potential financial impact;
      2. the safety or quality of goods featured in the content; or
      3. the way in which a service featured in the content may be performed.

Online service providers must make a judgement about whether the content they host is harmful to children and, if so, address the risk by implementing a number of measures, which include, but are not limited to:

  1. Robust age checks: Services must use “highly effective age assurance to protect children from this content. If services have minimum age requirements and are not using highly effective age assurance to prevent children under that age using the service, they should assume that younger children are on their service and take appropriate steps to protect them from harm.”

    To do this, all users on sites that host this content must verify their age, for example by uploading a form of ID such as a passport, taking a face selfie or video to facilitate age assurance through third-party services, or giving permission for the age-check service to ask their bank whether they are over 18 (a minimal sketch of one such flow follows this list).

  2. Safer algorithms: Services “will be expected to configure their algorithms to ensure children are not presented with the most harmful content and take appropriate action to protect them from other harmful content.”

  3. Effective moderation: All services “must have content moderation systems in place to take swift action against content harmful to children when they become aware of it.”

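The “highly effective age assurance” duty above is typically met by handing the check to a third-party provider and having the site act only on the result. The sketch below illustrates that shape in Python; the provider, the HMAC-signed verdict format, and every identifier in it are assumptions made up for illustration only, not the API of Yoti, Ofcom guidance, or any real vendor.

```python
# Minimal sketch of a server-side age gate, assuming a HYPOTHETICAL third-party
# age-assurance provider that returns an HMAC-signed over/under-18 verdict.
# Nothing here describes a real vendor's API; it only shows the shape of the flow.
import hashlib
import hmac
import json
from dataclasses import dataclass
from typing import Optional

PROVIDER_SHARED_SECRET = b"demo-secret"  # assumed shared secret with the provider


@dataclass
class AgeVerdict:
    user_ref: str    # opaque reference issued by the provider, not the user's identity
    over_18: bool    # the only fact the site learns
    signature: str   # provider's HMAC-SHA256 over the payload


def _payload(user_ref: str, over_18: bool) -> bytes:
    # Canonical payload the provider is assumed to sign.
    return json.dumps({"user_ref": user_ref, "over_18": over_18}, sort_keys=True).encode()


def verdict_is_authentic(v: AgeVerdict) -> bool:
    """Verify the provider's signature, so the site can trust the verdict without
    ever seeing the ID document, face scan, or bank data behind it."""
    expected = hmac.new(PROVIDER_SHARED_SECRET, _payload(v.user_ref, v.over_18),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, v.signature)


def serve(verdict: Optional[AgeVerdict], content_is_18_plus: bool) -> str:
    """Gate 18+ content on a verified over-18 verdict; other content stays open."""
    if not content_is_18_plus:
        return "content"
    if verdict is None or not verdict_is_authentic(verdict) or not verdict.over_18:
        return "age check required"
    return "content"


# Example: a missing verdict never unlocks 18+ content, a signed over-18 verdict does.
print(serve(None, content_is_18_plus=True))  # -> age check required
sig = hmac.new(PROVIDER_SHARED_SECRET, _payload("u1", True), hashlib.sha256).hexdigest()
print(serve(AgeVerdict("u1", True, sig), content_is_18_plus=True))  # -> content
```

The point of the sketch, under those assumptions, is data minimisation: the site stores only an opaque reference and a yes/no result, while the ID documents, face scans, or bank records that produced it stay with the provider.
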
Since these measures took effect in late July, the social media platforms Reddit, Bluesky, Discord, and X all introduced age checks to block children from seeing harmful content on their sites. Porn websites like Pornhub and YouPorn implemented age assurance checks on their sites, now asking users to either upload government-issued ID, provide an email address so software can analyze other online services where it has been used, or submit their information to a third-party vendor for age verification. Sites like Spotify are also requiring users to submit face scans to the third-party digital identity company Yoti to access content labelled 18+. Ofcom, which oversees implementation of the OSA, went further by sending letters seeking to enforce the UK legislation against U.S.-based companies such as the right-wing platform Gab.

The UK Must Do Better

The UK is not alone in pursuing such a misguided approach to protecting children online: the U.S. Supreme Court recently paved the way for states to require websites to check the ages of users before allowing them access to graphic sexual materials; courts in France last week ruled that porn websites can check users’ ages; the European Commission is pushing forward with plans to test its age-verification app; and Australia’s ban on youth under the age of 16 accessing social media is likely to be implemented in December.

But the UK’s scramble to find an effective age verification method shows us that there isn't one, and it’s high time for politicians to take that seriously. The Online Safety Act is a threat to the privacy of users, restricts free expression by arbitrating speech online, exposes users to algorithmic discrimination through face checks, and leaves millions of people without a personal device or form of ID excluded from accessing the internet.

And, to top it all off, UK internet users are sending a very clear message that they do not want anything to do with this censorship regime. Just days after age checks came into effect, VPN apps became the most downloaded on Apple's App Store in the UK, and a petition calling for the repeal of the Online Safety Act recently hit more than 400,000 signatures.

The internet must remain a place where all voices can be heard, free from discrimination or censorship by government agencies. If the UK really wants to achieve its goal of being the safest place in the world to go online, it must lead the way in introducing policies that actually protect all users—including children—rather than pushing the enforcement of legislation that harms the very people it was meant to protect.

Paige Collings

New GenderIT.org edition: Unmasking digital stalkers

1 week 6 days ago
In this series, writers from across the Global Majority explore various questions and raise important points around how cyber stalking defines and dictates different aspects of one’s experiences of…
GenderIT.org

Sanseito Shuts Out a Kanagawa Shimbun Reporter: An Outrage Unbecoming of a Public Party Must Not Be Condoned (Manabu Ishibashi, Kanagawa Shimbun reporter)

1 week 6 days ago
Sanseito, which campaigned on a “Japanese First” platform of shutting out foreigners and won no fewer than 14 seats in the House of Councillors election, has resorted to the outrage of barring a Kanagawa Shimbun reporter (the author) from its press conferences, baring its true face as a far-right party. Those in power, riding roughshod, seek to suppress inconvenient criticism […]
admin

[B] Environmental Destruction and Human Rights Violations Caused by Lithium Development: An Appeal from Argentina  田氏 滋

1 week 6 days ago
As climate change worsens and droughts, torrential-rain disasters, and ever larger and more frequent typhoons strike one region after another, the world’s automakers and power producers are pursuing decarbonization built on lithium-ion batteries. That push, however, is being used to justify unwanted development forced on the countries that hold deposits of battery materials and on their residents. Argentina is one such country, and the people being sacrificed to prop up the national economy are raising their voices in anger.
日刊ベリタ