[Publishing World News] Keep at it! Steady, grassroots efforts to promote publishing
Amid a situation that could produce a right-wing Takaichi government, let's join the "Citizens' Counterattack Action"
China: Ten lies and ten truths about the Jasic labor dispute
Contribution: What came into view beyond the LDP presidential race
[B] Chihiro Nishikata, "Myanmar: Why Did Its Gentle Citizens Take Up Arms?" — our Asian neighbors ask what democracy really is
[Relay Commentary] Why did Japan start the war against the United States? = Ken Fujimori (JCJ representative committee member)
Korean labor news, second half of September: Moves to eradicate serious workplace accidents
Yuichi Kaido: Takaichi has been chosen as LDP president; the moment we have feared has finally arrived
"I Just Want a Normal Job": Remastered memorial screening for the seventh memorial service of Nobukazu Minakura
Diary from the anti-nuclear tent in front of METI (10/2): Don't let TEPCO restart Kashiwazaki-Kariwa Unit 6
Looking back on 30 years of advocacy for a gender-inclusive internet: Carrying hope and lessons forward
What Europe’s New Gig Work Law Means for Unions and Technology
At EFF, we believe that tech rights are workers' rights. Since the pandemic, workers of all kinds have been subjected to increasingly invasive forms of bossware. These are the "algorithmic management" tools that surveil workers on and off the job, often running on devices that (nominally) belong to workers, hijacking our phones and laptops. On the job, digital technology can become both a system of ubiquitous surveillance and a means of total control.
Enter the EU’s Platform Work Directive (PWD). The PWD was finalized in 2024, and every EU member state will have to implement (“transpose”) it by 2026. The PWD contains far-reaching measures to protect workers from abuse, wage theft, and other unfair working conditions.
But the PWD isn’t self-enforcing! Over the decades that EFF has fought for user rights, we’ve proved that having a legal right on paper isn’t the same as having that right in the real world. And workers are rarely positioned to take on their bosses in court or at a regulatory body. To do that, they need advocates.
That’s where unions come in. Unions are well-positioned to defend their members – and all workers (EFF employees are proudly organized under the International Federation of Professional and Technical Engineers).
The European Trade Union Confederation has just published “Negotiating the Algorithm,” a visionary – but detailed and down-to-earth – manual for unions seeking to leverage the PWD to protect and advance workers’ interests in Europe.
The report notes the alarming growth of algorithmic management, with 79% of European firms employing some form of bossware. Report author Ben Wray enumerates many of the harms of algorithmic management, such as “algorithmic wage discrimination,” where each worker is offered a different payscale based on surveillance data that is used to infer how economically desperate they are.
Algorithmic management tools can also be used for wage theft, for example, by systematically undercounting the distances traveled by delivery drivers or riders. These tools can also subject workers to danger by penalizing workers who deviate from prescribed tasks (for example, when riders are downranked for taking an alternate route to avoid a traffic accident).
Gig workers live under the constant threat of being "deactivated" (kicked off the app) and feel pressure to do unpaid work for clients who can threaten their livelihoods with one-star reviews. Workers also face automated deactivation: a whole host of "anti-fraud" tripwires can see workers deactivated without appeal. These risks do not befall all workers equally: Black and brown workers face a disproportionate risk of deactivation when they fail facial recognition checks meant to prevent workers from sharing an account (facial recognition systems make more errors when dealing with darker skin tones).
Algorithmic management is typically accompanied by a raft of cost-cutting measures, and workers under algorithmic management often find that their employer’s human resources department has been replaced with chatbots, web-forms, and seemingly unattended email boxes. When algorithmic management goes wrong, workers struggle to reach a human being who can hear their appeal.
For these reasons and more, the ETUC believes that unions need to invest in technical capacity to protect workers’ interests in the age of algorithmic management.
The report sets out many technological activities that unions can get involved with. At the most basic level, unions can invest in developing analytical capabilities, so that when they request logs from algorithmic management systems as part of a labor dispute, they can independently analyze those files.
But that's just table stakes. Unions should also consider investing in "counter apps" that help workers. There are apps that act as an external check on employers' automation, like the UberCheats app, which double-checked the mileage that Uber drivers were paid for. There are apps that enable gig workers to collectively refuse lowball offers, raising the prevailing wage for all the workers in a region, such as the Brazilian StopClub app. Indonesian gig riders have a wide range of "tuyul" apps that let them modify the functionality of their dispatch apps. We love this kind of "adversarial interoperability." Any time the users of technology get to decide how it works, we celebrate. And in the US, this sort of tech-enabled collective action by workers is likely to be shielded from antitrust liability even if the workers involved are classified as independent contractors.
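To make the mileage double-check concrete, here is a minimal sketch of the kind of audit such a counter app or union tech team might run. It is not UberCheats' actual code; the function names, trace data, and 10% tolerance are all hypothetical. It compares the distance a platform paid for against the distance implied by the trip's own GPS trace, using the haversine great-circle formula:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def audit_trip(gps_trace, paid_km, tolerance=0.10):
    """Compare the distance implied by a GPS trace with the distance paid.

    gps_trace: list of (lat, lon) points recorded during the trip.
    paid_km:   distance the platform compensated, in km.
    Returns (actual_km, underpaid), where underpaid is True when the paid
    distance falls short of the traced distance by more than `tolerance`.
    """
    actual_km = sum(
        haversine_km(*a, *b) for a, b in zip(gps_trace, gps_trace[1:])
    )
    underpaid = paid_km < actual_km * (1 - tolerance)
    return actual_km, underpaid

# Hypothetical trace of roughly 3.3 km, for which the platform paid 2.5 km.
trace = [(52.5200, 13.4050), (52.5300, 13.4200), (52.5400, 13.4400)]
actual, underpaid = audit_trip(trace, paid_km=2.5)
```

Run at scale over a fleet's logs, discrepancies like this one stop being individual anecdotes and become the kind of systematic evidence a union can bring to a labor dispute.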
Developing in-house tech teams also gives unions the know-how to develop the tools for organizers and workers to coordinate their efforts to protect workers. The report acknowledges that this is a lot of tech work to ask individual unions to fund, and it moots the possibility of unions forming cooperative ventures to do this work for the unions in the co-op. At EFF, we regularly hear from skilled people who want to become public interest technologists, and we bet there’d be plenty of people who’d jump at the chance to do this work.
The new Platform Work Directive gives workers and their representatives the right to challenge automated decision-making, to peer inside the algorithms used to dispatch and pay workers, to speak to a responsible human about disputes, and to have their privacy and other fundamental rights protected on the job. It represents a big step forward for workers’ rights in the digital age.
But as the European Trade Union Confederation’s report reminds us, these rights are only as good as workers’ ability to claim them. After 35 years of standing up for people’s digital rights, we couldn’t agree more.
Tile’s Lack of Encryption Is a Danger for Users Everywhere
In research shared with Wired this week, security researchers detailed a series of vulnerabilities and design flaws with Life360’s Tile Bluetooth trackers that make it easy for stalkers and the company itself to track the location of Tile devices.
Tile trackers are small Bluetooth trackers, similar to Apple's AirTags, but they work on their own network, not Apple's. We've been raising concerns about these types of trackers since they were first introduced and have provided guidance for finding them if you think someone is using them to track you without your knowledge.
EFF has worked on improving the Detecting Unwanted Location Trackers standard that Apple, Google, and Samsung use, and these companies have at least made incremental improvements. But Tile has done little to mitigate the concerns we've raised about stalkers using its devices to track people.
One of the core fundamentals of that standard is that Bluetooth trackers should rotate their MAC address, making them harder for a third-party to track, and that they should encrypt information sent. According to the researchers, Tile does neither.
This has a direct impact on the privacy of legitimate users and opens the device up to potentially even more dangerous stalking. Tile devices do have a rotating ID, but since the MAC address is static and unencrypted, anyone in the vicinity could pick up and track that Bluetooth device.
Other Bluetooth trackers don’t broadcast their MAC address, and instead use only a rotating ID, which makes it much harder for someone to record and track the movement of that tag. Apple, Google, and Samsung also all use end-to-end encryption when data about the location is sent to the companies’ servers, meaning the companies themselves cannot access that information.
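To illustrate why rotation matters, here is a minimal, hypothetical sketch of a rotating-identifier scheme. This is not Tile's, Apple's, or Google's actual protocol; the rotation interval, key size, and truncation length are all assumptions. The idea is that the tracker and its owner's app derive matching identifiers from a shared secret, so the broadcast changes every interval and cannot be linked across intervals by an eavesdropper:

```python
import hmac
import hashlib

ROTATION_SECONDS = 900  # assumed: rotate the broadcast identifier every 15 minutes

def rotating_id(shared_secret: bytes, unix_time: int) -> bytes:
    """Derive the identifier broadcast during the current rotation epoch.

    Only parties holding shared_secret (the tracker and its owner's app)
    can recompute the identifier; an eavesdropper sees an opaque value
    that changes every epoch and cannot be linked across epochs.
    """
    epoch = unix_time // ROTATION_SECONDS
    mac = hmac.new(shared_secret, epoch.to_bytes(8, "big"), hashlib.sha256)
    return mac.digest()[:6]  # truncate to fit a small advertising payload

secret = b"\x01" * 32  # provisioned when the tracker is paired

# The identifier is stable within one 15-minute epoch...
assert rotating_id(secret, 1_000_000) == rotating_id(secret, 1_000_500)
# ...and changes (unlinkably) in the next epoch.
assert rotating_id(secret, 1_000_000) != rotating_id(secret, 1_001_000)
```

A static MAC address defeats this entirely: however often the payload rotates, the unchanging address gives a stalker a stable handle to follow.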
In its privacy policy, Life360 states that, "You are the only one with the ability to see your Tile location and your device location." But if the information from a tracker is sent to and stored by Tile in cleartext (i.e., unencrypted), as the researchers believe, then the company itself can see the location of the tags and their owners, turning them from single-item trackers into surveillance tools.
There are also issues with the “anti-theft mode” that Tile offers. The anti-theft setting hides the tracker from Tile’s “Scan and Secure” detection feature, so it can’t be easily found using the app. Ostensibly this is a feature meant to make it harder for a thief to just use the app to locate a tracker. In exchange for enabling the anti-theft feature, a user has to submit a photo ID and agree to pay a $1 million fine if they’re convicted of misusing the tracker.
But that’s only helpful if the stalker gets caught, which is a lot less likely when the person being tracked can’t use the anti-stalking protection feature in the app to find the tracker following them. As we’ve said before, it is impossible to make an anti-theft device that secretly notifies only the owner without also making a perfect tool for stalking.
Life360, the company that owns Tile, told Wired it "made a number of improvements" after the researchers reported the vulnerabilities, but did not detail what those improvements are.
Many of these issues would be mitigated by doing what Tile's competitors already do: encrypting the broadcasts from its Bluetooth trackers and randomizing their MAC addresses. Every company in the location tracker business has a responsibility to safeguard people, not just their lost keys.
Hey, San Francisco, There Should be Consequences When Police Spy Illegally
A San Francisco supervisor has proposed eliminating financial consequences for police and other city agencies that break a landmark surveillance oversight law. In 2019, organizations from across the city worked together to help pass that law, which required law enforcement to get the approval of democratically elected officials before they bought and used new spying technologies. Bit by bit, the San Francisco Police Department and the Board of Supervisors have weakened that law—but one important feature of the law remained: if city officials are caught breaking this law, residents can sue to enforce it, and if they prevail they are entitled to attorney fees.
Now Supervisor Matt Dorsey believes that this important accountability feature is “incentivizing baseless but costly lawsuits that have already squandered hundreds of thousands of taxpayer dollars over bogus alleged violations of a law that has been an onerous mess since it was first enacted.”
Between 2010 and 2023, San Francisco had to spend roughly $70 million to settle civil suits brought against the SFPD for alleged misconduct ranging from shooting city residents to wrongfully firing whistleblowers. This is not “squandered” money; it is compensating people for injury. We are all governed by laws and are all expected to act accordingly—police are not exempt from consequences for using their power wrongfully. In the 21st century, this accountability must extend to using powerful surveillance technology responsibly.
The ability to sue a police department when they violate the law is called a “private right of action” and it is absolutely essential to enforcing the law. Government officials tasked with making other government officials turn square corners will rarely have sufficient resources to do the job alone, and often they will not want to blow the whistle on peers. But city residents empowered to bring a private right of action typically cannot do the job alone, either—they need a lawyer to represent them. So private rights of action provide for an attorney fee award to people who win these cases. This is a routine part of scores of public interest laws involving civil rights, labor safeguards, environmental protection, and more.
Without an enforcement mechanism to hold police accountable, many will just ignore the law. They’ve done it before. AB 481 is a California state law that requires police to get elected official approval before attempting to acquire military equipment, including drones. The SFPD knowingly ignored this law. If it had an enforcement mechanism, more police would follow the rules.
President Trump recently included San Francisco in a list of cities he would like the military to occupy. Law enforcement agencies across the country, either willingly or by compulsion, have been collaborating with federal agencies operating at the behest of the White House. So it would be best for cities to keep their co-optable surveillance infrastructure small, transparent, and accountable. With authoritarianism looming, now is not the time to make police harder to hold to account—especially considering SFPD has already disclosed surveillance data to Immigration and Customs Enforcement (ICE) in violation of California state law.
We’re calling on the Board of Supervisors to reject Supervisor Dorsey’s proposal. If police want to avoid being sued and forced to pay the prevailing party’s attorney fees, they should avoid breaking the laws that govern police surveillance in the city.
Related Cases: Williams v. San Francisco