Quoting Phil Gyford

Simon Willison
simonwillison.net
2025-10-21 03:40:56
Since getting a modem at the start of the month, and hooking up to the Internet, I’ve spent about an hour every evening actually online (which I guess is costing me about £1 a night), and much of the days and early evenings fiddling about with things. It’s so complicated. All the hype never mentione...
Original Article

Since getting a modem at the start of the month, and hooking up to the Internet, I’ve spent about an hour every evening actually online (which I guess is costing me about £1 a night), and much of the days and early evenings fiddling about with things. It’s so complicated. All the hype never mentioned that. I guess journalists just have it all set up for them so they don’t have to worry too much about that side of things. It’s been a nightmare, but an enjoyable one, and in the end, satisfying.

Phil Gyford , Diary entry, Friday February 17th 1995 1.50 am

Quoting Bruce Schneier and Barath Raghavan

Simon Willison
simonwillison.net
2025-10-21 03:28:39
Prompt injection might be unsolvable in today’s LLMs. LLMs process token sequences, but no mechanism exists to mark token privileges. Every solution proposed introduces new injection vectors: Delimiter? Attackers include delimiters. Instruction hierarchy? Attackers claim priority. Separate models? D...
Original Article

Prompt injection might be unsolvable in today’s LLMs. LLMs process token sequences, but no mechanism exists to mark token privileges. Every solution proposed introduces new injection vectors: Delimiter? Attackers include delimiters. Instruction hierarchy? Attackers claim priority. Separate models? Double the attack surface. Security requires boundaries, but LLMs dissolve boundaries. [...]

Poisoned states generate poisoned outputs, which poison future states. Try to summarize the conversation history? The summary includes the injection. Clear the cache to remove the poison? Lose all context. Keep the cache for continuity? Keep the contamination. Stateful systems can’t forget attacks, and so memory becomes a liability. Adversaries can craft inputs that corrupt future outputs.

Bruce Schneier and Barath Raghavan , Agentic AI’s OODA Loop Problem

Argentine peso weakens to fresh low despite US interventions

Hacker News
www.ft.com
2025-10-21 02:35:30
Comments...
Original Article

US Politics 2025

Subscribe to unlock this article

Limited time offer. Save 40% on Standard Digital.

was $540 now $319 for your first year

WhiteHouseWatch

Explore more offers.

Trial

$1 for 4 weeks

Then $75 per month. Complete digital access to quality FT journalism on any device. Cancel or change your plan anytime during your trial.

Print + Premium Digital

was $755 now $719 for your first year

Get Premium & FT Weekend Print edition for the price of Premium. Complete digital access to quality analysis and expert insights, complemented with our award-winning Weekend Print edition.

Print

was $409 now $99 for your first year

FT newspaper delivered Monday-Saturday, plus FT Digital Edition delivered to your device Monday-Saturday.

Explore our full range of subscriptions.

For individuals

Discover all the plans currently available in your country

For multiple readers

Digital access for organisations. Includes exclusive features and content.

Why the FT?

See why over a million readers pay to read the Financial Times.

Find out why

Wikipedia says traffic is falling due to AI search summaries and social video

Hacker News
techcrunch.com
2025-10-21 02:29:03
Comments...
Original Article

Wikipedia is often described as the last good website on an internet increasingly filled with toxic social media and AI slop. But it seems the online encyclopedia is not completely immune to broader trends, with human page views falling 8% year-over-year, according to a new blog post from Marshall Miller of the Wikimedia Foundation.

The foundation works to distinguish between traffic from humans and bots, and Miller writes that the decline “over the past few months” was revealed after an update to Wikipedia’s bot-detection systems appeared to show that “much of the unusually high traffic for the period of May and June was coming from bots that were built to evade detection.”

Why is traffic falling? Miller points to “the impact of generative AI and social media on how people seek information,” particularly as “search engines are increasingly using generative AI to provide answers directly to searchers rather than linking to sites like ours” and as “younger generations are seeking information on social video platforms rather than the open web.” (Google has disputed the claim that AI summaries reduce traffic from search.)

Miller says the foundation welcomes “new ways for people to gain knowledge” and argues this doesn’t make Wikipedia any less important, since knowledge sourced from the encyclopedia is still reaching people even if they don’t visit the website. Wikipedia even experimented with AI summaries of its own, though it paused the effort after editors complained .

But this shift does present risks, particularly if people are becoming less aware of where their information actually comes from. As Miller puts it, “With fewer visits to Wikipedia, fewer volunteers may grow and enrich the content, and fewer individual donors may support this work.” (Some of those volunteers are truly remarkable, reportedly disarming a gunman at a Wikipedia editors’ conference on Friday.)

For that reason, he argues that AI, search, and social companies using content from Wikipedia “must encourage more visitors” to the website itself.

And he says Wikipedia is taking steps of its own — for example, by developing a new framework for attributing content from the encyclopedia. The organization also has two teams tasked with helping Wikipedia reach new readers, and it’s looking for volunteers to help.

Techcrunch event

San Francisco | October 27-29, 2025

Miller also encourages readers to “support content integrity and content creation” more broadly.

“When you search for information online, look for citations and click through to the original source material,” he writes. “Talk with the people you know about the importance of trusted, human curated knowledge, and help them understand that the content underlying generative AI was created by real people who deserve their support.”

Anthony Ha is TechCrunch’s weekend editor. Previously, he worked as a tech reporter at Adweek, a senior editor at VentureBeat, a local government reporter at the Hollister Free Lance, and vice president of content at a VC firm. He lives in New York City.

You can contact or verify outreach from Anthony by emailing anthony.ha@techcrunch.com .

View Bio

Judo (Jujutsu GUI)

Lobsters
judojj.com
2025-10-21 01:14:55
Comments...
Original Article

The full-featured GUI for JJ VCS

(works with Git repos too!)

Judo application screenshot

Operation Log

Restore your repo to any point in time with the Operation Log. Undo and redo any change.

Combined diffs

View combined diffs of multiple commits, or the diff between commits

Apply or revert hunks

Apply or revert hunks of any diff, files, commits, or even multiple commits at once

Custom revsets

Use custom revsets to select which commits are shown. Filter by descriptions, authors, ancestry, and more.

Drag and drop rebase

Drag and drop rebase

Advanced operations

Duplicate, split, abandon, revert, absorb, squash, and more

Bookmark management

Keep your bookmarks managed

Journalist Jelani Cobb on Race in Donald Trump’s America

Portside
portside.org
2025-10-21 01:12:35
Journalist Jelani Cobb on Race in Donald Trump’s America barry Mon, 10/20/2025 - 20:12 ...
Original Article

From the vantage point of the newsroom, the first story is almost never the full story,” writes Jelani Cobb . “You hear stray wisps of information, almost always the most inflammatory strands of a much bigger, more complicated set of circumstances.”

The dean of Columbia University’s Graduate School of Journalism in New York could be reflecting on the recent killing of the racist provocateur Charlie Kirk. In fact, he is thinking back to Trayvon Martin , a 17-year-old African American student from Florida who was shot dead by a white Latino neighbourhood watch volunteer in 2012.

“The Martin case – the nightmare specter of a lynching screaming across the void of history – ruined the mood of a nation that had, just a few years earlier, elected its first black president, and in a dizzying moment of self-congratulation, began to ponder on editorial pages whether the nation was now ‘post-racial’,” Cobb writes in the introduction to his book Three or More Is a Riot: Notes on How We Got Here: 2012-2025 .

Many of the essays in the collection were written contemporaneously, affording them the irony – sometimes bitter irony – of distance. Together they form a portrait of an era bookended by the killing of Martin and the return to power of Donald Trump, with frontline reporting from Ferguson and Minneapolis along the way. They make a compelling argument that everything is connected and nothing is inevitable about racial justice or democracy.

As Cobb chronicles across 437 pages, the 2013 acquittal of Martin’s killer, George Zimmerman, became a catalyst for conversations about racial profiling, gun laws and systemic racism, helping to inspire the formation of the Black Lives Matter movement.

Three years later, Dylann Roof , a 21-year-old white supremacist, attended a Bible study session at Emanuel African Methodist Episcopal Church in Charleston, South Carolina, then opened fire and killed nine Black parishioners. Cobb notes that Roof told police he had been “radicalised” by the aftermath of Martin’s killing and wanted to start a “race war”.

Speaking by phone from his office at Columbia, Cobb, 56, says: “It was a very upside-down version of the facts because he looked on Martin’s death and somehow took the reaction to it as a threat to white people and that was what set him on his path. Roof was this kind of precursor of the cause of white nationalism and white supremacy that becomes so prominent now.”

Then, in the pandemic-racked summer of 2020, came George Floyd , a 46-year-old African American man murdered by a white police officer who kneeled on his neck for almost nine minutes as Floyd said, “I can’t breathe,” more than 20 times. Black Lives Matter protesters took to the streets with demands to end police brutality, invest in Black communities and address systemic racism across various institutions.

Cobb, an author, historian and staff writer at the New Yorker magazine, continues: “It was the high tide. A lot of the organising, a lot of the kinds of thinking, the perspective and the work and the cultural kinds of representations – these things had begun eight years earlier with Trayvon Martin’s death.

“This was an excruciating, nearly nine-minute-long video of a person’s life being extinguished and it happened at a time when people had nothing to do but watch it. They weren’t able to go to work because people were in lockdown. All of those things made his death resonate in a way that it might not have otherwise. There had been egregious instances of Black people being killed prior to that and they hadn’t generated that kind of societal response.”

Cities such as Minneapolis, Seattle and Los Angeles reallocated portions of police budgets to community programmes; companies committed millions of dollars to racial-equity initiatives; for a time, discussions of systemic racism entered mainstream discourse. But not for the first time in US history, progress – or at least the perception of it – sowed the seeds of backlash.

“It also was a signal for people who are on the opposite side of this to start pushing in the opposite direction and that happened incredibly swiftly and with incredible consequences to such an extent that we are now in a more reactionary place than we were when George Floyd died in the first place,” Cobb says.

No one better embodies that reactionary spirit than Donald Trump , who rose to political prominence pushing conspiracy theories about Barack Obama’s birthplace and demonising immigrants as criminals and rapists. His second term has included a cabinet dominated by white people and a purge of diversity, equity and inclusion (DEI) initiatives.

Trump lost the presidential election a few months after Floyd died but returned to power last year, defeating a Black and south Asian challenger in Kamala Harris. According to Pew Research , Trump made important gains with Latino voters (51% Harris, 48% Trump) and won 15% of Black voters – up from 8% in 2020.

What does Cobb make of the notion that class now outweighs race in electoral politics? “One of the things that they did brilliantly was that typically politics has worked on the basis of: ‘What will you do for me?’” Cobb says. “That’s retail politics. That’s what you expect.

“The Trump campaign in 24 was much more contingent upon the question of: ‘What will you do to people who I don’t like?’ There were some Black men who thought their marginal position in society was a product of the advances that women made and that was something the Republican party said overtly, which is why I think their appeal was so masculinist.”

Trump and his allies weaponised prejudice against transgender people to attract socially and religiously conservative voters, including demographics they would otherwise hold in “contempt”. “I also think that we tended to overlook the question of the extent to which Joe Biden simply handing the nomination to Kamala Harris turned off a part of the electorate,” Cobb says.

He expresses frustration with the well-rehearsed argument that Democrats became too fixated on “woke” identity politics at the expense of economic populism: “They make it seem as if these groups created identity politics. Almost every group that’s in the Democratic fold was made into an identity group by the actions of people who were outside.

“If you were talking about African Americans, Black politics was created by segregation. White people said that they were going to act in their interest in order to prevent African Americans from having access. Women, through the call of feminism, came to address the fact that they were excluded from politics because men wanted more power. You could go through every single group .”

Yet it remains commonplace to talk about appealing to evangelical Christian voters or working-class non-college-educated voters, he says: “The presumption implicit in this is that all those people see the world in a particular way that is understandable or legible by their identity, and so there’s a one-sidedness to it. For the entirety of his political career, Trump has simply been a shrewd promulgator of white-identity politics.”

That trend has become supercharged in Trump’s second term. He has amplified the great replacement theory , sought to purge diversity, equity and inclusion initiatives and complained that museums over-emphasise slavery. His actions have built a permission structure for white nationalists who boast they now have a seat at the top table.

Many observers have also expressed dismay at Trump’s concentration of executive power and the speed and scale of his assault on democratic institutions . Cobb, however, is not surprised.

“It’s about what I expected, honestly,” he says, “because throughout the course of the 2024 campaign, Trump mainly campaigned on the promises of what he was going to do to get back at people. They’re using the power of the state to pursue personal and ideological grievances, which is what autocracy does.”

It is now fashionable on the left to bemoan the rise of US authoritarianism as a novel concept, a betrayal of constitutional ideals envied by the world. Cobb has a more complex take, suggesting that the US’s claim to moral primacy, rooted in the idea of exceptionalism, is based on a false premise.

He argues: “America has been autocratic previously. We just don’t think about it. It’s never been useful … to actually grapple with what America was, and America had no interest in grappling with these questions itself . Who has ever managed personal growth while constantly screaming to the world about how special and amazing they are?”

Cobb’s book maps an arc of the moral universe that is crooked and uneven, pointing out that, between the end of reconstruction and 1965, 11 states in the south effectively nullified the protections of the 13th, 14th and 15th amendments of the constitution, imposing Jim Crow laws, voter suppression and violence to disenfranchise Black citizens.

“The constitution gave Black people the right to vote but, if you voted, you’d be killed and this was a known fact,” he says. “This went on for decade after decade after decade. You can call that a lot of things. You can’t call that democracy. It was a kind of racial autocracy that extended in lots of different directions.”

He adds: “We should have been mindful that the country could always return to form in that way, that its commitment to democracy had been tenuous. That was why race has played such a central role in the dawning of this current autocratic moment. But it’s not the only dynamic.

“Immigration, which is tied to race in some ways, is another dynamic. The advances that women have made, the increasing acceptance and tolerance of people in the LGBTQ communities – all those things, combined with an economic tenuousness, have made it possible to just catalyse this resurgence of autocracy in the country .”

It is therefore hardly unexpected that business leaders and institutions would capitulate, as they have in the past, he says: “We might hope that they would react differently but it’s not a shock when they don’t. Go back to the McCarthy era. We see that in more instances than not, McCarthy and other similar kinds of red-baiting forces were able to exert their will on American institutions.

Cobb’s own employer has been caught in the maelstrom. In February, the Trump administration froze $400m in federal research grants and funding to Columbia, citing the university’s “failure to protect Jewish students from antisemitic harassment” during Gaza protests last year. Columbia has since announced it would comply with nearly all the administration’s demands and agreed to a $221m settlement, restoring most frozen funds but with ongoing oversight.

Cobb does not have much to add, partly for confidentiality reasons, though he does comment: “In life, I have tended to not grade harshly for exams that people should never have been required to take in the first place.”

He is unwavering, however, in his critique of Trump’s attack on the university sector: “What’s happening is people emulating Viktor Orbán [the leader of Hungary] to try to crush any independent centres of dissent and to utilise the full weight of the government to do it, and also to do it in hypocritical fashion.

“The cover story was that Columbia and other universities were being punished for their failure to uproot antisemitism on their campuses. But it’s difficult to understand how you punish an institution for being too lenient about antisemitism and the punishment is that you take away its ability to do cancer research, or you defund its ability to do research on the best medical protocols for sick children or to work on heart disease and all the things that were being done with the money that was taken from the university.

“In fact, what is being done is that we are criminalising the liberal or progressive ideas and centres that are tolerant of people having a diverse array of ideas or progressive ideas. The irony, of course, is that one of the things that happens in autocracy is the supreme amount of hypocrisy. They have an incredible tolerance for hypocrisy and so all these things are being done under the banner of protecting free speech .”

That hypocrisy has been on extravagant display again in the aftermath of Kirk’s killing by a lone gunman on a university campus in Utah. Trump and his allies have been quick to blame the “radical left” and “domestic terrorists” and threaten draconian action against those who criticise Kirk or celebrate his demise. The response is only likely to deepen the US’s political polarisation and threat of further violence.

Spencer Cox , the governor of Utah and a rare voice urging civil discourse, wondered whether thwhois was the end of a dark chapter of US history – or the beginning. What does Cobb think? “There’s a strong possibility that it will get worse before it gets better,” he says frankly.

“We’re at a point where we navigated the volatile moment of the 1950s, the 1960s, because we were able to build a social consensus around what we thought was decent and what we thought was right, and we’re now seeing that undone. Indecency has become a new hallmark.

“But we should take some solace in the fact that people have done the thing that we need to do now previously. The situation we’re in I don’t think is impossible.”

David Smith is the Guardian's Washington DC bureau chief. Click here for David's public key. Twitter @smithinamerica

The Guardian is globally renowned for its coverage of politics, the environment, science, social justice, sport and culture. Scroll less and understand more about the subjects you care about with the Guardian's brilliant email newsletters , free to your inbox.

Americans can't afford their cars any more and Wall Street is worried

Hacker News
www.telegraph.co.uk
2025-10-21 00:46:05
Comments...
Original Article

TELEGRAPH MEDIA GROUP HOLDINGS Ltd

Access Restricted

Thank you for your interest. Unauthorised access is prohibited. To access this content, you must have prior permission and a valid contract. Please contact our team at licensing@telegraph.co.uk to discuss licensing options.

If you are a Telegraph Subscriber, please contact customerservice@telegraph.co.uk , quoting the reference code on this page.

Reference ID: 0.ce24c317.1761006394.1ba099bd

This Week in People’s History, Oct 22–28, 2025

Portside
portside.org
2025-10-21 00:45:14
This Week in People’s History, Oct 22–28, 2025 Jonathan Bennett Mon, 10/20/2025 - 19:45 ...
Original Article

DuPont Has DDT to Sell and Won’t Let Anyone Stand in Its Way

OCTOBER 22 IS THE 80TH ANNIVERSARY of a warning about environmental damage that was ignored for far too long.

It was in 1945, only six years after DDT had been discovered to be a “miracle” insecticide, that biologists discovered that it was also highly toxic to beneficial insects such as pollinators, as well as to birds, fish, and amphibians. On this day a symposium of the National Audubon Society heard several presentations that included powerful evidence about the environmental damage DDT was beginning to cause.

The warning was not heeded for many years, thanks largely to a disinformation campaign that cost  chemical manufacturers and the agricultural industry many hundreds of thousands of dollars. As a result, not only was DDT not restricted, it was widely used in residential and institutional settings where humans, pets and other organisms were needlessly exposed to it.

DDT’s advocates not only ignored and denied evidence of harm, they used every possible political or legal means to prevent or delay government regulation. In 1962, when Rachel Carson’s book, Silent Spring, was published, including a detailed examination of DDT’s dangers, the chemical industry and its allies attacked Carson's credibility, saying she was "hysterical" and a "fanatic". https://www.dollarsandsense.org/industry-attacks-on-dissent-from-rachel-carson-to-oprah

Women Would Have the Right to Vote If Only They Could Persuade Half the Male Voters

OCTOBER 23 IS THE 110TH ANNIVERSARY of a red-letter day for the campaign to make it possible for U.S. women to vote.

Woman suffrage advocates had succeeded in putting constitutional amendments that would give women the right to vote in three of the country’s largest states, New York, Pennsylvania and Massachusetts. Ten days before election day in 1915, woman suffrage organizations staged the largest-ever equal-rights demonstration, a parade of some 30,000 people marching three miles up Manhattan’s Fifth Avenue while being cheered all along the route by a crowd of spectators that was estimated to total a quarter million.

The marchers were highly organized, representing groups of women from each borough, from cities and towns all over New York, and from other states.  There were also groups organized by profession, including teachers, medical workers, religious workers, garment workers and culinary workers.

Sadly, the enormously impressive demonstration did not have its desired effect. Each of the three state ballot measures was defeated by a large margin by the all-male electorate on November 2. https://feminist.org/news/today-in-1915-over-250000-suffragists-took-over-fifth-avenue/

Icelandic Women Defeat Gender Discrimination with a General Strike

OCTOBER 24 IS THE 50TH ANNIVERSARY of a hugely successful 1-day general strike by Icelandic women, who were demanding an end to the country’s 40 percent pay gap.

An estimated 90 percent of women in Iceland refused to perform both paid and unpaid labor for the day in 1975, with the result that most offices, factories, schools, hotels and other businesses could not function, and many men were prevented from working by the lack of transportation and child-care. It was also a day when most meals were eaten cold if they were eaten at all.

Some 25,000 (mostly) women (more than 10 per cent of the nation’s population) brought traffic to a complete standstill by rallying in Iceland’s capital. Seven months later Iceland’s parliament passed the Gender  Equality Act, which sharply reduced gender discrimination in workplaces and schools. It was not everything the strikers demanded, but it was a good start. https://libcom.org/article/iceland-womens-strike-1975

Many Million Killed by Lung Cancer, and the End Is Not in Sight

OCTOBER 25 IS THE 85TH ANNIVERSARY of a day when a brave doctor tried to sound the alarm about the growing epidemic of lung cancer and the tobacco industry dug in its heels to protect its profits.

On that day in 1940, at the annual meeting of the American College of Surgeons in Chicago, Dr. Alton Ochsner delivered a research paper that he summarized this way: “Smoking cigarettes is a cause of cancer of the lung.” Not a possible cause or suspected cause, but a cause.

Ochsner, who had been studying lung cancer for two decades, had discovered the causal connection between the disease and cigarette smoking by carefully observing his patients’ health and their smoking habits. In many cases, he had only been able to definitively diagnose lung cancer by means of autopsy. He was not the first to suspect that smoking was not a healthy habit, but he was the first scientist to prove the link between smoking and cancer.

His reward for what ought to have been considered a major contribution to the cause of public health was to become the target of bitter vituperation and ridicule by professional colleagues, many of whom were either financial beneficiaries of the tobacco industry or addicted to tobacco themselves.

The tobacco industry, which spent hundreds of millions of dollars to “prove” that Ochsner and those who agreed with him were wrong, was completely successful until 1964, when the accuracy of Ochsner’s discovery was certified by the Surgeon General of the U.S.

It is conservatively estimated that more than 40 million people in the US and Canada have died from smoking-related disease, including lung cancer, since 1970. https://pmc.ncbi.nlm.nih.gov/articles/PMC12061630/

Happy Birthday, Mike Doonesbury, and Many More!

OCTOBER 26 IS THE 55TH ANNIVERSARY of the very first appearance of Doonesbury, by master satirist Gary Trudeau.

Before the feature was five years old, it became the first daily “comic” strip to be awarded the Pulitzer Prize for Editorial Cartooning (which was re-branded Pulitzer Prize for Illustrated Reporting and Commentary in 2022). When Trudeau was honored, he joined an elite crew that included Rube Goldberg, Herblock, Bill Mauldin, Paul Conrad and Patrick Oliphant.

Here’s to Mike Doonesbury, 55 years young and still going strong! (But only on Sundays, now.) https://www.npr.org/2010/10/26/130809883/doonesbury-decades-a-timeline-of-turning-points

For more People's History, visit
https://www.facebook.com/jonathan.bennett.7771/

Turn off Cursor, turn on your mind

Lobsters
allvpv.org
2025-10-21 00:41:01
Comments...
Original Article

Turn off Cursor, turn on your mind

A case against agentic coding

Photo of VT320 terminal in ambient surroundings

Two approaches to using AI

I believe that AI can make you smarter . It can help you grow faster, especially when it comes to gaining new knowledge or developing new skills. It is the perfect learning companion and a great language tutor, and it’s ability to browse the web and surface relevant links far exceeds my own. Heck, I’ve even used it – albeit very modestly – to polish this draft so it’s bearable to the native speakers.

However, I would call this way of using AI – for the sake of this essay – approach one.

  • Use AI to learn faster and understand the whole system better.

That’s not the main way people – especially software engineers – use or are expected to use AI. Let’s call the other emerging paradigm approach two.

  • Using AI to tackle problems for you (so, at the end of the day, you neither learn much nor deepen your understanding of the system).

I think that approach 2 is the most straightforward – or let’s say, the laziest – way to use AI tools, especially agentic coding tools like Cursor or Claude Code.

And I think it’s a trap. Unless AI systems can completely replace us, we shouldn’t let ourselves get lazy – even for promised efficiency gains .

Not only coding

Prompt: asking AI to rephrase the essay

There’s a spectrum when it comes to outsourcing tasks to AI. For example, I could:

  1. Ask AI to draft this essay based on the title.
  2. Ask AI to rephrase my essay to make it ChatGPT-esque.
  3. Ask AI to fix the draft’s grammar.
  4. Ask AI to list grammatical or stylistic issues in this essay and suggest improvements.

Of these four choices, the first two are detrimental to my essay; option 3 fixes it but doesn’t help me learn; and option 4 not only helps the essay but also makes me a slightly better writer in English.

I believe the same logic holds for coding. Because coding is an ongoing learning process, even those in senior positions shouldn’t think they have it all figured out . Let me elucidate this point.

What makes you a faster, more skilled programmer?

Let’s start with something we can probably all agree on.

A new hire generally won’t perform at the same pace as someone who is fully onboarded into the project and has been working in it for some time.

(Holds true when comparing people of similar skills and drive).

Let’s say you are new to the project: you pick up a task and it takes you Y1 to complete. Some time passes, you implemented a huge chunk of the project. Then you take a different task of the same complexity as the first, and it takes you Y2.

If you’re like me, you’ll agree that Y2 is smaller than Y1.

You see, programming a large project is not a repetitive task but an ongoing learning process .

When you work in a large codebase, you are essentially learning, and building a mind map of how the system fits together.

I’m sure you can think of a situation in which you develped a feature very quickly – maybe in a day or two. And another, in which more trivial problem took the same amount of time, or even longer.

For me, the differentiator was knowledge of the project and the related technologies.

Learning versus agentic coding

The paragraph above isn’t controversial – I’d say it’s something most sensible people would agree on. What might be more controversial is the second premise:

Agentic coding reduces the learning experience substantially.

I won’t justify that with anecdotes (though I could share one in which ‘vibe coding’ led to a loss of control and a lot of debugging).

Instead, I’ll appeal to common sense: learning takes time .

Learning takes time

You’ve probably heard claims that, in the dawn of AI coding, a team of three engineers working for six months can be replaced by a single engineer over a weekend with the help of an AI agent. Many find those claims exaggerated, but let’s assume for the sake of argument that they’re true.

My question is this: who, do you think, will learn more about the underlying system? Who will better sharpen their coding and debugging skills in a new project with new technologies – the person who spent three days or the one who spent six months on it?

Slowly losing control

How many times, in the middle of a coding session, have you deleted your code and reworked it substantially? I bet it took you some time to figure that out. Someone might say that, with AI coding, you can spin up two prototypes in 15 minutes and discard them easily, but I’d argue that you often can’t competently and thoroughly evaluate the AI-generated code .

As the system grows denser and more complicated, and as AI compresses the time you spend thinking about it, you slowly lose control .

If you’re like me, you’ve probably had the unpleasant gut feeling of losing control when coding with an AI agent. At the end of the day, I am responsible for the code I ship . I can’t say that an AI agent messed something up unless the agent is also accountable for it. So I think: better to implement this myself, and use AI to suggest improvements or explore alternatives.

Epilogue

Maybe we’ll reach a day when AI agents join our daily standups, take tasks, ship them, and assume full responsibility. Maybe, at some point, they’ll take over coding entirely, laying us all off.

But until then, I won’t entrust them with my skills, my learning, or my responsibilities. The moment they break down, I have to take over – so I need to know exactly what’s going on in the system.

At the same time, I’m all for using AI to learn, explore alternatives, and gain efficiency by means other than outsourcing coding to AI.

Old Computer Challenge – Modern Web for the ZX Spectrum

Hacker News
0x00.cl
2025-10-21 00:40:25
Comments...
Original Article

2025/10/19

Table of contents


#

Last year I participated in the OCC challenge and between work and personal life I totally forgot about it, luckily I saw a post online of someone doing the challenge and remembered about it. I didn’t want to miss this year challenge as it was fun learning something new last year, so hopefully its not too late.

This years challenge it was a DIY, you could create your own OCC challenge. So I thought it’d be fun to try to recreate a website and navigation for the ZX Spectrum using BASIC and the limited graphics capabilities of the computer.

Hardware #

Well, in this case the hardware is the ZX Spectrum, though I must say that I used an emulator (Fuse) to test and run my code.

Limitations #

The ZX Spectrum image resolution is 256x192 pixels , so the space is very limited. The colour palette is made up of 8 colors , with a “brighter” variation.

With this in mind I had to design a website for the ZX Spectrum and how you’d navigate with it.

Challenge #

So given the hardware and limitations I thought that I’d first do Google, since they have a simple design and is I’d say well known by everyone.

Google #

The image below, is what Googles homepage would look like.

Google homepage in the ZX Spectrum

You’d access their services listed by pressing the first letter of that service, in case there is more than 1 with the same letter you’d use another letter or simply add a second page of services.

In this case you can access Search by pressing “S”. It will simply prompt for your search and in this case I searched for “How to run LLM on a microprocessor” and you’d navigate through the different results by going pressing “w” (up) or “s” (down).

Google search results in the ZX Spectrum

Hacker news #

Hacker news is text heavy content, so it makes it “easy” to design it for the ZX Spectrum as its graphics capabilities are kinda limited.

Hacker news homepage in the ZX Spectrum

As you can see, you’ll be able to see your username and the amount of points you have in the top bar and in this case I decided that navigation would be done “less” interactively and you’ll access posts by simply selecting the number (1 to 6). If you want to go to the second page of posts you can press “M” and if you want to submit you press “S”.

So if we select the first post by pressing 1, we’ll load more details about the post and the comment section.

Hacker news detail post in the ZX Spectrum

You’ll see the title, user that posted it, time and the comment section with the amount of comments, in this case 42. The problem with the comment section is that some comment threads can go deep and have different branches as multiple users will reply. So in this case what I thought was to simply navigate the top level comments with “M” for the next comment and “B” to go back to the previous top level comment. I guess you could add a “[R]eply” option, to reply to the comment you are seeing on screen and navigating through the comment replies, you’d press something like “C” and you’ll enter a page of the child comments of the parent comment. The only problem with this is that because the screen is so tiny and limited, that most of the time you’ll only be able to see 1 comment at a time and when posts have 300 comments, navigating them is going to be hard.

Hacker news submit new post in the ZX Spectrum

And this is what submitting a new link would look like, you’d simply “edit” the different fields and then “S"ubmit

Conclusion #

Personally I had fun doing this, I didn’t have to spend a lot of time researching before doing that as I had already done a lot of that work in last year challenge, so I could focus on what I wanted to do. You can check the source code and try it yourself in my repository https://codeberg.org/0x00cl/Web-ZX

Appendix #

Here are videos of the interactions in the ZX Spectrum

Matthew Garrett: Where are we on XChat security?

PlanetDebian
mjg59.dreamwidth.org
2025-10-21 00:36:19
AWS had an outage today and Signal was unavailable for some users for a while. This has confused some people, including Elon Musk, who are concerned that having a dependency on AWS means that Signal could somehow be compromised by anyone with sufficient influence over AWS (it can't). Which means we'...
Original Article

[personal profile] mjg59

AWS had an outage today and Signal was unavailable for some users for a while. This has confused some people, including Elon Musk, who are concerned that having a dependency on AWS means that Signal could somehow be compromised by anyone with sufficient influence over AWS (it can't). Which means we're back to the richest man in the world recommending his own "X Chat", saying The messages are fully encrypted with no advertising hooks or strange “AWS dependencies” such that I can’t read your messages even if someone put a gun to my head .

Elon is either uninformed about his own product, lying, or both.

As I wrote back in June , X Chat genuinely end-to-end encrypted, but ownership of the keys is complicated. The encryption key is stored using the Juicebox protocol, sharded between multiple backends. Two of these are asserted to be HSM backed - a discussion of the commissioning ceremony was recently posted here . I have not watched the almost 7 hours of video to verify that this was performed correctly, and I also haven't been able to verify that the public keys included in the post were the keys generated during the ceremony, although that may be down to me just not finding the appropriate point in the video (sorry, Twitter's video hosting doesn't appear to have any skip feature and would frequently just sit spinning if I tried to seek to far and I should probably just download them and figure it out but I'm not doing that now). With enough effort it would probably also have been possible to fake the entire thing - I have no reason to believe that this has happened, but it's not externally verifiable.

But let's assume these published public keys are legitimately the ones used in the HSM Juicebox realms[1] and that everything was done correctly. Does that prevent Elon from obtaining your key and decrypting your messages? No.

On startup, the X Chat client makes an API call called GetPublicKeysResult, and the public keys of the realms are returned. Right now when I make that call I get the public keys listed above, so there's at least some indication that I'm going to be communicating with actual HSMs. But what if that API call returned different keys? Could Elon stick a proxy in front of the HSMs and grab a cleartext portion of the key shards? Yes, he absolutely could, and then he'd be able to decrypt your messages.

(I will accept that there is a plausible argument that Elon is telling the truth in that even if you held a gun to his head he's not smart enough to be able to do this himself, but that'd be true even if there were no security whatsoever, so it still says nothing about the security of his product)

The solution to this is remote attestation - a process where the device you're speaking to proves its identity to you. In theory the endpoint could attest that it's an HSM running this specific code, and we could look at the Juicebox repo and verify that it's that code and hasn't been tampered with, and then we'd know that our communication channel was secure. Elon hasn't done that, despite it being table stakes for this sort of thing (Signal uses remote attestation to verify the enclave code used for private contact discovery, for instance, which ensures that the client will refuse to hand over any data until it's verified the identity and state of the enclave). There's no excuse whatsoever to build a new end-to-end encrypted messenger which relies on a network service for security without providing a trustworthy mechanism to verify you're speaking to the real service.

We know how to do this properly. We have done for years. Launching without it is unforgivable.

[1] There are three Juicebox realms overall, one of which doesn't appear to use HSMs, but you need at least two in order to obtain the key so at least part of the key will always be held in HSMs

The Politics of Double Deadlock

OrganizingUp
convergencemag.com
2025-10-21 00:26:55
Max Elbaum’s “A Path to Pushing…...

Seven Million Turn Out for “No Kings” Protests Nationwide. Next Up, Massive Disruptions Backed by Unions?

Portside
portside.org
2025-10-21 00:22:57
Seven Million Turn Out for “No Kings” Protests Nationwide. Next Up, Massive Disruptions Backed by Unions? Stephanie Mon, 10/20/2025 - 19:22 ...
Original Article

We are on an inexorable trajectory of escalation. As President Donald Trump’s authoritarian attacks are increasing, working people are defying his abuses of power, refusing to live under the yoke of a wannabe strongman who seeks to bend society to the dictatorship of the privately owned Trump Organization . It’s not simply that Trump has the typical bossman’s bullying streak. He’s stranded the country somewhere between the bedlam of his whims and the corruption of a modern-day Gomorrah, down-zoned to the dregs of a Florida swamp. The people have had enough from the madman don in Mar-a-Lago.

Millions of people took to the streets nationwide October 18 as part of a pro-democracy movement rising up in peaceful resistance against President Trump’s authoritarian rule. Under the banner of ​“ No Kings ,” more than 2,700 demonstrations were held across every state, from big cities like Chicago ( population 2.7 million) to small towns like Bryson City, N.C. (population 1,500).

In all, organizers estimate nearly 7 million people participated. The demonstrations rank among the largest single-day protests in U.S. history, surpassing the turnout of more than 5 million for those held June 14 , and the January 2017 Women’s Marches, whose crowd totals were estimated at 3.3 to 5.2 million.

New from June , when union banners and delegations were largely absent (except for highly motivated rank-and-file members donning their union-branded shirts and a few outlier union locals), was the effort to join forces with organized labor.

That’s why the participation of unions in this weekend’s rallies isn’t just yet another Saturday march that leads nowhere. It represents the building of a united front with durable and democratic institutional networks, capable of potentially rejoining the world of politics and work — a fusionism that creates workplace leaders to grind the gears of the boss’s machine to a halt.

But despite the overtures to work in coalition, coordination between unions and liberal groups still proved challenging — even as the coordinated economic disruption that unions are best equipped to organize becomes increasingly necessary against an authoritarian threat.

NO KINGS DAY, OCTOBER 18

The main organizers of October’s weekend demonstration were progressive groups like Indivisible , MoveOn and the 50501 movement , alongside more than 200 other national organizations and thousands of local groups.

“Authoritarians want us to believe resistance is futile, but every person who turned out today proved the opposite,” said Ezra Levin and Leah Greenberg, co-founders of Indivisible, in a statement. ​“This movement isn’t about a single protest; it’s about a growing chorus of Americans who refuse to be ruled.”

In New York City, people returned to the streets wearing tricornered hats, colonial wigs, and makeup, holding up handmade signs decrying the country’s slide into, by turns, fascism, tyranny, dictatorship, autocracy, and monarchy. The same irreverent air of the carnivalesque was in full display across the country.

Adults dressed up in inflatable costumes was another common sight, a whimsical rebuke of the Trump regime’s efforts to smear protestors as domestic terrorists. In Alabama, cops arrested a 53-year-old woman for ​“lewd conduct” because she dressed as an inflatable penis and held a sign that read ​“No Dick-tator.” The inflatables have emerged as a symbol of resistance through mockery since, at a protest against Immigration and Customs Enforcement ( ICE) in Portland, Ore., on October 2, federal agents pepper-sprayed a protester clad in an inflatable frog costume directly through his air vent.

“I’ve definitely had spicier tamales,” the frog-costumed protester said afterward.

Less prominent, compared with the June protests, were calls to take on the billionaire power grab. Palestine solidarity, except for a number of banners in the labor march, was largely absent, as well as remarks about a potential U.S. military invasion of Venezuela.

But the protest’s main themes were also more diverse than the June mobilization — especially the labor march — involving a defense of democracy, immigrants, and health care. The tenor of the messages were mainly denunciations of Trump sending masked federal agents into America’s cities, the Republican shutdown of the country’s government, and the health-care cuts to public programs like Medicaid.

ENTER LABOR

The rally organizers also partnered with the Service Employees International Union (SEIU), National Nurses United and the American Federation of Teachers, as well as labor coalitions May Day Strong and Labor for Democracy , to drive turnout of union workers and their communities. In New York, a coalition of the city’s biggest unions turned out thousands, including the Communications Workers of America, 1199SEIU, 32BJ SEIU, the United Federation of Teachers and the Professional Staff Congress.

Vishally Persaud is a certified nursing assistant at Staten Island’s Richmond University Medical Center and a member of 1199SEIU. She came to the No Kings labor march after a 16-hour shift in the intensive care unit. ​“We’re here to support our health-care workers,” Persaud said. ​“We want to keep fighting so that we don’t get hospitals shut down, nursing homes shut down and Medicaid cut.” She also works in home care and worries whether the health-care cuts will prevent her from working the 130 hours a month necessary to maintain Medicaid benefits.

Since the COVID pandemic, the understaffing crisis has only worsened the working conditions of health-care workers. ​“We can’t take it anymore,” Persaud said. ​“We’re all very tired.”

Jamie Partridge, a retired member of the National Association of Letter Carriers Local 82, said the labor feeder march in Portland brought about 1,000 people, joining about 40,000 No Kings protesters. SEIU served as the main organizer of the labor contingent and featured speakers from the Oregon AFL-CIO and the Federal Unionists Network, the scrappy group leading the response to the Trump administration’s attacks on federal workplaces.

“While exciting to see unions step up, the number of union signs and banners were few,” said Partridge. Among the unions and labor groups present in Portland were Oregon AFL-CIO, SEIU, Oregon Nurses Association, AFSCME 88, and the Oregon Federation of Nurses and Health Professionals, which just ended a 5-day strike against Kaiser.

Chicago’s protests were massive, drawing 100,000 people. Isaac Silver, a self-employed repair person who also runs an archive of union buttons , said the march filled 6 lanes of traffic for many blocks. (Disclaimer: Silver has been hired by In These Times for a variety of projects). ​“The march looked much more like a cross-section of the city than the Hands Off rally in April, although it would be great to see a more organized presence by unions,” Silver said via text message. ​“Most signs were homemade, and like the chants, foregrounded defense of immigrants, trans people, and Palestinians alongside the official messaging opposing the billionaires’ attacks on democracy.”

In New York City, where more than 100,000 marched across all five boroughs, a dozen unions organized a feeder march up Sixth Avenue that ended in Union Square in Manhattan. But the main march, starting from 47 th Street down Seventh Avenue, never linked up with the union contingent. Workers milled about waiting until the crowd thinned, as many just went home with no clear next steps.

I bolted to Seventh Avenue, where I saw a huge yellow banner that read, ​“Our March is Over: Keep Resisting. NoK​ingsNYC​.com.”

“That’s it?” an incredulous man said to the No Kings demonstration marshal at the intersection of 14 th Street and Seventh Avenue. ​“Aren’t there going to be any speakers?” Through a blowhorn, the marshal repeated: ​“The march is over. Go to brunch!”

BUILDING POWER

The ​“We marched, now what?” attitude is emblematic of the limitations of loosely organized protests without clear next steps for demonstrators to plug into more organizing opportunities. No Kings has knitted together a broad coalition against authoritarianism by harkening back to the country’s revolutionary founding in resistance to the rule of autocratic kings.

But how can it build the organizing infrastructure necessary to stay in touch with these millions of people, involve them enough for them to develop as leaders in their own right, and plug them into efforts that bolster their confidence and understanding of building power?

Fortunately, the liberal groups making up the No Kings coalition already have key partners who can help shift away from the mobilizing model into an organizing approach geared toward disruptive escalations.

Those partners, of course, are organized labor.

Rebecca Givan, associate professor of labor studies and president of the Rutgers chapter of the American Association of University Professors and American Federation of Teachers (AAUP-AFT), points to the increased participation of unions in this round of No Kings protests as part of the maturation of the resistance to Trump. The AAUP-AFT marched with other union contingents in New York City.

“Local unions are turning out their members, and national unions are signing on as supporters,” Givan says. ​“If No Kings marches are to turn into meaningful change, they will need organizations with roots in neighborhoods and workplaces. Organized labor has a key role to play in translating single-day actions into a sustained movement for change.”

Faye Guenther, president of United Food Commercial Workers Local 3000, notes how, during Trump’s first term, the country saw one of the largest marches in history around women’s rights, then the largest racial-justice marches after the murder of George Floyd. She says it’s not too difficult to feel as though every advance has been beaten back, as women have fewer rights today and federal agents are snatching people off the streets with greater impunity than ever. Federal budgets for repression have increased, while funding is being slashed for health-care and food assistance. Protests are important, of course, but, Guenther wonders, how do we make them work?

“The No Kings protests help work this country’s mass-action muscle, and we’re going to need mass action to disrupt business as usual,” Guenther goes on. ​“But without strong, permanent infrastructure in place to maintain and build on our victory over this authoritarian regime, then we’ll just be back in the streets next year blowing off steam. That’s why the labor movement is central to this moment — we know how to build durable, democratic power structures.”

As president of the Baltimore Teachers Union, Diamonté Brown thinks a lot about how to build — from the bottom up, rather than from the top down — democratic structures in which members are in command of their union. She bristles at the assumption that unions are already democratic; they need to be transformed to truly become so. She also doesn’t embrace the whole spiel of how what’s needed is a return to normalcy, but an advance for a broader transformation of longstanding systemic issues.

“What labor can do is take it a step further and start talking about, what are the systemic structures, processes, that are creating these situations, over and over again?” Brown says. ​“Whoever we’re fighting, whoever our target may be at any given time, it seems like they’re always on offense, and we’re always on defense.

The No Kings Day mobilization is also risk-averse, being part and parcel of a reactive-activism pattern that prizes the speed of mobilization over everything else. ​“We’re not willing to put everything on hold to make sure all of our members turn out for No Kings Day, because that’s just not what’s been bubbling up in our membership,” Brown adds. ​“What’s been bubbling up in our membership is that people feel like there’s a retaliatory culture within our school system. We have a 45% chronic absenteeism rate and no school-bus fleet for our students.”

“Unfortunately, no matter who the president is, no matter what type of government we’ve had in my 43 years of living, none of those things have changed where I live. The things that we’re seeing now happen to Black people so much that I think sometimes I’ve normalized tragedy. I’ve normalized oppression. I’ve normalized violence just because I’m Black. So seeing it happen on a bigger scale, it’s something I’m reckoning with.”

“Yes, we want to participate in protests, and we want to show our strength and our unity and our solidarity by making sure we are participants in the No Kings rally, but we just want to highlight that there is so much more that needs to be done, and we have to start connecting the dots. Does this action lead us to the outcome we want? Have we even determined what outcome we want? I haven’t heard that.”

STRIKING QUESTIONS

Trump isn’t an anomaly in U.S. history. He’s like a piece of shit traveling down sewer pipes and becoming denser as he’s agglomerated sediments of other excrement. That makes all the special-pleading about democracy sound like a load of crap. ​“All I’ve heard was we’re fighting for democracy,” says Brown. ​“We don’t want an authoritarian government. But what do we want? What is it that we want that is important?”

To answer — and to win back members drifting away in search of other political homes, members who often don’t feel their unions are the means to transform their workplace and society — the labor movement needs to make union democracy the cornerstone of what it means to be a trade unionist.

Brown gives the example of bringing questions back to her members from Randi Weingarten, president of the American Federation of Teachers, and how ​“it never turns out to be what Randi wants when we ask our membership, when we do it from the bottom up, and that’s when we’re always met with that tension — because we asked our members. And I think that should be scaled across all locals, state feds, and even the national and international unions.”

“And that means we have to go slow. You can go fast alone, but you can go further together, and we’re not moving together.”

In other words, if unions are to organize massive disruptive actions with majority participation from their members and the broader community, then they need to find a way to bring members along with them and loosen the reins for them to lead.

That’s what makes unions more than one more letterhead-coalition partner in a demonstration.

“Labor is the essential part of society,” says Dominic Renda, a Verizon call center worker with CWA Local 1101. ​“So without labor, nothing moves, and we could really put our foot down. We could stop the Trump administration’s worst excesses. If a protest of this size doesn’t succeed, you know, maybe something bigger will: strike actions.”

Unions offer a path to disruptive escalations that continue to grow the movement while remaining nonviolent and oriented to the broad mainstream coalition offering it legitimacy. Jessica Tang, president of the American Federation of Teachers Massachusetts, wants to scale up the resistance by using official union channels to build the institutional infrastructure. She says the union is coordinating efforts across the state, including resource fairs and school safety watches, to sustain activism and resistance.

“I do think that there is a purpose for public actions that bring visibility to an issue, and do let others, who may be more quiet in their criticism at home or thinking that they’re alone, realize that they’re not, actually — and that the majority of people are against the current policies that are harming immigrants, that are hurting the economy, that are stripping away due process rights, rule of law and eroding democracy,” Tang says.

Tang cites the research of political scientists Erica Chenoweth and Maria Stephan, who argue that just 3.5% of a population taking to the streets in peaceful resistance is enough to block an authoritarian takeover. In the United States, that would mean about 12 million people. The 3.5% number isn’t necessarily an automatic prediction, but it reflects the kind of sustained, nonviolent mass mobilization that has worked in the past.

Tang also notes how the AFT passed a resolution to support this weekend’s large-scale protests, including supporting efforts to do more political education within unions and engage members in nonviolent actions. She mentions the possibility of rolling strikes.

The historian Nelson Lichtenstein offers the possibility of teachers’ unions saying that schools have to be shut down because it’s unsafe to hold classes with armed federal agents roaming the streets. ​“Imagine if the Chicago Teachers Unions said it’s too dangerous to have the kids in school,” Lichtenstein says. ​“They’re been picked up by ICE, so we’re urging all our teachers and students to stay away for the rest of the week.” Short of that action, No Kings organizers can also call the next mass demonstrations for a workday when there will be more potential for disruption.

In Chicago’s lakefront Grant Park, Mayor Brandon Johnson ended his No Kings Day remarks with a call for a national general strike. ​“If my ancestors, as slaves, can lead the greatest general strike in the history of this country, taking it to the ultra-rich and big corporations, we can do it too!” Johnson said . Ten days earlier, Trump demanded he be jailed in a post on Truth Social . In his reference to a general strike, Johnson was citing an argument by W.E.B. Du Bois — advanced in his book, Black Reconstruction in America 1860-1880 —which made the case that enslaved workers had freed themselves in a great general strike that won the Civil War.

“General strikes aren’t planned three years in advance,” Lichtenstein says. They require a galvanizing occasion, similar to the example he offered about the dangerous environment created by the federal agents in Chicago. But unions aren’t there yet.

“The general strike discussion, while much needed in these authoritarian times, was being put forward not by the unions but by the youth climate movement, Sunrise, with the slogan, ​‘Visualize a general strike,’” said retired letter carrier Jamie Partidge. ​“Despite the most brutal assault on unions in my fifty years in this movement, we have yet to rise to the challenge.”

But the term ​“general strike” is on more lips than usual. Indivisible’s Ezra Levin told Bloomberg that, in the group’s weekly planning calls, participants often ask about organizing a general strike, perhaps a growing recognition that economic leverage is necessary. ​“It is possible to execute on that, but it requires a much greater level of planning and infrastructure” compared with weekend rallies, he said. But ​“I suspect it is where we are going.”

IN SOLIDARITY

In his second term, Trump’s authoritarian gyre is spinning faster and wider. But demonstrators and organizers say, to defeat the authoritarian consolidation of power in Trump’s hands and those of a cabal of tech oligarchs , they are building a stronger and more resilient fightback against the escalating attacks on working people, deepening their networks and bolstering their organizations.

Unions are key not only to standing up to Trump, but to offering an alternative economic and social agenda to address what attracts people to Trump’s brand of authoritarian power.

In response to criticisms about a narrowly defensive approach that emphasizes resisting authoritarianism and defending democracy, Tang says: ​“This is actually about fighting for working people and a working people’s agenda over a billionaire’s agenda, which is what we have now, and making the connections between how this is related to making lives better for working people.”

The largest health-care workers’ union in New York and the country, 1199SEIU — representing nearly 450,000 workers — decried the slashing of health-care funding to give tax handouts to billionaires while also pouring money into ICE raids. A University of Pennsylvania study released in June found that health-care cuts could cause the deaths of 51,000 Americans annually.

Trump isn’t just instituting the largest upward redistribution of wealth in American history for the benefit of his billionaire pals; he’s doubling down on the economic and political repression of the working class by cutting food assistance and health insurance for working people; by eroding collective-bargaining rights for millions; by sending armed federal agents with military-style weaponry to racially profile and abduct workers and children from their homes, workplaces, parks, and schools; and by detaining more than 170 who are U.S. citizens , but of the wrong color.

At the same time, Trump is developing a refugee policy that favors Europeans; murdering Venezuelan civilians in the Caribbean Sea by bombing alleged drug-smuggling boats; starving millions through cuts to humanitarian aid; and bailing out Argentina’s far-right libertarian president with $40 billion (almost the same amount he’s cut from food aid for the entire world). He’s not just tossing the United States, but the whole planet, into a vortex of repression and misery.

Trump isn’t unprecedented in the annals of American history, which is drenched in the blood and sweat of enslaved people, indigenous slaughter, and ethnic cleansing — but he does mark something uniquely malignant in the modern era.

MOVING CAPITAL

How best to make sense of Trump’s authoritarian escalation?

Roman emperor Caesar’s rampage gave us the word Rubicon to describe a point of no return. Trump’s autocratic regime hasn’t given us any neologism yet, but his escalating authoritarianism has called back into popular usage old German words like blitzkrieg, meaning lighting war, to describe his administration’s dizzying assaults on immigrants, LGBTQ people, women (especially Black women ), democratic norms, labor rights, and public programs.

Among the panoply of signs people held aloft at nationwide protests, a set of words stood out from the bunch: king, tyrant, fascist.

These words indicate a broad but inchoate understanding of a dictatorial political regime tightening its hold on society. There’s also an economic dynamic at play, upending old shibboleths about the capitalist management of the economy and replacing them with an older model of brute political force that appears to subordinate the economic structures of capitalism to the whims of a strongman. Call it authoritarianism, corporatism, or fascism, but we are facing a calamitous onslaught of reaction that is reshaping the world.

At root, it’s about capitalism.

Which is how the involvement of labor unions in these broad liberal coalitions becomes pivotal.

In a review of Melinda Cooper’s book Counterrevolution: Extravagance and Austerity in Public Finance, historian Kim Phillips-Fein wrote in The Nation , ​“Trump’s politics and his appeal are not only inspired by far-right ideologies, culture-war passions, age-old xenophobic prejudices, and a long-standing Republican animus toward the welfare state. They emerge out of a capitalist order that has ceased to be constrained by any of the institutional, intellectual, or professional limits that defined corporate capitalism in an earlier era.”

If one thing could have been even more prominent at the No Kings protests, it was a recognition of how hollowed out democracy becomes when billionaires rule under capitalism. Forget Trump’s boorish personality antics and hedonistic relishing in spectacles of cruelty; take note of corporate ownership structures — namely, privately owned firms.

As Phillips-Fein continued: ​“E-mails demanding that workers justify their jobs or face layoffs; scolding and humiliation of underlings who dare to disagree; drastic cuts to programs just because the boss doesn’t like them; arbitrarily lobbing tariffs anywhere he pleases; insisting on payback for those perceived as enemies — these signature Trumpian actions all echo the practices of business owners in their private fiefdoms, who do not have to answer to shareholders or, for that matter, anybody else.”

But if they don’t answer to moral suasion, they do answer to massive and coordinated disruptive actions to stop business as usual.

​“Successful and dynamic social movements create a gravitational field that pulls into its orbit a strata of the body politic that would be hostile or disinterested,” according to Lichtenstein. ​“During the red-state revolt seven years ago, school superintendents, local politicians, parents and students were all mobilized in support of that social movement.”

EARTHQUAKE MOMENTS

Might we be seeing a similar shift in the structure of old social hierarchies? It’s not simply that material conditions have been intolerable for the working class; there has to be a social earthquake that makes the existing order intolerable for both the working class and the dominant classes who jockey to re-establish authority and stabilize the quaking social edifice. There’s no guarantee that, in these tectonic shifts, workers will come out on top, but open conflict for a new legitimacy can potentially reorder the battle lines, even if temporarily. That’s the opening we must seize without any illusions that the outcome is already predetermined.

Cooptation occurs in any mass movement. But any genuine mass movement still needs more than the usual suspects who agree on everything. To defeat a greater threat, it sometimes even takes a cross-class dynamic to create the conditions for a social revolt.

As Trump’s attacks on labor rights and unions have intensified and threats to the U.S. constitutional order increase — including pressing misdemeanor charges against SEIU leader David Huerta for observing an immigration raid, unleashing the Justice Department to indict political enemies, and commuting prison sentences for allies — the burgeoning pro-democracy movement must continue to expand its ranks.

Nowhere more than among organized labor.

Unions have largely stayed out of the spotlight, fearful of uttering even the mildest criticisms and incurring Trump’s wrath. Many have preemptively bent the knee to curry favor with the administration in the hopes that their groveling self-abnegation might spare their own members from broader attacks on working people. But that’s beginning to change, and more union leaders are speaking out against the president without mincing words.

At a recent union conference, Brian Bryant, international president of the International Association of Machinists & Aerospace Workers, spoke directly to pro-Trump members who may have voted for the union-buster who has gutted the National Labor Relations Board and the Occupational Safety and Health Administration.

“I know nobody in this union voted for President Trump thinking that he was going to take away bargaining rights for over a million people in the federal government, because that’s just flat-out wrong,” Bryant said . ​“I know they didn’t vote for President Trump hoping that he would defund OSHA and incapacitate the NLRB and other critical departments that our members need.”

Bryant continued : ​“If you voted for him, you’ve got to make sure he knows that you don’t agree with what’s happening, because when it comes to union rights, there are no ifs, ands, or buts. There’s nowhere to be but on the side of workers. Unfortunately, President Trump and his billionaire buddies do not like organized labor, because they know we’re the only voice for the working people.”

[ LUIS FELIZ LEON is an associate editor and organizer at Labor Notes.]

[Sponsor] WorkOS: Scalable, Secure Authentication

Daring Fireball
workos.com
2025-10-20 23:55:19
With WorkOS you can start selling to enterprises with just a few lines of code. It provides a complete User Management solution along with SSO, SCIM, and FGA. The APIs are modular and easy-to-use, allowing integrations to be completed in minutes instead of months. WorkOS simplifies MCP authorization...
Original Article

Trusted by

SAML Lock

Auth for all SSO providers

Support any SAML or OIDC identity provider with a single integration

SAML Lock

Complete User Management

Manage users and organizations, set policies, and support all auth types

SAML Lock

Social authentication

Sign in to your app with Microsoft, Google, and more

SAML Lock

AuthKit

Flexible authentication UI powered by WorkOS and Radix

SAML Lock

Magic Auth

Passwordless auth with a six-digit code sent via email

SAML Lock

Multi-Factor Auth

Manage users and organizations, set policies, and support all auth types

WorkOS provides a single, elegant interface that abstracts dozens of enterprise integrations.

RESTful APIs, JSON responses, and normalized objects

Dashboard seamlessly integrates WorkOS into your application

Realtime updates from directory services with webhook events

Modern SDKs for Node.js, Ruby, Python, .NET, Go and more

Multiple environments to map to your application development process

Supports 20+ enterprise services with a single integration point


// Get a user’s access token and
// profile data from an Identity Provider

import WorkOS from '@workos-inc/node';

const workos = new WorkOS('sk_example_123456789');

const profile = await workos.sso.getProfileAndToken({
  code: '01E2RJ4C05B52KKZ8FSRDAP23J',
  clientID: 'client_123456789',
});



# Get a user’s access token and
# profile data from an Identity Provider

require 'workos'

WorkOS.key = 'sk_example_123456789'

WorkOS::SSO.profile_and_token(
  code: '01E2RJ4C05B52KKZ8FSRDAP23J',
  client_id: 'client_123456789',
)



# Get a user’s access token and
# profile data from an Identity Provider

import workos
from workos import client

workos.api_key = 'sk_example_123456789'
workos.client_id = 'client_123456789'

client.sso.get_profile_and_token('01E2RJ4C05B52KKZ8FSRDAP23J')




// Get a user’s access token and
// profile data from an Identity Provider

import "github.com/workos/workos-go/pkg/sso"

sso.SetAPIKey("sk_example_123456789")

sso.GetProfileAndToken(
  context.Background(),
  sso.GetProfileAndTokenOptions{
    Code: "01E2RJ4C05B52KKZ8FSRDAP23J",
  }
)

// Get a user’s access token and
// profile data from an Identity Provider

$this->sso = new WorkOS\SSO();

$profile = $this->sso->getProfileAndToken("01E2RJ4C05B52KKZ8FSRDAP23J");








// Get a user’s access token and
// profile data from an Identity Provider

import com.workos.WorkOS;

WorkOS workos = new WorkOS("sk_example_123456");

ProfileAndToken profileAndToken = workos.sso.getProfileAndToken("01E2RJ4C05B52KKZ8FSRDAP23J", "client_123456789");

Profile profile = profileAndToken.profile;




// Get a user’s access token and
// profile data from an Identity Provider

WorkOS.SetApiKey("sk_example_123456");

var ssoService = new SSOService();
var options = new GetProfileAndTokenOptions
{
    ClientId = "client_123456789",
    Code = "01E2RJ4C05B52KKZ8FSRDAP23J",
};

var profile = await ssoService.GetProfileAndToken(options);

curl --request POST \
  --url "https://api.workos.com/sso/token? \
  client_id=client_123456789& \
  client_secret=sk_example_123456789& \
  grant_type=authorization_code& \
  code=01E2RJ4C05B52KKZ8FSRDAP23J"
  
  
  
  
  
  
  

HTTP 200


Response {...}
  "access_token": "01DMEK0J53CVMC32CK5SE0KZ8Q",
  "profile": {
    "id": "prof_01DMC79VCBZ0NY2099737PSVF1",
    "connection_id": "conn_01E4ZCR3C56J083X43JQXF3JK5",
    "connection_type": "okta",
    "email": "alan@foo-corp.com",
    "first_name": "Alan",
    "last_name": "Turing",
    "idp_id": "00u1a0ufowBJlzPlk357",
    "object": "profile",
    "raw_attributes": {...}
  }

SCIM Provisioning

SCIM provisioning

with Okta, Entra ID, ADFS, and more

HRIS Integration

HRIS integration

with Bamboo, Rippling, and others

Admin Portal Screenshot

Effortlessly connect any identity provider or directory

Customize the look and feel to match your brand

Host on your custom domain (CNAME)

A polished experience for IT administrators

Step-by-step setup guides with detailed screenshots

Seamlessly integrates into your existing application

“With our in-house solution we had to spend 2-4 hours provisioning each SSO connection. I wanted to find a solution that would allow us to focus on building core-products.”

“​​If you work with another provider, you probably won’t get the same level of support that WorkOS provides. We'd get immediate responses to technical questions we posted in Slack.”

Hannah Han-Ciaravella

Senior Product Manager

Read more

“The decision to use WorkOS was straightforward. We saw good feedback from existing customers and reviewing the documentation made us confident that our needs would be addressed.”

“SSO is a crucial part of our Enterprise Pro plan, and with WorkOS we could roll it out in less than a week. Docs are comprehensive, the Admin Portal has made customer onboarding a breeze, and the level of support we received directly in Slack has been phenomenal.”

Dmitry Shevelenko

Chief Business Officer

Read more

“What sold WorkOS was the Admin Portal. Our UI gives customers a link; they go to the Admin Portal, complete onboarding steps at their leisure, and never have to talk to us, making the onboarding process a breeze.”

“SCIM is a game-changer for our customers, saving them a lot of time and reducing friction. It's crucial because, without SCIM, customers would likely get frustrated and might even switch to another provider with a smoother experience.”

“The onboarding process has been excellent. I love sending IT admins the link to set it up themselves and getting an email notification when it's completed. It's so efficient not having to get into meetings or go back and forth via email with IT personnel. ”

“I think we could have done even more business if we had partnered with WorkOS earlier, it's been incredibly well received.”

“Integrating was straightforward, pleasant, and fun. The team answered my questions quickly, and went above and beyond to help when I ran into issues.”

“WorkOS simplified the complexity of implementing single sign-on (SSO) with multiple identity providers.”

“WorkOS’ SCIM API has been a game-changer, enabling us to meet the user lifecycle management needs of our largest enterprise customers.”

“With WorkOS we got exactly what we needed, and the integration process was straightforward.”

Dor Zimberg

Software Engineering Manager

Read more

“When I asked the team, how was the experience with WorkOS? They were just like, this is incredible.”

“We did consider open source, but WorkOS provided a far superior developer experience.”

“With Audit Logs, we were able to launch the storage, querying, and export capabilities we needed into production in just a matter of days.”

“We viewed WorkOS’ connections-based pricing as a more viable option aligned with our projected growth. The Admin Portal has also been a critical feature allowing us to save engineering time and provide a more polished enterprise experience.”

“WorkOS was super easy to self serve and get up and running. It provided us super easy-to-implement must-have functionality out of the box.”

Electron Apps Causing System-Wide Lag on MacOS 26 Tahoe

Daring Fireball
mjtsai.com
2025-10-20 23:49:47
Michael Tsai, back on September 30, compiled a roundup of links regarding Electron apps causing systemwide lag on MacOS 26 Tahoe. The reason, seemingly, is that the Electron framework was overriding a private AppKit API. Of course. Tomas Kafka wrote a shell script to find un-updated Electron apps o...
Original Article

ToxicLand (via Hacker News ):

Using an M1 Max MacBook Pro, having Electron-based apps open / not minimized causes a huge lag.

CPU and GPU usage remains low, but if I have Discord and VS Code open, moving windows, scrolling is stuttery. It happens even when only Discord is open but it gets worse if I open a second Electron app.

This is kind of weird because while having Discord open and I’m in Chrome, the lag still occurs, but it’s fixed if I minimize.

avarayr :

After a lot of digging, I believe I’ve found the root cause of the WindowServer GPU spike on macOS 26 when shadows are enabled.

It turns out Electron was overriding a private AppKit API ( _cornerMask ) to apply custom corner masks to vibrant views.

This method is called by WindowServer to calculate the shadow of the window. I’m speculating that Apple uses some sort of memoization by reference, and this method breaks the memoization and forces WindowServer to repeatedly recalculate and repaint the shadow.

🤔 What’s particularly funny is that even a simple override that does nothing but call super still presents the issue.

Previously:

Update ( 2025-10-01 ): Tomas Kafka :

This script detects apps with not yet updated versions of Electron.

See also: Hacker News .

Update ( 2025-10-06 ): shamelectron (via Hacker News ):

Tracking problematic Electron apps macOS Tahoe.

Craig Hockenberry :

I took that script, updated some parts that required Xcode to be installed, and wrapped it up in an Apple Script applet that’s easy to download and run[…]

[…]

We’re hearing from customers that some of our apps are running slowly on Tahoe and I suspect that this bug has something to do with it.

Update ( 2025-10-14 ): Michael Burkhardt :

Now, the Electron team has fixed the issue, and said fix is beginning to roll out in popular third party apps that utilize the framework.

[…]

There are also a lot of major apps that haven’t yet updated their Electron version, including 1Password, Bitwarden, Cursor, Dropbox, Windsurf, and more. You can check out the tracker for a more comprehensive list.

17 Comments


















The joy of rediscovering Perl

Lobsters
blog.zerodogg.org
2025-10-20 23:38:23
Comments...
Original Article

A camel facing left, seen from
the side. It has one hump and is contained inside a blue circle. It's the logo
for the Perl programming language.

Lately I've gotten a lot back into Perl -programming, and it has honestly been an absolute delight.

Perl was my first proper programming language. As a 17 year old I picked up the llama book, Learning Perl , the third edition, covering Perl 5.6. The choice of language was largely made because it was used extensively in Mandrake Linux, that I was contributing to at the time. Learning Perl was an excellent introduction, getting me quickly up-and-running, and pretty much sealing Perl as my go to programming language. My first long-term programming job was also Perl.

As time went on and I was no longer programming for a living, I drifted towards other languages. I never really left Perl, since I was maintaining several programs written in it, and it was still my go-to for one-off scripts, but for larger projects I drifted along with a lot of the programming community towards, among many others , JavaScript.

The past months NPM-shenanigans made me long for the days where I could, for the most part, install my dependencies through my distro package manager, and not pull the latest and greatest (and backdoored) version from a third party package registry.

So, since I wanted to get away from the node-ecosystem, and my static site generator was written in JS, I thought I would try my hand at writing one in Perl. This is the most fun I've had writing code for years . It's just so nice to write.

Sure, part of it is that I'm very used to writing Perl. But a lot really comes down to the various niceties that Perl offers. strict mode makes it harder to make simple mistakes, like mistyping a variable, as it will refuse to run if you have done so. state variables, which lets subroutines maintain state between invocations without using global variables, and without having to pass a state object around, are great. I miss one or both of these features when writing in other languages.

Perl has, of course, evolved quite a bit , in recent years. With proper subroutine signatures, and builtin try/catch probably being the most prominent features. A new object-system is available as an experimental feature, but in the meantime we've got Moo , providing a very nice syntax for writing classes.

Perl is also fast , not only at runtime, but also at compile time . The latter is really important for writing client-side programs, where any startup delay will be immediately visible to the user.

Writing Perl is fun. My SSG, BoringSSG , is coming along nicely. Even though it's sort of old-school, it feels fresh and very comfortable to be able to get my dependencies through my regular package manager.

It's not the same language I learned from the llama many years ago. It has improved immensely, while never losing any of the things that made it great in the first place. If you've overlooked Perl, I recommend giving it another look. Or perhaps another language that you enjoyed a long time ago. You might feel some renewed joy in coding too.

atproto Check-in (Fall 2025)

Lobsters
docs.bsky.app
2025-10-20 23:32:50
Comments...
Original Article

We last shared a protocol roadmap back in March 2025 , and wow does time fly. If we're being honest, we haven't tied a bow on as many of these threads as we would've hoped. Oh time, strength, cash, and patience!

Fortunately, we have more capacity on the team for protocol work than we did even a couple months ago. Expect to see a lot of work start to land in the coming months.

The Atmosphere is Thriving

Before we dive into what we’re up to, let’s take a moment to celebrate what's happening in the Atmosphere. Our little network is really starting to hit its stride. The energy is incredible and growing by the day! We're seeing new projects pop up constantly, and there's a new level of maturity across the board.

What's really amazing is watching developers help other developers. Development is happening over on Tangled . Devs are sharing updates through Leaflet . Projects like Slices , Microcosm , PDSls , and Graze are making it easier for everyone to build. The AT Protocol Community just announced the second AtmosphereConf this March in Vancouver. This is what decentralized development looks like. Remember: on an open protocol we can just do things .

Big Picture

We’re close to a big milestone for the protocol. Think of it as the “AT 1.0 moment” (even if we don’t literally call it that). As we wrap up our protocol work on auth scopes and Sync1.1 , we believe that we’ve fleshed out a complete set of primitives for working with online identities and public broadcast data. This doesn’t mean that development on new features (i.e. private data) isn’t happening. But we think it’s important that we land and mature the work that we’ve already done around public broadcast data before we move on to the next big chunk of work.

With that in mind, our current focus is on adding a layer of maturity and polish to AT to make it the obvious choice when building public sphere social apps.

We’re pursuing this through three main tracks:

  • Developer Experience : Making AT fun and easy to work with. Product-focused devs should be able to build a new social app in a weekend.
  • Governance : Ensuring that AT is something larger and longer-lived than Bluesky
  • Hard Decentralization : Encouraging the emergence of more stakeholders in the network for a more resilient Atmosphere

Developer Experience

We know there are rough edges in the developer experience, but we’ve been hard pressed to find the time to smooth them out while also adding new protocol functionality. With a bit of polish, we’re confident that AT can be fun and easy to build on.

OAuth Cookbooks & Tutorials

OAuth is one of the trickiest parts of building on the protocol — tricky enough that Nick Gerakines is selling out courses on how to do it! OAuth in general is unfortunately complicated in itself, and the decentralized bits of AT only add to that complexity, but that doesn’t mean that it needs to be unapproachable.

We're taking inspiration from Auth0 's approach and putting together some comprehensive examples and tutorials that we hope will make getting started with OAuth way easier. Expect to see these published in the next week or two.

We recently wrapped up our dev work on auth scopes and permission sets . Expect an announcement and guides on how to make use of those shortly.

Lexicon SDK

Lexicons are the coordination points for schematic data in the network. As more and more applications are publishing new Lexicons, it’s important that developers can actually make use of them to build native integrations between apps.

The current Lexicon SDK was really a prototype that ended up living on way longer than it probably should have. It doesn’t have to be like this. We’re putting together a new SDK that takes inspiration from Buf ’s approach to codegen and schema management. This SDK should make it a breeze to pull in schemas from across the Atmosphere into your application.

Sync Tool

Repository synchronization is at the core of AT. Every app has to do it. And unfortunately it’s difficult to do and even more difficult to do correctly.

We’re continuing to roll out Sync1.1 on the producer (Relay) side, but a fully spec-compliant consumer for it still doesn’t exist. Most developers currently rely on Jetstream to drive their application, but that only helps with live data. There’s no easy way to perform backfill and sync full subsets of the network.

We’re working on a tool that should smooth over the tricky parts of doing sync in AT. This tool will likely take the form of a small service you can run that handles backfill, cutover, cursor management, Sync1.1, and dynamic filtering by DID and collection. That service will do all the tricky stuff and then hand off a nice clean set of record-level operations to your application. It will offer several interfaces including a websocket interface and the ability to translate the firehose into webhooks — meaning AT can work with serverless architectures!

Website

We’re going to be giving atproto.com a facelift in the coming months. You can expect all the work mentioned above to make its appearance there. We’ll also be overhauling the information architecture and publishing new tutorials and guides to make AT more approachable.

Governance

As the Atmosphere matures and more devs are putting time and resources into building companies/projects in the ecosystem, we believe it’s our responsibility to ensure that the protocol has a neutral long-term governance structure around it. The governance of the protocol should outlive Bluesky and be resilient to shifts in the incentive structure that could compromise a future Bluesky PBC.

To that end, we have 3 major developments:

Patent Pledge

We announced our Patent Non-Aggression Pledge at the start of October. Our SDKs and reference implementations are all open source and licensed under permissive software licenses. This patent pledge takes it a step further and establishes additional assurance around patent rights.

Independent PLC Organization

We announced in September that we were working to establish an independent organization to operate the PLC (Public Ledger of Credentials) Directory. PLC is the most common identity method for accounts in the Atmosphere.

Currently this directory is owned and operated by Bluesky PBC. We’re working to establish a Swiss association that will operate the directory and own all assets related to PLC (such as the domain name registration). We’re working with lawyers now to get this done right. Expect an update soon on our progress here.

IETF

We’re hoping to take pieces of AT to the IETF . We've submitted Internet Drafts on the IETF Datatracker and established a mailing list . We're hoping to establish a working group and towards that end, have requested a Birds of a Feather session in Montreal the first week of November. Some folks from the community will be attending and getting together informally. Leave a comment in the community forum if you’ll be around. If you're interested in shaping the future of the protocol at the standards level, we encourage you to get involved!

Hard Decentralization

Hard decentralization refers to the emergence of a resilient and multi-stakeholder Atmosphere that relies less on Bluesky PBC’s existence. There's some overlap with the other two goals here. Improving things like sync make it easier to run alternate infrastructure like Relays and Appviews, and our governance work should help build confidence that the protocol is a genuine public good that’s larger than Bluesky.

To support the goal of hard decentralization, we're also tackling some specific technical challenges.

Improving Non-Bluesky PDS Hosting

The decentralization guarantees of AT come from the locked-open substrate of data hosted by Personal Data Servers (PDSes). One of our current goals to increase the resilience of the network is to encourage more non-Bluesky PDS hosting.

We recently enabled account migration back to bsky.social . We hope this will give users the confidence to experiment with other hosts in the network, knowing they can always migrate back if they need to. Already we’re seeing an uptick in users posting from non-Bluesky PDSes .

Some developers in the network have launched account migration tools that make it easier for non-technical users to migrate between hosts. Examples include PDS MOOver and Tektite . We believe that the next step is to introduce an account migration UI into the PDS itself.

We also intend to make running a PDS more approachable for mid-size hosts. This includes adding auto-scaling rate limits to the Relay reference implementation so that hosts can scale up organically without needing permission or approval. We’re also looking at ways to improve the PDS distribution to make it easier to run and administer with thousands of users.

Technical Improvements to PLC

While we’re working to move PLC out into an independent organization, we’re also planning some technical improvements to PLC to make it more auditable.

Specifically, we want to make it easier to mirror the directory. We intend to introduce a new WebSocket route to the directory that allows new PLC operations to go out in real time. With this, we’ll also publish a PLC Mirror Service reference implementation. This improves both the auditability of PLC and has operational benefits for developers that may wish to run a PLC Mirror closer to their infrastructure.

There are also some legacy non-spec-compliant operations in the directory that make it difficult to write an alternate implementation of PLC that interoperates with the directory. Upon investigation, these have all been traced back to developers probing the PLC system itself, not regular network accounts. We plan to clean those up and harden the directory against similar operations.

This work is building towards the introduction of Transparency Logs (tlogs). Check out Sunlight to see where we’re heading. This probably won’t land in the next six months, but it’s the clear next step for improving trust in PLC.

Alternate Infrastructure

We’re excited to see that more and more devs are experimenting with running alternate infrastructure in the network. Blacksky currently runs a full-network relay (using the rksy-relay implementation!), and is working on a full-network Bluesky appview. Futur showed us all that it was possible with Zeppelin , a full network appview that is now decommissioned. And Red Dwarf is a new Bluesky client that doesn’t use an Appview but rather drives the experience via direct calls to the PDS and generic indices provided by Microcosm .

Please reach out to us if you’re working on running alternate infrastructure. We’re eager to lend a hand.

Private Data

We believe that group-private data is absolutely necessary for the long-term success of the protocol and the Atmosphere. Every day, we wish that we had this figured out and developed already. But as mentioned earlier, we believe that we need to land and mature the existing protocol around public broadcast data before we move on to the next big chunk of work.

We continue to have internal discussions around private data. Paul shared some leaflets that give a sense of the approaches that we’re considering and the rubric by which we’re judging them. The AT Protocol Community is also coordinating a Private Data Working Group to explore some designs for how the system could work.

In the meantime, if you’re building an Atmosphere app, please don’t let the lack of a private data protocol prevent you from building the features that you need to build. Our advice is to publish identities and public data through AT and store any private data that you need on your own server. The semantics of private data will likely look very similar to public data (CRUD operations over a key-value store of typed records, sorted by collection), so if you want to get ahead of the ball, model your data accordingly.

For E2EE DMs, Germ has put together a lovely app built on MLS (Messaging Layer Security) with an AT integration that’s in beta.

Keep up with the Atmosphere

The Atmosphere is getting bigger every day, and it’s starting to get tough to keep up with everything that’s happening! Here are some ways to stay in the loop:

Eavesdropping on Internal Networks via Unencrypted Satellites

Daring Fireball
satcom.sysnet.ucsd.edu
2025-10-20 23:20:45
SATCOM Security — a team of researchers from UC San Diego and the University of Maryland: We pointed a commercial-off-the-shelf satellite dish at the sky and carried out the most comprehensive public study to date of geostationary satellite communication. A shockingly large amount of sensitive t...
Original Article

Eavesdropping on Internal Networks via Unencrypted Satellites

UC San Diego logo University of Maryland logo

Summary

We pointed a commercial-off-the-shelf satellite dish at the sky and carried out the most comprehensive public study to date of geostationary satellite communication. A shockingly large amount of sensitive traffic is being broadcast unencrypted, including critical infrastructure, internal corporate and government communications, private citizens’ voice calls and SMS, and consumer Internet traffic from in-flight wifi and mobile networks. This data can be passively observed by anyone with a few hundred dollars of consumer-grade hardware. There are thousands of geostationary satellite transponders globally, and data from a single transponder may be visible from an area as large as 40% of the surface of the earth.

What type of network traffic was exposed?

    Cellular Backhaul

    We observed unencrypted cellular backhaul data sent from the core network of multiple telecom providers and destined for specific cell towers in remote areas. This traffic included unencrypted calls, SMS, end user Internet traffic, hardware IDs (e.g. IMSI), and cellular communication encryption keys.

    Military and Government

    We observed unencrypted VoIP and internet traffic and encrypted internal communications from ships, unencrypted traffic for military systems with detailed tracking data for coastal vessel surveillance, and operations of a police force.

    In‑flight Wi‑Fi

    We observed unprotected passenger Internet traffic destined for in-flight Wi-Fi users on airplanes. Visible traffic included passenger web browsing (DNS lookups and HTTPS traffic), encrypted pilot flight‑information systems, and in‑flight entertainment.

    VoIP

    Multiple VoIP providers were using unencrypted satellite backhaul, exposing unencrypted call audio and metadata from end users.

    Internal Commercial Networks

    Retail, financial, and banking companies all used unencrypted satellite communications for their internal networks. We observed unencrypted login credentials, corporate emails, inventory records, and ATM networking information.

    Critical Infrastructure

    Power utility companies and oil and gas pipelines used GEO satellite links to support remotely operated SCADA infrastructure and power grid repair tickets.

Full Technical Paper

Has the Issue been Fixed?

There is no single stakeholder responsible for encrypting GEO satellite communications. Each time we discovered sensitive information in our data, we went through considerable effort to determine the responsible party, establish contact, and disclose the vulnerability. In several cases, the responsible party told us that they had deployed a remedy. For the following parties, we re-scanned with their permission and were able to verify a remedy had been deployed: T-Mobile, WalMart, and KPU .

We are only publishing information about affected systems and naming relevant parties once the issue has been remediated or an industry-standard 90-day disclosure window has elapsed without response or justification for extending the embargo. In some cases, we are withholding information about affected systems because disclosure is still ongoing as of October 13, 2025. We will update this page with additional information as we are able to.

Who is Affected & What Should They Do?

End Users

There is no way for end users to know if upstream providers are encrypting their traffic. Nearly all Internet browsing traffic was encrypted by end-user devices using TLS or QUIC, and cellular backhaul exposure was limited to a relatively small number of cell towers in specific remote areas. End users can encrypt network traffic they directly generate by using a VPN; for messaging and voice communications use end‑to‑end encrypted apps like Signal.

Organizations Using Satellite Communications

Satellite communication links should be treated like unsecured and public wireless networks. There are many opportunities for encryption: TLS can protect individual application traffic, IPsec or multipoint encrypted VPNs can encrypt networks, and satellite communication providers may offer encryption of the satellite beam. Encryption should be used at every layer as defense-in-depth protection against individual failures. Treat encryption as mandatory, not an add‑on.

Reference Guidance

See the NSA’s VSAT recommendations (2022).

Q&A

What did your study do?

We set up a consumer-grade satellite dish on the roof of a university building in San Diego, California with a positioning motor and a consumer-grade TV tuner card to capture raw bytes. We then scanned for all transponders (position and frequency) visible from our fixed location, and wrote custom protocol-parsing code to reconstruct network packets from the quirky custom protocol stacks we reverse-engineered from different vendors. We observed 411 transponders across 39 GEO satellites, and our single fixed-location dish could receive IP traffic from 14% of all global Ku-band satellites.

Why aren't all GEO satellite links encrypted?

There are direct costs to enabling encryption. Encryption imposes additional overhead to an already limited bandwidth, decryption hardware may exceed the power budget of remote, off-grid receivers, and satellite terminal vendors can charge additional license fees for enabling link-layer encryption. In addition, encryption makes it harder to troubleshoot network issues and can degrade the reliability of emergency services. Some users may forgo encryption intentionally; others may be unaware these links are unencrypted or underestimate the risk and ease of eavesdropping attacks. While significant academic and activist attention has been put into ensuring nearly universal use of encryption for modern web browsers, there has been much less visiblity and attention paid to satellite network communications.

Several vendors told us they were in the process of transitioning to encrypted links.

What about Starlink/LEO?

Our study focused on GEO satellite systems, which remain in a fixed point relative to the surface of the earth. These systems remain in wide use for critical infrastructure because of their reliability and backwards compatibility. We did not study LEO (Low Earth Orbit) systems (e.g., Starlink), which offer higher bandwidth and greater coverage but require more complicated receiving hardware. Our understanding is those links are encrypted, but we have not independently verified this.

What about the uplink?

The downlink signal from a satellite typically broadcasts to a wide geographic area, but the returning uplink is more focused. This means that from our single vantage point in San Diego, California, we were only able to observe one half of a given network connection.

Did you have to hack or interfere with any satellites?

Our study was fully passive; we simply set up a consumer-grade satellite dish on Earth and observed traffic without transmitting ourselves.

Can you tell if someone is listening to traffic?

Since this unencrypted data can be observed fully passively, there is no way to know if someone has set up a dish to listen.

Can you audit our network?

If you would like our assistance in determining whether your network traffic has been exposed, please get in touch .

Is this legal/ethical?

We consulted with the University of California legal counsel on the design of our study and worked closely with them during disclosure. We have gone through considerable effort to attempt to disclose the vulnerabilities we found to affected parties.

Source Code

Contact

Contact our research group at satcomsec-g@ucsd.edu for both academic and press questions about this project. Also, we will answer questions about how to use our open source dontlookup DVB-S2(X) IP Packet extractor . If you find a bug, please directly file an issue on the GitHub.

Team

External Links

We will update this section with links to external media coverage and press releases.

Update History

  • (2025-10-13) Initial release

When Design Drives Behavior

Daring Fireball
world.hey.com
2025-10-20 23:13:18
Jason Fried: So what’s the net effect of this tiny little design detail that the owner may not even understand? Well, it looks like the watch is already half-way out of power after the first day, so it encourages the owner to wind the watch more frequently. To keep it closer to topped off, even ...
Original Article

In some cases, design is what something looks like.

In other cases, design is how something works.

But the most interesting designs to me are when design changes your behavior. Even the smallest details can change how someone interacts with something.

Take the power reserve indicator on the A. Lange & Söhne Lange 1 watch. The power reserve indicator indicates how much "power" (wind) is left. It's pictured below on the right side of the dial. It starts with AUF ("up") and ends with AB ("down"). A fully wound Lange 1 (indicator up at AUF) will give you about 72 hours before the watch fully runs out of power, stops, and must be wound again. It moves down as the watch runs until you're out of power. Wind it again to fill it back up.

a-lange-sohne-lange-1-pink-gold-191.032-power-reserve-indication.jpg

Simple enough, right? An indicator and a scale for fully wound through unwound. Just like a car's fuel gauge. You have full through empty, with a few ticks in between to indicate 3/4 or 1/4 tank left, and typically a red zone at the end saying you really need to fill this thing up soon or you're going to be stranded.

However, all is not as it seems on the Lange 1. There's something very clever going on here to change your behavior .

First you'll notice five triangles between AUF and AUB. They aren't equally spaced. At first you might think it looks like each is about a quarter of the scale, and then the last two at the bottom would be like the red zone on your fuel gage.

But no. The indicator follows a non-linear progression downwards. It doesn't sweep from top to bottom evenly over time. It's actually accelerated early.

When fully wound, It takes just a day for the indicator to drop down two markers to the halfway point. From there, it takes a day each to hit the lower two markers. This makes it look like it's unwinding faster than it is because the indicator covers more distance in that first 24 hours. If the spacing were uniform, and the indicator was linear, the owner might not feel the need to wind it until the power reserve was nearly fully depleted. Then you might have a dead watch when you pick it up the next morning.

Note Oct 7, 2025.png

So what's the net effect of this tiny little design detail that the owner may not even understand? Well, it looks like the watch is already half-way out of power after the first day, so i t encourages the owner to wind the watch more frequently . To keep it closer to topped off, even when it's not necessary. This helps prevents the watch from running out of power, losing time, and, ultimately, stopping. A stopped watch may be right twice a day, but it's rarely at the times you want.

Small detail, material behavior change. Well considered, well executed, well done.

Dirk Eddelbuettel: RcppArmadillo 15.2.0-0 on GitHub: New Upstream, Simpler OpenMP

PlanetDebian
dirk.eddelbuettel.com
2025-10-20 22:13:00
Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research ...
Original Article

RcppArmadillo 15.2.0-0 on GitHub: New Upstream, Simpler OpenMP

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1270 other packages on CRAN , downloaded 42 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper ( preprint / vignette ) by Conrad and myself has been cited 650 times according to Google Scholar.

This versions updates to the 15.2.0 upstream release made today. It brings a few changes over Armadillo 15.0 (see below for more). It follows the most recent RcppArmadillo 15.0.2-2 release and the Armadillo 15 upstream transition with its dual focus on moving on from C++11 and deprecation of a number of API access points. As we had a few releases last month to manage the transition, we will sit this upgrade out and not upload to CRAN in order to normalize our update cadence towards the desired ‘about six in six months’ (that the CRAN Policy asks for). One can of course install as usual directly from the GitHub repository as well as from r-universe which also offers binaries for all CRAN platforms.

The transition to Armadillo 15 appears to be going slowly but steadily. We had well over 300 packages with either a need to relax the C++11 setting and/or update away from now-deprecated API access points. That number has been cut in half thanks to a lot of work from a lot of package maintainers—which is really appreciated! Of course, a lot remains to be done. Issues #489 and #491 contain the over sixty PRs and patches I prepared for all packages with at least one reverse dependency. Most (but not all) have aided in CRAN updates, some packages are still outstanding in terms of updates. As before meta-issue #475 regroups all the resources for the transition. If you, dear reader, have a package that is affected and I could be of assistance please do reach out.

The other change we made is to greatly simplify the detection and setup of OpenMP. As before, we rely on configure to attempt compilation of a minimal OpenMP-using program in order to pass the ‘success or failure’ onto Armadillo as a ‘can-or-cannot’ use OpenMP. In the year 2025 one of the leading consumer brands still cannot ship an OS where this works out of the box, so we try to aide there. For all others systems, R actually covers this pretty well and has a reliable configuration variable that we rely upon. Just as we recommend for downstream users of the package. This setup should be robust, but is a change so by all means if you knowingly rely on OpenMP please test and report back.

The detailed changes since the last CRAN release follow.

Changes in RcppArmadillo version 15.2.0-0 (2025-10-20) (GitHub Only)

  • Upgraded to Armadillo release 15.2.0 (Medium Roast Deluxe)

    • Added rande() for generating matrices with elements from exponential distributions

    • shift() has been deprecated in favour of circshift() , for consistency with Matlab/Octave

    • Reworked detection of aliasing, leading to more efficient compiled code

  • OpenMP detection in configure has been simplified

More detailed information is on the RcppArmadillo page . Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub .

/code/rcpp | permanent link

DNS0.EU private DNS service shuts down over sustainability issues

Bleeping Computer
www.bleepingcomputer.com
2025-10-20 22:05:17
The DNS0.EU non-profit public DNS service focused on European users announced its immediate shut down due to time and resource constraints. [...]...
Original Article

DNS0.EU private DNS service shuts down over sustainability issues

The DNS0.EU non-profit public DNS service focused on European users announced its immediate shut down due to time and resource constraints.

Based in France, the service was built as a resilient infrastructure across several hosting providers in every member state of the European Union.

The team behind DNS0.EU replaced all content on the website with a short announcement informing that they discontinued the service.

"The dns0.eu service has been discontinued. We would have liked to keep it running, but it was not sustainable for us in terms of time and resources," the DNS0.EU operator said.

Available alternatives

The team thanked infrastructure and security partners, and recommended that people switch to DNS4EU , a privacy-focused resolver developed by ENISA, or NextDNS , whose founders helped create DNS0.EU.

A DNS resolver translates the human-readable domain names into the numerical, machine-readable IP addresses so browsers can load the correct internet resources.

By default, connected devices use the DNS service from the Internet Service Provider (ISP) but they can choose other options, like Cloudflare (1.1.1.1), Google (8.8.8.8), or OpenDNS (208.67.222.222).

DNS0.eu was a public recursive DNS resolver service launched in 2023 as a French-based non-profit organization. It promised no-logs functionality, end-to-end encryption for resistance to eavesdropping and tampering, as well as protection against malicious domains, be they phishing domains, or command-and-control (C2)  malware servers.

It offered a free, secure, and GDPR-compliant DNS resolver that supported DNS‑over‑HTTPS, DNS‑over‑TLS, DNS-over-QUIC, and DNS‑over‑HTTP/3. It operated 62 servers in 27 cities in all EU member states, boasting a median latency of 12 milliseconds.

In addition, DNS0.EU provided child safety-focused filters for adult content, piracy, and ads, as well as increased detection of potentially malicious domains by looking into typosquatting, domain parking patterns, TLD reputation, homograph domains, and DGA-created URLs.

DNS0.EU team's recommendations for users, DNS4EU and NextDNS also include protection features against fraudulent and malicious content. However, NextDNS provides more granular filtering for websites and apps through privacy, security, and parental control options.

DNS4EU, co-funded by the European Union, is easier to set up and offers IP resolution that can block access to websites with fraudulent or malicious content, protect against content that is explicit or inappropriate for children, and stop ads.

BleepingComputer has contacted DNS0.EU to learn more about the reasons behind the shut down of the service, and we will update this post when we hear back.

Why UUIDs won't protect your secrets

Hacker News
alexsci.com
2025-10-21 00:16:41
Comments...
Original Article
Blog Home

This post is part of a collection on UUIDs .

What is IDOR?

Indirect Object Reference (IDOR) occurs when a resource can be accessed directly by its ID even when the user does not have proper authorization to access it. IDOR is a common mistake when using a separate service for storing files, such as a publicly readable Amazon S3 bucket. The web application may perform access control checks correctly, but the storage service does not.

Here’s vulnerable Django code which allows a user to view their latest billing statement:

# Vulnerable!
@login_required
def view_latest_bill(request):
    bill = Bill.objects.filter(owner=request.user).order_by("date").desc()[0]
    url = f'https://example.us-east-1.s3.amazonaws.com/bill-{bill.id}'
    return render(request, 'template.html', { url: url })

While Django ensures the user is logged in and only provides them with bills they own, S3 has no concept of Django users, and performs no such authorization checks.

A simple attack would start from a known URL and increment the bill ID:

$ curl https://my-bucket.us-east-1.s3.amazonaws.com/bill-100
[ attacker sees their own bill ]
$ curl https://my-bucket.us-east-1.s3.amazonaws.com/bill-101
[ attacker sees another user's bill ]

The attacker can keep trying bill IDs, potentially accessing the entire collection of bills.

UUIDs to the rescue?

What if we changed the Django model to use UUIDs for the primary key instead of an auto-increment? The new URLs will look like: my-bucket.us-east-1.s3.amazonaws.com/bill-9c742b6a-3401-4f3d-bee7-6f5086c6811f. UUIDs aren’t guessable, so the attacker can’t just “add one” to the URL to access other user’s files, right?

 class Bill(models.Model):
-    id = models.AutoField(primary_key=True)
+    id = models.UUIDField(primary_key=True, default=uuid.uuid4)

Unfortunately, this is only a partial fix.

Even when URLs are unguessable, that doesn’t mean an attacker can’t learn them. A classic example starts with a former employee who used their personal computer for work. Hopefully their user account was quickly disabled, blocking them from accessing the company’s web application. But sensitive URLs may still exist in their browser history. Even a non-technical attacker can pull off this attack, just by clicking through their browser history. Thankfully, many companies require employees to use company-issued devices when performing work, so this attack may be limited to former employees who violated that rule.

The accidental leaking of URLs is probably a more reasonable concern. For example, if only managers are authorized to view bills you need to be careful not to leak the bill ID in other views where other employees have access.

If you use secret UUIDs, think of them as toxic assets. They taint anything they touch. If they end up in logs, then logs must be kept secret. If they end up in URLs, then browser history must be kept secret. This is no small challenge.

Another concern for leaked UUIDs is rotation. Whenever a secret key is compromised, leaked, or known to have been stored improperly, it should be changed. The same holds true for secret URLs. Make sure you have a way to rotate secret URLs, otherwise you may end up stuck in a compromised state. Again, no small challenge.

If this sounds like a huge pain… it is. Let’s find a better solution.

Properly fixing IDOR

The best approach is to ensure every request for sensitive data is authorized.

One fix is to route file access through the web application. Continuing our example, the user would access /api/bill/100 and the file would be streamed from the storage through the web app to the user’s browser. If the user tries to access /api/bill/101, where they lack authorization, the web application can deny the request. Make sure the storage bucket is private, such that access must route via the web app.

This approach is a good quick fix, but there are other approaches to consider.

If your storage provider is Amazon S3 you should consider pre-signed URLs . These URLs allow the browser to download the file directly from S3, without streaming through the web app. The URL contains a cryptographic signature with a short expiration date. These URLs are still sensitive, but the short expiration mitigates a number of concerns. Again, make sure the storage bucket is private.

A key benefit of the pre-signed URL approach is that it offloads file access from your web application, reducing load on the application server.

Case study: YouTube unlisted content

Let’s consider a well-known application that doesn’t follow this advice.

YouTube, a popular video hosting service, allows uploaders to mark videos as “unlisted”. This is a compromise between public and private. The owner of the video can copy their video’s URL and share it out-of-band, like in a private chat room. This way, people in the private chat room can view the video, but the owner doesn’t need to grant them access one-at-a-time and the viewers don’t need to log in. In essence, anyone who knows the URL is considered authorized from YouTube’s perspective.

A radio control based form with the following text. Visibility. Choose when to publish and who can see your video. Save or publish. Make your video public, unlisted, or private. Private. Only you and people you choose can watch your video. Unlisted. Anyone with the video link can watch your video. Public. Everyone can watch your video.

YouTube visibility selection

This approach uses unguessable URLs, which contain a random video ID, like ibF36Yyeehw . This appears to be 11 random alphanumeric characters, which offer around 64 bits of entropy. This is suitably unguessable, but the security is questionable.

Once the URL is shared with others, the owner loses the ability to assert access control over the video. An authorized viewer can choose to share the URL with others. Users may expect that the video has proper access control restrictions and share the URL in a public-facing document, not realizing that leaking the URL leaks the video.

Consider unlistedvideos.com, an index of unlisted YouTube videos. Users who discover unlisted videos can upload those URLs to the site, thus leaking the content to a broad audience. The large number of videos listed on the site shows the poor access control properties afforded by this access control method.

If your unlisted content leaks to unauthorized viewers, you can regain control by marking the video as private. This prevents anyone from accessing the video, until you grant their account access. Of course, you probably chose to make the video unlisted to avoid needing to manage individual account access. You could also try re-uploading the video, marking it as unlisted, and sharing the new link, but the risk of a subsequent leak remains.

Another example of this design appears later in this blog post, AWS billing estimates. AWS appears to use 160 bits of entropy to protect these URLs. Here’s the verbiage AWS uses when you create a share link.

A dialog box containing the following text. Public server acknowledgment: Your data will be stored on AWS public servers. The calculator will generate an obscure, but publicly accessible URL. Anyone with the public share link will be able to access your estimate.

AWS billing share dialog

Interestingly, I’m not seeing a way to delete a billing estimate once shared. The creator appears to lose all ability to manage access once the link is shared outside their sphere of control. Be very careful not to put sensitive data in your billing estimates.

Unlisted content is an example of IDOR as an intentional security design. The uploader is expected to decide if unlisted offers the right security posture for their content. There are use cases where the effort needed to individually grant users access outweighs the risk of using unlisted. Not everyone is dealing in highly sensitive content, after all.

Are UUIDs unguessable?

OK, maybe you want to create something like YouTube unlisted content, despite these concerns. In that case, we should ignore security concerns related to “leaked URLs” as that is “by design”. Unlisted URLs are sort of like bearer tokens or API tokens which grant access to a single resource. Let’s focus on attacks that guess URLs and consider how guessable UUIDs actually are.

UUIDv4 contains 122 random bits, much more than the 64 bits of a YouTube video ID, so there’s little to contest about UUIDv4 guessability.

But what about newer formats like UUIDv7?

UUIDv7 embeds a timestamp at the start such that the IDs generally increase over time. There’s some claimed benefits, such as improved write performance for certain types of databases.

Unfortunately, the timestamp makes UUIDv7s easier to guess. The attacker needs to figure out the timestamp and then brute-force the random bits. Learning the timestamp may not be that difficult: users sometimes have access to metadata for resources they don’t have full permission to access.

In our “latest bill” example, the bills are probably generated by a batch job kicked off by cron. As such, the bills are likely created one after another in a narrow time period. This is especially true if the attacker has the UUID of their own bill as a reference. An attacker may be able to guess a small window around when the target object’s UUID was created.

Other UUID generation methods recommend creating UUIDs in large batches and then assigning them to resources, in order, as resources are created. With this approach, the UUID timestamp is loosely correlated with the resource creation timestamp, but doesn’t contain a high precision timestamp for the resource creation. This mitigates some classes of information leakage related to timestamps. Unfortunately, it also bunches UUIDs together very tightly, such that many IDs will share the exact same timestamp. Learning one UUID leaks the timestamp of the entire batch.

At first glance, the random bits seem to save us. There are still 74 random bits in a UUIDv7; still more than a YouTube video ID. That’s 2 74 possible random suffixes (18,889,465,931,478,580,854,784). Well beyond what an attacker can reasonably brute-force over the Internet.

I would end the blog post here, but UUIDv7 offers additional optional methods which we need to consider. The spec allows monotonic counters to be used when multiple UUIDs are created within the same timestamp. This ensures that IDs created by a single node are monotonically increasing, even within a single millisecond. The first UUID in a given timestamp uses a randomized counter value. Subsequent IDs in the same millisecond increment that counter by one.

When the counter method is used, an attacker who learns one UUIDv7 can predict the counters of neighboring IDs by adding or subtracting one. A random suffix still exists, and that would still need to be brute-forced.

Of note for Django users, Python 3.14 introduced UUIDv7 in the standard library. Python uses a 42-bit counter , which is the maximum width the spec allows. That means Python’s UUIDv7 only has 32 random bits, offering only 2 32 possible random suffixes (4,294,967,296).

How much security does 2 32 offer?

Four billion seems like a big number, but is it large enough?

On average, this is 1,657 request per second averaged over a month. Is that possible?

S3 claims it will automatically scale to “at least 5,500 GET requests per second”. On the attacker side, HTTP load testing tools easily scale this high. k6, a popular load testing too, suggests using a single machine unless you need to exceed 100,000 request per second. The attack fits within the systems limits and appears feasible.

Adding a rate limiter would force the attacker to distribute their attack, increasing attacker cost and complexity. Cloud providers like Amazon S3 don’t offer rate limiting controls so you’ll need to consider a WAF. This changes the user-facing URL, so adding a WAF may break old URLs.

There’s cost asymmetry here too. An attacker who guesses 2 32 S3 URLs will cost your service at least $1,700 on your AWS bill . If you don’t have monitoring set up, you may not realize you’re under attack until you get an expensive bill. The attackers cost could be as low as a single machine.

I’m uneasy about the security here, as the attack appears technically feasible. But the attack doesn’t seem very attractive to an attacker, as they may not be able to target a specific resource.

An application that had juicy enough content to be worth attacking in this way would probably worry about “URLs leaking”. In that case, unlisted URLs are a poor fit for the product and the fixes listed earlier should be used. Which renders the entire point moot as you should never end up here.

But it’s not an entirely theoretical concern. If you search on GitHub, you can find examples of applications that use UUIDv7 IDs and the “public-read” ACL. The sensitivity of the data they store and the exact UUIDv7 implementation they use varies.

Nevertheless, 32 random bits is too small to be considered unguessable, especially for a cloud service like S3 which lacks rate-limit controls.

UUIDv7 for internal-only IDs

A common theme of UUIDv7 adoption is to avoid exposing the IDs publicly. One concern driving this trend relates to IDs leaking timing information, which can be sensitive in certain situations.

A simple approach uses a random ID, perhaps UUIDv4, as the external ID and UUIDv7 as the database primary key. This can be done using a separate database column and index for the external ID.

Another intriguing approach is UUIDv47 which uses SipHash to securely hash the UUIDv7 into a UUIDv4-like ID. SipHash requires a secret key to operate, so you’ll need to manage that key. Unfortunately, rotating the key will invalidate old IDs, which would break external integrations like old URLs. This may prevent systems from changing keys after a key compromise. Caveat emptor.

Either of these approaches could help in our “unlisted URLs with UUIDv7” example.

Postgres UUIDv7 generation

Postgres currently uses the “replace leftmost random bits with increased clock precision” method when generating UUIDv7 IDs. Postgres converts 12 of the random bits into extra timestamp bits. This means Postgres UUIDv7 timestamps have nanosecond granularity instead of millisecond. As such, Postgres UUIDv7s have 62 random bits, in the current implementation.

So when it comes to UUIDv7 guessability, it really depends on what optional methods the implementation chooses.

Closing

Be careful when adopting newer UUID versions as the properties and trade-offs are distinct from earlier versions.

The authors of UUIDv7 knew about these guessability concerns and discuss them in RFC 9562. The spec offers a “monotonic random” counter method, which increments the counter by a random amount instead of one. While their solution would help mitigate this attack, I wasn’t able to find an implementation that actually uses it.

References


You don't need Kafka: Building a message queue with Unix signals

Hacker News
leandronsp.com
2025-10-20 23:22:45
Comments...
Original Article

Have you ever asked yourself what if we could replace any message broker with a very simple one using only two UNIX signals? Well, I’m not surprised if you didn’t. But I did. And I want to share my journey of how I achieved it.

If you want to learn about UNIX signals, binary operations the easy way, how a message broker works under the hood, and a bit of Ruby, this post is for you.

And if you came here just because of the clickbait title, I apologize and invite you to keep reading. It’ll be fun, I promise.

image

It’s all about UNIX

A few days ago, I saw some discussion on the internet about how we could send messages between processes. Many people think of sockets, which are the most common way to send messages, even allowing communication across different machines and networks. Some don’t even realize that pipes are another way to send messages between processes:

$ echo 'hello' | base64

aGVsbG8K

Here’s what’s happening:

  • The process echo is started with the content “hello”
  • echo is a program that prints the message to STDOUT
  • Through the pipe, the content in STDOUT is sent directly to the STDINT of the base64 process
  • The base64 process encodes its input to Base64 and then puts the result in STDOUT

Note the word “send”. Yes, anonymous pipes are a form of IPC (Inter-process communication). Other forms of IPC in UNIX include:

  • named pipes (mkfifo)
  • sockets
  • regular files
  • or even a simple signal

UNIX signals

According to Wikipedia :

A UNIX signal is a standardized message sent to a program to trigger specific behaviour, such as quitting or error handling

There are many signals we can send to a process, including:

  • SIGTERM - sends a notification to the process to terminate. It can be “trapped,” which means the process can do some cleanup work before termination, like releasing OS resources and closing file descriptors
  • SIGKILL - sends a termination signal that cannot be trapped or ignored, forcing immediate termination
  • SIGINT - the interrupt signal, typically sent when you press Ctrl+C in the terminal. It can be trapped, allowing the process to perform cleanup before exiting gracefully
  • SIGHUP - the hangup signal, originally sent when a terminal connection was lost. Modern applications often use it to reload configuration files without restarting the process
  • SIGQUIT - similar to SIGINT but also generates a core dump for debugging
  • SIGSTOP - pauses (suspends) a process. Cannot be trapped or ignored
  • SIGCONT - resumes a process that was paused by SIGSTOP
  • SIGCHLD - sent to a parent process when a child process terminates or stops
  • SIGUSR1 and SIGUSR2 - user-defined signals that applications can use for custom purposes

Sending messages using signals

Okay, we know that signals are a primitive form of IPC. UNIX-like systems provide a syscall called kill that sends signals to processes. Historically, this syscall was created solely to terminate processes. But over time, they needed to accommodate other types of signals, so they reused the same syscall for different purposes.

For instance, let’s create a simple Ruby script sleeper.rb which sleeps for 60 seconds, nothing more:

puts "Process ID: #{Process.pid}"

puts "Sleeping for 60 seconds..."

sleep 60

After running we see:

Process ID: 55402

Sleeping for 60 seconds...

In another window, we can send the SIGTERM signal to the process 55402 via syscall kill :

$ kill -SIGTERM 55402

And then, in the script session:

[1] 55402 terminated ruby sleeper.rb

Signal traps

In Ruby, we can also trap a signal using the trap method in Ruby:

puts "Process ID: #{Process.pid}"

puts "Sleeping for 60 seconds..."

trap('SIGTERM') do

puts "Received SIGTERM, exiting gracefully..."

exit

end

sleep 60

Which in turn, after sending the signal, will gracefully:

Process ID: 55536

Sleeping for 60 seconds...

Received SIGTERM, exiting gracefully...

After all, we cannot send messages using signals . They are a primitive way of sending standardized messages which will trigger specific behaviours. At most, we can trap some signals, but nothing more.

Okay Leandro, but what’s the purpose of this article then?

Hold on . That’s exactly why I’m here. To prove points by doing useless stuff, like when I simulated OOP in Bash a couple of years ago (it was fun though).

To understand how we can “hack” UNIX signals and send messages between processes, let’s first talk a bit about binary operations . Yes, those “zeros” and “ones” you were scared of when you saw them for the first time. But they don’t bite (🥁 LOL), I promise.

What is a message?

If we model a message as a sequence of characters, we could say that at a high-level, messages are simply strings . But in memory, they are stored as bytes .

We know that bytes are made of bits. In computer terms, what’s a bit? It’s simply an abstraction representing only two states :

  • zero
  • one

That’s it. For instance, using ASCII , we know that the letter “h” has the following codes:

  • 104 in decimal
  • 0x68 in hexadecimal
  • 01101000 in binary

Binary-wise, what if we represented each “0” with a specific signal and each “1” with another? We know that some signals such as SIGTERM, SIGINT, and SIGCONT can be trapped, but intercepting them would harm their original purpose.

But thankfully, UNIX provides two user-defined signals that are perfect for our hacking experiment.

Sending SIGUSR1 and SIGUSR2

First things first, let’s trap those signals in the code:

puts "Process ID: #{Process.pid}"

puts "Sleeping forever. Send signals to this process to see how it responds."

trap('SIGUSR1') do

puts "Received SIGUSR1 signal"

end

trap('SIGUSR2') do

puts "Received SIGUSR2 signal"

end

sleep

Process ID: 56172

Sleeping forever. Send signals to this process to see how it responds.

After sending some kill -SIGUSR1 56172 and kill -SIGUSR2 56172 , we can see that the process prints the following content:

Process ID: 56172

Sleeping forever. Send signals to this process to see how it responds.

Received SIGUSR1 signal

Received SIGUSR2 signal

Received SIGUSR2 signal

Received SIGUSR1 signal

Received SIGUSR1 signal

Received SIGUSR2 signal

Signals don’t carry data . But the example we have is perfect for changing to bits, uh?

Received SIGUSR1 signal # 0

Received SIGUSR2 signal # 1

Received SIGUSR2 signal # 1

Received SIGUSR1 signal # 0

Received SIGUSR2 signal # 1

Received SIGUSR1 signal # 0

Received SIGUSR1 signal # 0

Received SIGUSR1 signal # 0

That’s exactly 01101000 , the binary representation of the letter “h”. We’re simply encoding the letter as a binary representation and sending it via signals

Again, we’re encoding it as a binary and sending it via signals .

How cool is that ?

image

Decoding the binary data

On the other side, the receiver should be capable of decoding the message and converting it back to the letter “h”:

  • sender encodes the message
  • receiver decodes the message

So, how do we decode 01101000 (the letter “h” in ASCII)? Let’s break it down into a few steps:

  1. First, we need to see the 8 bits as individual digits in their respective positions
  2. The rightmost bit is at position 0, whereas the leftmost bit is at position 7. This is how we define the most significant bit ( MSB , the leftmost) and the least significant bit ( LSB , the rightmost)
  3. For this example, we perform a left shift operation on each bit and then sum all the values, in this case from MSB to LSB (the order doesn’t matter much for now): (0 << 7) + (1 << 6) + (1 << 5) + (0 << 4) + ... + (0 << 0) :
    left shift on zeros will always produce a zero
  • 0 << 7 = (2 ** 7) * 0 = 128 * 0 = 0
  • 1 << 6 = (2 ** 6) * 1 = 64 * 1 = 64

Similarly to the remaining bits:

  • 1 << 5 = 32
  • 0 << 4 = 0
  • 1 << 3 = 8
  • 0 << 2 = 0
  • 0 << 1 = 0
  • 0 << 0 = 0

So, our sum becomes, from MSB to LSB:

MSB LSB

0 1 1 0 1 0 0 0

0 + 64 + 32 + 0 + 8 + 0 + 0 + 0 = 104

104 is exactly the decimal representation of the letter “h” in ASCII.

How wonderful is that?

Sending the letter “h”

Now let’s convert these operations to Ruby code. We’ll write a simple program receiver.rb that receives signals in order from LSB to MSB (positions 0 to 7) and then converts them back to ASCII characters, printing to STDOUT .

Basically, we’ll accumulate bits and whenever we form a complete byte, we’ll decode it to its ASCII representation. The very basic implementation of our accumulate_bit(bit) method would look like as follows:

@position = 0 # start with the LSB

@accumulator = 0

def accumulate_bit(bit)

# The left shift operator (<<) is used to

# shift the bits of the number to the left.

#

# This is equivalent of: (2 ** @position) * bit

@accumulator += (bit << @position)

return @accumulator if @position == 7 # stop accumulating after 8 bits (byte)

@position += 1 # move to the next bit position: 0 becomes 1, 1 becomes 2, etc.

end

# Letter "h" in binary is 01101000

# But we'll send from the LSB to the MSB

#

# 0110 1000 (MSB -> LSB) becomes 0001 0110 (LSB -> MSB)

# The order doesn't matter that much, it'll depend on

# the receiver's implementation.

accumulate_bit(0)

accumulate_bit(0)

accumulate_bit(0)

accumulate_bit(1)

accumulate_bit(0)

accumulate_bit(1)

accumulate_bit(1)

accumulate_bit(0)

puts @accumulator # should print 104, which is the ASCII code for "h"

Pay attention to this code. It’s very important and builds the foundation for the next steps. If you didn’t get it, go back and read it again. Try it yourself in the terminal or using your preferred programming language.

Now, how to convert the decimal 104 to the ASCII character representation? Luckily, Ruby provides a method called chr which does the job:

irb> puts 104.chr

=> "h"

We could do the same job for the rest of the word “hello”, for instance. According to the ASCII table , it should be the following:

  • e in decimal is 101
  • l in decimal is 108
  • o in decimal is 111

Let’s check if Ruby knows that:

104.chr # "h"

101.chr # "e"

108.chr # "l"

111.chr # "o"

We can even “decode” the word to the decimal representation in ASCII:

irb> "hello".bytes

=> [104, 101, 108, 108, 111]

Now, time to finish our receiver implementation to properly print the letter “h”:

@position = 0 # start with the LSB

@accumulator = 0

trap('SIGUSR1') { decode_signal(0) }

trap('SIGUSR2') { decode_signal(1) }

def decode_signal(bit)

accumulate_bit(bit)

return unless @position == 8 # if not yet accumulated a byte, keep accumulating

print "Received byte: #{@accumulator} (#{@accumulator.chr})\n"

@accumulator = 0 # reset the accumulator

@position = 0 # reset position for the next byte

end

def accumulate_bit(bit)

# The left shift operator (<<) is used to

# shift the bits of the number to the left.

#

# This is equivalent of: (2 ** @position) * bit

@accumulator += (bit << @position)

@position += 1 # move to the next bit position: 0 becomes 1, 1 becomes 2, etc.

end

puts "Process ID: #{Process.pid}"

sleep

Read that code and its comments. It’s very important. Do not continue reading until you really get what’s happening here.

  • Whenever we get SIGUSR1 , we accumulate the bit 0
  • When getting SIGUSR2 , accumulate then the bit 1
  • When accumulator reaches the position 8 , it means we have a byte. At this moment we should print the ASCII representation using the .chr we seen earlier. Then, reset bit position and accumulator

Let’s see our receiver in action! Start the receiver in one terminal:

$ ruby receiver.rb

Process ID: 58219

Great! Now the receiver is listening for signals. In another terminal, let’s manually send signals
to form the letter “h” (which is 01101000 in binary, remember?):

# Sending from LSB to MSB: 0, 0, 0, 1, 0, 1, 1, 0

$ kill -SIGUSR1 58219 # 0

$ kill -SIGUSR1 58219 # 0

$ kill -SIGUSR1 58219 # 0

$ kill -SIGUSR2 58219 # 1

$ kill -SIGUSR1 58219 # 0

$ kill -SIGUSR2 58219 # 1

$ kill -SIGUSR2 58219 # 1

$ kill -SIGUSR1 58219 # 0

And in the receiver terminal, we should see:

Received byte: 104 (h)

How amazing is that? We just sent the letter “h” using only two UNIX signals!

But wait. Manually sending 8 signals for each character? That’s tedious and error-prone. What if we wanted to send the word “hello”? That’s 5 characters × 8 bits = 40 signals to send manually. No way.

We need a sender.

Building the sender

The sender’s job is the opposite of the receiver: it should encode a message (string) into bits and send them as signals to the receiver process.

Let’s think about what we need:

  1. Take a message as input (like “hello”)
  2. Convert each character to its byte representation
  3. Extract the 8 bits from each byte
  4. Send SIGUSR1 for bit 0, SIGUSR2 for bit 1
  5. Repeat for all characters

The tricky part here is the step 3: how do we extract individual bits from a byte? To extract the bit at position i , we can use the following formula:

bit = (byte >> i) & 1

Let me break this down:

  • byte >> i performs a right shift by i positions
  • & 1 is a bitwise AND operation that extracts only the rightmost bit

For the letter “h” ( 01101000 in binary, 104 in decimal):

Position 0 (LSB):

  • (104 >> 0) = 104 / (2 ** 0) = 104 / 1 = 104
  • 01101000 >> 0 = 01101000
  • 01101000 & 00000001 = 0 ( one AND zero is zero )

Position 1:

  • (104 >> 1) = 104 / (2 ** 1) = 104 / 2 = 52
  • 01101000 >> 1 = 00110100
  • 00110100 & 00000001 = 0

Position 2:

  • (104 >> 2) = 104 / (2 ** 2) = 104 / 4 = 26
  • 01101000 >> 2 = 00011010
  • 00011010 & 00000001 = 0

Position 3:

  • (104 >> 3) = 104 / (2 ** 3) = 104 / 8 = 13
  • 01101000 >> 3 = 00001101
  • 00001101 & 00000001 = 1 ( one AND one equals one )

And so on for positions 4, 5, 6, and 7. This gives us: 0, 0, 0, 1, 0, 1, 1, 0 — exactly the bits we need from LSB to MSB!

  • (104 >> 0) & 1 = 104 & 1 = 0
  • (104 >> 1) & 1 = 52 & 1 = 0
  • (104 >> 2) & 1 = 26 & 1 = 0
  • (104 >> 3) & 1 = 13 & 1 = 1
  • (104 >> 4) & 1 = 6 & 1 = 0
  • (104 >> 5) & 1 = 3 & 1 = 1
  • (104 >> 6) & 1 = 1 & 1 = 1
  • (104 >> 7) & 1 = 0 & 1 = 0

Pay close attention to this technique. It’s a fundamental operation in low-level programming.

So now time to build the sender.rb which is pretty simple:

receiver_pid = ARGV[0].to_i

message = ARGV[1..-1].join(' ')

def encode_byte(byte)

8.times.map do |i|

# Extract each bit from the byte, starting from the LSB

(byte >> i) & 1

end

end

message.bytes.each do |byte|

encode_byte(byte).each do |bit|

signal = bit == 0 ? 'SIGUSR1' : 'SIGUSR2'

Process.kill(signal, receiver_pid)

sleep 0.001 # Delay to allow the receiver to process the signal

end

end

For each byte (8-bit structure) we extract the bit performing the right shift + AND oprerations. The result is the extracted bit.

In the receiver window:

$ ruby receiver.rb

Process ID: 68968

And in the sender window:

$ ruby sender.rb 68968 h

The receiver will print:

$ ruby receiver.rb

Process ID: 68968

Received byte: 104 (h)

Processes sending messages with only two signals! How wonderful is that?

Sending the “hello” message

Now, sending the hello message is super easy. The sender is already able to send not only a letter but any message using signals:

$ ruby sender.rb 68968 hello

# And the receiver:

Received byte: 104 (h)

Received byte: 101 (e)

Received byte: 108 (l)

Received byte: 108 (l)

Received byte: 111 (o)

Just change the receiver implementation a little bit:

def decode_signal(bit)

accumulate_bit(bit)

return unless @position == 8 # if not yet accumulated a byte, keep accumulating

print @accumulator.chr # print the byte as a character

@accumulator = 0 # reset the accumulator

@position = 0 # reset position for the next byte

end

And then:

$ ruby sender.rb 96875 Hello

# In the receiver's terminal

Process ID: 96875

Hello

However, if we send the message again, the receiver will print everything in the same line:

$ ruby sender.rb 96875 Hello

$ ruby sender.rb 96875 Hello

# In the receiver's terminal

Process ID: 96875

HelloHello

It’s obvious: the receiver doesn’t know where the sender finished the message, so it’s impossible to know where we should stop one message and print the next one on a new line with \n .

We should then determine how the sender indicates the end of the message. How about being it all zeroes ( 0000 0000 )?

  • We send the message: first 5 bytes representing the “hello” message
  • Then we send a “NULL terminator”, just one byte 0 ( 0000 0000 )

0110 1000 # h

0110 0101 # e

0110 1000 # l

0110 1000 # l

0110 1111 # o

0000 0000 # NULL

Hence, when the receiver gets a NULL terminator, it will print a line feed \n . Let’s change the sender.rb first:

receiver_pid = ARGV[0].to_i

message = ARGV[1..-1].join(' ')

def encode_byte(byte)

8.times.map do |i|

# Extract each bit from the byte, starting from the LSB

(byte >> i) & 1

end

end

message.bytes.each do |byte|

encode_byte(byte).each do |bit|

signal = bit == 0 ? 'SIGUSR1' : 'SIGUSR2'

Process.kill(signal, receiver_pid)

sleep 0.001 # Delay to allow the receiver to process the signal

end

end

# Send NULL terminator (0000 0000)

8.times do

Process.kill('SIGUSR1', receiver_pid)

sleep 0.001 # Delay to allow the receiver to process the signal

end

puts "Message sent to receiver (PID: #{receiver_pid})"

Then, the receiver.rb :

@position = 0 # start with the LSB

@accumulator = 0

trap('SIGUSR1') { decode_signal(0) }

trap('SIGUSR2') { decode_signal(1) }

def decode_signal(bit)

accumulate_bit(bit)

return unless @position == 8 # if not yet accumulated a byte, keep accumulating

if @accumulator.zero? # NULL terminator received

print "\n"

else

print @accumulator.chr # print the byte as a character

end

@accumulator = 0 # reset the accumulator

@position = 0 # reset position for the next byte

end

def accumulate_bit(bit)

# The left shift operator (<<) is used to

# shift the bits of the number to the left.

#

# This is equivalent of: (2 ** @position) * bit

@accumulator += (bit << @position)

@position += 1 # move to the next bit position: 0 becomes 1, 1 becomes 2, etc.

end

puts "Process ID: #{Process.pid}"

sleep

Output:

$ ruby sender.rb 96875 Hello, World!

$ ruby sender.rb 96875 You're welcome

$ ruby sender.rb 96875 How are you?

# Receiver

Process ID: 97176

Hello, World!

You're welcome

How are you?

OMG Leandro! That’s amazing!

Amazing, right? We just built an entire communication system between two processes using one of the most primitive methods available: UNIX signals.

The sky’s the limit now! Why not build a full-fledged message broker using this crazy technique?

A modest message broker using UNIX signals

We’ll break down the development into three components:

  1. Broker : the intermediary that routes messages
  2. Consumer : processes that receive messages
  3. Producer : processes that send messages

image

  1. Let’s start with the Broker. It should register itself with the producer, then trap incoming signals, decode them, and enqueue the messages for delivery to consumers via outgoing signals:

#!/usr/bin/env ruby

require_relative 'signal_codec'

require_relative 'consumer'

class Broker

PID = 'broker.pid'.freeze

def initialize

@codec = SignalCodec.new

@queue = Queue.new

@consumer_index = 0

end

def start

register_broker

trap('SIGUSR1') { process_bit(0) }

trap('SIGUSR2') { process_bit(1) }

puts "Broker PID: #{Process.pid}"

puts "Waiting for messages..."

distribute_messages

sleep # Keep alive

end

private

def process_bit(bit)

@codec.accumulate_bit(bit) do |message|

@queue.push(message) unless message.empty?

end

end

def register_broker

File.write(PID, Process.pid)

at_exit { File.delete(PID) if File.exist?(PID) }

end

def distribute_messages

Thread.new do

loop do

sleep 0.1

next if @queue.empty?

consumers = File.exist?(Consumer::FILE) ? File.readlines(Consumer::FILE).map(&:to_i) : []

next if consumers.empty?

message = @queue.pop(true) rescue next

consumer_pid = consumers[@consumer_index % consumers.size]

@consumer_index += 1

puts "[SEND] #{message} → Consumer #{consumer_pid}"

@codec.send_message(message, consumer_pid)

end

end

end

end

if __FILE__ == $0

broker = Broker.new

broker.start

end

  • The broker registers itself
  • Traps incoming signals USR1 (bit 0) and USR2 (bit 1)
  • Enqueues the messages
  • Send messages to consumers using outgoing signals ( USR1 and USR2 too)

Note that we’re using a module called SignalCodec which will be explained soon. Basically this module contains all core components to encode/decode signals and perform bitwise operations.

  1. Now the Consumer implementation:

#!/usr/bin/env ruby

require_relative 'signal_codec'

class Consumer

FILE = 'consumers.txt'.freeze

def initialize

@codec = SignalCodec.new

end

def start

register_consumer

trap('SIGUSR1') { process_bit(0) }

trap('SIGUSR2') { process_bit(1) }

puts "Consumer PID: #{Process.pid}"

puts "Waiting for messages..."

sleep # Keep alive

end

private

def process_bit(bit)

@codec.accumulate_bit(bit) do |message|

puts "[RECEIVE] #{message}"

end

end

def register_consumer

File.open(FILE, 'a') { |f| f.puts Process.pid }

at_exit { deregister_consumer }

end

def deregister_consumer

if File.exist?(FILE)

consumers = File.readlines(FILE).map(&:strip).reject { |pid| pid.to_i == Process.pid }

File.write(FILE, consumers.join("\n"))

end

end

end

if __FILE__ == $0

consumer = Consumer.new

consumer.start

end

  • The consumer starts and registers itself with the broker
  • Consumer then traps incoming signals (bit 0 and bit 1)
  • Decodes and prints messages
  1. Last but not least, the Producer implementation, which is pretty straightforward:

#!/usr/bin/env ruby

require_relative 'signal_codec'

require_relative 'broker'

unless File.exist?(Broker::PID)

abort "Error: Broker not running (#{Broker::PID} not found)"

end

broker_pid = File.read(Broker::PID).strip.to_i

message = ARGV.join(' ')

if message.empty?

puts "Usage: ruby producer.rb <message>"

exit 1

end

codec = SignalCodec.new

puts "Sending: #{message}"

codec.send_message(message, broker_pid)

puts "Message sent to broker (PID: #{broker_pid})"

  • Producer receives a ASCII message from the STDIN
  • Encode and sends the message to the broker via outgoing signals

So far, this architecture should look familiar. Many broker implementations follow these basic foundations.

Of course, production-ready implementations are far more robust than this one. Here, we’re just poking around with hacking and experimentation

The coolest part is the SignalCodec though:

class SignalCodec

SIGNAL_DELAY = 0.001 # Delay between signals to allow processing

def initialize

@accumulator = 0

@position = 0

@buffer = []

end

def accumulate_bit(bit)

@accumulator += (bit << @position)

@position += 1

if @position == 8 # Byte is complete

if @accumulator.zero? # Message complete - NULL terminator

decoded = @buffer.pack("C*").force_encoding('UTF-8')

yield(decoded) if block_given?

@buffer.clear

else

@buffer << @accumulator

end

@position = 0

@accumulator = 0

end

end

def send_message(message, pid)

message.each_byte do |byte|

8.times do |i|

bit = (byte >> i) & 1

signal = bit == 0 ? 'SIGUSR1' : 'SIGUSR2'

Process.kill(signal, pid)

sleep SIGNAL_DELAY

end

end

# Send NULL terminator (0000 0000)

8.times do

Process.kill('SIGUSR1', pid)

sleep SIGNAL_DELAY

end

end

end

If you’ve been following along, this shouldn’t be hard to understand, but I’ll break down how this beautiful piece of code works:

  • The codec is initialized with the bit position at zero, as well as the accumulator
  • A buffer is also initialized to store accumulated bits until a complete byte is formed
  • The accumulate_bit method should be familiar from our earlier implementation, but it now accepts a closure (block) that lets the caller decide what to do with each decoded byte
  • send_message encodes a message into bits and sends them via UNIX signals

Everything in action:

image

How cool, amazing, wonderful, impressive, astonishing is that?

Conclusion

Yes, we built a message broker using nothing but UNIX signals and a bit of Ruby magic. Sure, it’s not production-ready , and you definitely shouldn’t use this in your next startup (please don’t), but that was never the point.

The real takeaway here isn’t the broker itself: it’s understanding how the fundamentals work. We explored binary operations, UNIX signals, and IPC in a hands-on way that most people never bother with.

We took something “useless” and made it work, just for fun. So next time someone asks you about message brokers, you can casually mention that you once built (or saw) one using just two signals. And if they look at you weird, well, that’s their problem. Now go build something equally useless and amazing. The world needs more hackers who experiment just for the fun of it.

Happy hacking!

Apple Lowered the Price of the New MacBook Pro for Most of the Eurpopean Countries Where It No Longer Ships With an Included Power Adapter

Daring Fireball
pxlnv.com
2025-10-20 23:05:19
Nick Heer, at Pixel Envy: First of all, the dollar is not the currency in any of these countries. Second, the charger in European countries is €65, which is more like $76 right now. Third, Apple is allowed to bundle an A.C. adapter, it just needs to offer an option to not include it. Practical...
Original Article

Are you outraged? Have you not heard? Apple updated its entry-level MacBook Pro with a new M5 chip, and across Europe, it does not ship with an A.C. adapter in the box as standard any more. It still comes with a USB-C to MagSafe cable, and you can add an adapter at checkout, but those meddling E.U. regulators have forced Apple to do something stupid and customer-unfriendly again . Right?

William Gallagher , of AppleInsider , gets it wrong:

Don’t blame Apple this time — if you’re in the European Union or the UK, your new M5 14-inch MacBook Pro or iPad Pro may cost you $70 extra because Apple isn’t allowed to bundle a charger.

First of all, the dollar is not the currency in any of these countries. Second, the charger in European countries is €65, which is more like $76 right now. Third, Apple is allowed to bundle an A.C. adapter, it just needs to offer an option to not include it. Fourth, and most important, is that the new MacBook Pro is less expensive in nearly every region in which the A.C. adapter is now a configure-to-order option — even after adding the adapter.

In Ireland, the MacBook Pro used to start at €1,949; it now starts at €1,849; in France, it was €1,899 , and it is now €1,799 . As mentioned, the adapter is €65, making these new Macs €35 less with a comparable configuration. The same is true in each Euro-currency country I checked: Germany, Italy, and Spain all received a €100 price cut if you do not want an A.C. adapter, and a €35 price cut if you do.

It is not just countries that use the Euro receiving cuts. In Norway, the new MacBook Pro starts at 2,000 krone less than the one it replaces, and a charger is 849 krone. In Hungary, it is 50,000 forint less, with a charger costing about 30,000 forint. There are some exceptions, too. In Switzerland, the new models are 50 francs less, but a charger is 59 francs. And in the U.K., there is no price adjustment, even though the charger is a configure-to-order option there, too.

Countries with a charger in the box, on the other hand, see no such price adjustment, at least for the ones I have checked. The new M5 model starts at the same price as the M4 it replaces in Canada, Japan, Singapore, and the United States. (For the sake of brevity and because not all of these pages have been recently crawled by the Internet Archive, I have not included links to each comparison. I welcome checking my work, however, and would appreciate an email if I missed an interesting price change.)

Maybe Apple was already planning a €100 price cut for these new models. The M4 was €100 less expensive than the M3 it replaced, for example, so it is plausible. That is something we simply cannot know. What we do know for certain is that these new MacBook Pros might not come with an A.C. adapter, but even if someone adds one at checkout, it still costs less in most places with this option.

Gallagher:

It doesn’t appear that Apple has cut prices of the MacBook Pro or iPad Pro to match, either. That can’t be proven, though, because at least with the UK, Apple generally does currency conversion just by swapping symbols.

It can be proven if you bother to put in thirty minutes’ work.

Joe Rossignol , of MacRumors , also gets it a little wrong:

According to the European Union law database, Apple could have let customers in Europe decide whether they wanted to have a charger included in the box or not, but the company has ultimately decided to not include one whatsoever: […]

A customer can, in fact, choose to add an A.C. adapter when they order their Mac.

More on the Lack of an Included Charger With New M5 MacBooks in Europe, Including the U.K. and Norway

Daring Fireball
daringfireball.net
2025-10-20 22:48:30
While poking fun at EU regulations leading Apple not to include a power adapter with the new M5 MacBook Pro across Europe, I wondered why the U.K. — which left the EU five years ago — was affected. DF reader CA wrote, via email: We did indeed leave the EU, but remain aligned to some of their sta...
Original Article
M5 MacBook Pro Does Not Include a Charger in the Box in Europe

Joe Rossignol, MacRumors:

The new 14-inch MacBook Pro with an M5 chip does not include a charger in the box in European countries, including the U.K., Ireland, Germany, Italy, France, Spain, the Netherlands, Norway, and others, according to Apple’s online store. In the U.S. and all other countries outside of Europe, the new MacBook Pro comes with Apple’s 70W USB-C Power Adapter, but European customers miss out.

Apple has gradually stopped including chargers with many products over the years — a decision it has attributed to its environmental goals.

In this case, an Apple spokesperson told French website Numerama’s Nicolas Lellouche that the decision to not include a charger with this particular MacBook Pro was made in anticipation of a European regulation that will require Apple to provide customers with the option to purchase certain devices without a charger in the box, starting in April.

I’m not sure why there’s no power adapter in the box in the UK ( I double-checked ). The cited regulation is for the EU, and the UK, rather famously, left the EU in 2020.

But, still, amazing stuff continues to happen in Europe.

Thursday, 16 October 2025

Today is when the Amazon brain drain sent AWS down the spout

Hacker News
www.theregister.com
2025-10-20 21:50:03
Comments...
Original Article

column "It's always DNS" is a long-standing sysadmin saw, and with good reason: a disproportionate number of outages are at their heart DNS issues. And so today, as AWS is still repairing its downed cloud as this article goes to press, it becomes clear that the culprit is once again DNS. But if you or I know this, AWS certainly does.

And so, a quiet suspicion starts to circulate: where have the senior AWS engineers who've been to this dance before gone? And the answer increasingly is that they've left the building — taking decades of hard-won institutional knowledge about how AWS's systems work at scale right along with them.

What happened?

AWS reports that on October 20, at 12:11 AM PDT, it began investigating “increased error rates and latencies for multiple AWS services in the US-EAST-1 Region.” About an hour later, at 1:26 AM, the company confirmed “significant error rates for requests made to the DynamoDB endpoint” in that region. By 2:01 AM, engineers had identified DNS resolution of the DynamoDB API endpoint for US-EAST-1 as the likely root cause, which led to cascading failures for most other things in that region. DynamoDB is a "foundational service" upon which a whole mess of other AWS services rely, so the blast radius for an outage touching this thing can be huge.

As a result, much of the internet stopped working : banking, gaming, social media, government services, buying things I don't need on Amazon.com itself, etc.

AWS has given increasing levels of detail, as is their tradition, when outages strike, and as new information comes to light. Reading through it, one really gets the sense that it took them 75 minutes to go from "things are breaking" to "we've narrowed it down to a single service endpoint, but are still researching," which is something of a bitter pill to swallow. To be clear: I've seen zero signs that this stems from a lack of transparency, and every indication that they legitimately did not know what was breaking for a patently absurd length of time.

Note that for those 75 minutes, visitors to the AWS status page (reasonably wondering why their websites and other workloads had just burned down and crashed into the sea) were met with an "all is well!" default response. Ah well, it's not as if AWS had previously called out slow outage notification times as an area for improvement. Multiple times even. We can keep doing this if you'd like.

The prophecy

AWS is very, very good at infrastructure. You can tell this is a true statement by the fact that a single one of their 38 regions going down (albeit a very important region!) causes this kind of attention, as opposed to it being "just another Monday outage." At AWS's scale, all of their issues are complex; this isn't going to be a simple issue that someone should have caught, just because they've already hit similar issues years ago and ironed out the kinks in their resilience story.

Once you reach a certain point of scale, there are no simple problems left. What's more concerning to me is the way it seems AWS has been flailing all day trying to run this one to ground. Suddenly, I'm reminded of something I had tried very hard to forget.

At the end of 2023, Justin Garrison left AWS and roasted them on his way out the door . He stated that AWS had seen an increase in Large Scale Events (or LSEs), and predicted significant outages in 2024. It would seem that he discounted the power of inertia, but the pace of senior AWS departures certainly hasn't slowed — and now, with an outage like this, one is forced to wonder whether those departures are themselves a contributing factor.

You can hire a bunch of very smart people who will explain how DNS works at a deep technical level (or you can hire me, who will incorrect you by explaining that it's a database), but the one thing you can't hire for is the person who remembers that when DNS starts getting wonky, check that seemingly unrelated system in the corner, because it's historically played a contributing role to some outages of yesteryear.

When that tribal knowledge departs, you're left having to reinvent an awful lot of in-house expertise that didn't want to participate in your RTO games, or play Layoff Roulette yet again this cycle. This doesn't impact your service reliability — until one day it very much does, in spectacular fashion. I suspect that day is today.

The talent drain evidence

This is The Register , a respected journalistic outlet. As a result, I know that if I publish this piece as it stands now, an AWS PR flak will appear as if by magic, waving their hands, insisting that "there is no talent exodus at AWS," a la Baghdad Bob. Therefore, let me forestall that time-wasting enterprise with some data.

  • It is a fact that there have been 27,000+ Amazonians impacted by layoffs between 2022 and 2024, continuing into 2025. It's hard to know how many of these were AWS versus other parts of its Amazon parent, because the company is notoriously tight-lipped about staffing issues.
  • Internal documents reportedly say that Amazon suffers from 69 percent to 81 percent regretted attrition across all employment levels. In other words, "people quitting who we wish didn't."
  • The internet is full of anecdata of senior Amazonians lamenting the hamfisted approach of their Return to Office initiative; experts have weighed in citing similar concerns.

If you were one of the early employees who built these systems, the world is your oyster. There's little reason to remain at a company that increasingly demonstrates apparent disdain for your expertise.

My take

This is a tipping point moment. Increasingly, it seems that the talent who understood the deep failure modes is gone. The new, leaner, presumably less expensive teams lack the institutional knowledge needed to, if not prevent these outages in the first place, significantly reduce the time to detection and recovery. Remember, there was a time when Amazon's "Frugality" leadership principle meant doing more with less, not doing everything with basically nothing. AWS's operational strength was built on redundant, experienced people, and when you cut to the bone, basic things start breaking.

I want to be very clear on one last point. This isn't about the technology being old. It's about the people maintaining it being new. If I had to guess what happens next, the market will forgive AWS this time, but the pattern will continue.

AWS will almost certainly say this was an "isolated incident," but when you've hollowed out your engineering ranks, every incident becomes more likely. The next outage is already brewing. It's just a matter of which understaffed team trips over which edge case first, because the chickens are coming home to roost. ®

[$] The RubyGems.org takeover

Linux Weekly News
lwn.net
2025-10-20 21:45:54
In September, a group of long-time maintainers of Ruby packaging tools projects had their GitHub privileges for revoked by nonprofit corporation Ruby Central in what many people are calling a hostile takeover. Ruby Central and its board members have issued several public statements that have, so far...
Original Article

The page you have tried to view ( The RubyGems.org takeover ) is currently available to LWN subscribers only.

Reader subscriptions are a necessary way to fund the continued existence of LWN and the quality of its content.

If you are already an LWN.net subscriber, please log in with the form below to read this content.

Please consider subscribing to LWN . An LWN subscription provides numerous benefits, including access to restricted content and the warm feeling of knowing that you are helping to keep LWN alive.

(Alternatively, this item will become freely available on October 30, 2025)

My 4 Train Broke in Half

hellgate
hellgatenyc.com
2025-10-20 21:40:35
One of my fellow passengers was right: "You can't make this shit up."...
Original Article

I can't count the number of times I've found myself stranded underground on a train stalled between subway stations, bored and testy, with spotty cell service, wondering whether this is the time I'm going to get truly stuck. The train always seems to jolt back to life right as I'm composing a text message catastrophizing the situation, some variation of Fuck I'm sorry I'm going to be late the [insert line here] train is ruining my fucking life lmfao .

On Sunday evening, I was on a sparsely populated Brooklyn-bound 4 train, heading back from an afternoon at Woodlawn Cemetery for a mausoleum tour (shout out to Open House New York!) when things slumped to a familiar halt between 161st Street-Yankee Stadium and 149th Street. An unusually clear announcement informed us that the train was having a mechanical issue. Most people barely looked up from their phones, or bothered to take their headphones out. I went back to reading my copy of "Death in Venice," and my boyfriend went back to looking at his phone; neither of us really reacted when the same announcement happened again, five or so minutes later. An MTA worker marched through the car, with his radio buzzing. I thought I heard the word "split," and closed my book. It wasn't until the third announcement that the entire car perked up.

As soon as I processed what might be happening, I started recording:

audio-thumbnail

Give us your email to read the full story

Sign up now for our free newsletters.

Sign up

How I accidentally became PureGym's unofficial Apple Wallet developer

Lobsters
drobinin.com
2025-10-20 21:37:26
Comments...
Original Article

47 seconds: A villain origin story

Wednesday, 11:15 AM. I'm at the PureGym entrance doing the universal gym app dance. Phone out, one bar of signal that immediately gives up because apparently the building is wrapped in aluminum foil. Connect to PureGym WiFi. Wait for it to actually connect. Open the app.

"Warming up..."

The entire feed loads—suggested workouts I'll never do, my "recent" activities from 2024, a motivational quote that makes me question humanity and myself. Oh, and would I like to enable push notifications? No. How about this special offer for personal training? Also no.

Finally, I can tap "Gym access." Loading spinner. The QR code materializes eventually like it's 2000 and I'm downloading a JPEG on dial-up. Someone behind me clears their throat. Scan at the pod, the barrier grunts open.

Total time: 47 seconds.

PureGym app loading Apple Wallet pass

Before: 47 seconds of app archaeology After: 3 seconds of magic

I do this 6 days a week. That's 282 seconds of my life every week just trying to enter a building. As someone who's spent more than a decade optimising iOS apps for a living, watching myself suffer through this UX disaster daily was starting to feel like cosmic punishment.

Meanwhile, Amazon Fresh lets people walk out with £200 of groceries without even touching their phone. But here I am, performing a full mobile obstacle course just to lift heavy things and put them back down.

The eight-year PIN mystery

Before we dive into the technical meat, let me share the security contradiction that made me question reality itself.

I've been using the same 8-digit PIN code at PureGym turnstiles for eight years . EIGHT. YEARS. Same code. Never changed. Never expired. It survived COVID, three prime ministers, and the entire rise and fall of NFTs. I could literally tattoo this PIN on my forehead and use it as biometric authentication at this point.

But the QR code in the app? That bad boy refreshes every 60 seconds like it's protecting nuclear launch codes.

Think about this for a second. The physical keypad -- exposed to British weather, coated in a mysterious film of protein shake and regret, probably being livestreamed to TikTok by someone's ring doorbell -- accepts my ancient PIN without question. But the digital QR code needs cryptographic rotation that would make the NSA jealous.

This is peak security theatre. It's like having a £10,000 smart lock on your front door while leaving the windows open.

How I learnt to stop worrying and love mitmproxy

My first approach was embarrassingly naive. "I'll just screenshot the QR code and add it to Apple Wallet as a static image!"

Reader, I actually did this. Created a pretty pass, added my screenshot, proudly walked to the gym on Wednesday with my "innovation." The scanner beeped angrily at me like I'd personally insulted its mother. The screenshot from Monday was already dead.

That's when I discovered PureGym's QR codes are dynamic -- while technically they expire after about a week, the app refreshes them every minute for some reason.

But this raised an important question: if security was so critical, why does my prehistoric PIN still work everywhere?

Time to see if I was alone in my frustration. Now, most people would start by Googling. But we're engineers. We go straight to the source of truth: GitHub.

Searching for "PureGym" on GitHub is like opening a time capsule of developer frustration. Beautiful.

But nobody had cracked Apple Wallet. The gauntlet was thrown.

These digital archaeologists had unearthed the authentication endpoint:

POST https://auth.puregym.com/connect/token

grant_type=password&username={EMAIL}&password={PIN}&scope=pgcapi offline_access

Authorization: Basic cm8uY2xpZW50Og==

Decode that Base64 and you get ro.client: -- yes, that's a colon at the end. No secret. Just vibes. It's like leaving your house key under the doormat, but the door is also unlocked.

The crown jewel? Your 8-digit gym door PIN is your API password and you most likely didn't set it yourself. The same PIN that hasn't changed since the iPhone 8 was cutting-edge technology. I'm starting to think PureGym's security model was designed by someone with a very specific sense of humour.

Proxying my way to enlightenment

Time to see what the app actually does.

One could notice that at this point the PureGym API is basically crowd-sourced documentation all over Github, but I wanted to see the actual flow myself, so I needed to intercept its traffic, which meant proxy tools. The usual suspects: Charles Proxy, mitmproxy, Proxyman -- pick your poison.

Setting up a proxy in 2025 should be trivial, but mobile apps always had trust issues. Some certificate dancing later (shoutout to SSL Kill Switch and objection for making this bearable), I had traffic flowing:

GET https://capi.puregym.com/api/v2/member/qrcode # Note how auth is not versioned, this is v2, but all other apis are mostly v1s

{
  "QrCode":"exerp:checkin:<part1>-<part2>-<part3>",
  "RefreshAt":"2025-08-14T12:08:27.4349618Z",
  "ExpiresAt":"2025-08-21T12:02:27.4349618Z",
  "RefreshIn":"0:01:00",
  "ExpiresIn":"167:55:00"
}

# part1 is some static id, part2 is the now() timestamp, and part3 seems to change every time so probably is a salt

Look at that response. It literally tells me when to refresh despite a different expiration time. It's like the API is saying "Hey buddy, I know this is odd, but can you poll me every minute? Thanks, love you too."

PassKit: Apple's forgotten child

Here's something many iOS developers don't know: Apple Wallet passes aren't just static cards. They're basically tiny apps that can update themselves, send push notifications, and even react to your location. The framework is called PassKit, it's been around since iOS 6, and approximately 7 people have ever used it properly (definitely not me).

Building a pass requires:

  1. A JSON manifest (easy)
  2. Images in very specific dimensions (annoying)
  3. Cryptographic signatures (hello darkness my old friend)
  4. A web service for updates (time to write some Swift)

The certificate signing process deserves its own horror story but I will save it for my memoirs. As we all know, Apple hates us , and this particular framework is a good example of that. You need:

  • A Pass Type ID certificate from Apple Developer Portal
  • Apple's WWDR intermediate certificate
  • OpenSSL commands that look like you're summoning a demon

After three hours of certificate wrestling, I had a working .pkpass file. It's just a ZIP archive with delusions of grandeur, but when it works, it feels like magic (and when it doesn't, you can't easily tell because Xcode Simulator seems to treat Wallet passes differently from the actual devices).

The Swift backend nobody asked for

Most PassKit tutorials use Node.js. But I'm a Swift developer and believe in suffering, so I built the backend in Vapor.

Why? Because I could. And for type safety. But mostly because I could.

// The endpoint that makes the magic happen
app.post("v1", "devices", ":deviceId", "registrations", ":passType", ":serial") { req async in
// Device wants updates! Store that push token like it's bitcoin in 2010
let pushToken = req.body["pushToken"]
// ... store it somewhere
return Response(status: .created)
}

// The "oh you want updates?" endpoint
app.get("v1", "passes", ":passType", ":serial") { req async -> Response in
// Check if anything changed
if nothingChanged {
return Response(status: .notModified) // 304 baby!
}

// Generate fresh pass with new QR
let freshPass = try await generateNewPass()
return Response(body: freshPass, headers: ["Content-Type": "application/vnd.apple.pkpass"])
}

The best part of working with Apple Walet passes is this: when the QR code needs updating, I send a silent push notification. The device wakes up, fetches the new pass, updates the QR code, and the user never knows. It's like having a butler who refreshes your gym pass while you sleep.

The Great Gym Heist

Apple Wallet passes can appear on your lock screen when you're near specific locations. So I figured, as I travel around the UK a lot, I might as well stop by random PureGyms for an occasional bench press, so I needed coordinates for every PureGym in the UK. There are some location lists on the official website, but no coordinates.

Then I found it. The motherlode:

GET https://capi.puregym.com/api/v1/gyms/ # Back to v1, but judging by lots of null fields it used to do more stuff

// Lots of gyms
[
{
"id": 318,
"name": "Yeovil Houndstone Retail Park",
"latitude": "50.945449",
"longitude": "-2.671796",
"email": {...},
"gymOpeningHours": {...},
// ... more fields including something called "reasonsToJoin"
}
]

I scraped all locations just in case, and add the nearest to the generated pass. Now my membership appears on the lock screen whenever I'm near PureGyms in the country.

The only downside? My local PureGym is in a shopping center. Now my pass appears every time I go to M&S. Buying milk has become a moral dilemma -- am I shopping or avoiding the gym that's literally behind the wall?

The Apple Watch support

Plot twist: Apple Wallet passes automatically sync to Apple Watch. No extra code needed. Double-tap the side button, scan with my wrist, I'm in.

Time: 3 seconds.

That's a 93% reduction in entry time. As someone who optimises code for a living, this number makes me inappropriately happy.

Numbers that matter (to me)

Let's talk metrics, because what's a technical blog post without unnecessary statistics:

  • Original app time : 47 seconds
  • Apple Wallet time : 3 seconds
  • Time saved per visit : 44 seconds
  • Visits per week : 6 (I skip Sundays, God rested, so do I)
  • Annual time saved : 3.8 hours
  • Number of times other gym members have asked "is there an app for that?" : 23
  • Number of times I've had to explain this isn't official : 23
  • Number of times they've asked me to make them one : 23
  • Number of times I've had to explain copyright law : 23

Bonus round

Since I was already deep in PureGym's API, I figured why not add gym capacity to my Home Assistant setup? The attendance endpoint is beautiful:

GET https://capi.puregym.com/api/v1/gyms/234/attendance

{
"totalPeopleInGym": 29, // They all want my bench
"totalPeopleInClasses": 0 // No one wants to do pilates
"maximumCapacity": 0, // This particular gym is huuuge (it's not actually, I am yet to find one that actually reports this value)
}

I made a few cards for Home Assistant dashboard that hangs next to my door that definitely help me go to the gym more (or to justify not going because it's too busy):

I was also wondering if I culd prove, with data, that I'm avoiding the gym efficiently but actually I usually come there at their least busy times so I must be a natural.

The uncomfortable engineering truth

This whole project took a weekend. A weekend to solve a problem that PureGym has had years to address.

But here's what I learnt: sometimes the best features come from outside the organization. PureGym probably has a roadmap, sprint planning, and very good reasons for not implementing Apple Wallet. Maybe it's not a priority. Maybe they have data showing only 0.3% of users would use it. Maybe their KPIs are based on the number of online classes previewed in the app, and forcing users to see them every time the app loads secures someone's annual bonus.

Maybe someone in product genuinely believes the current app experience is fine (bless their heart).

But users don't care about your Jira tickets. They care about experiences. And sometimes, those of us on the outside, armed with mitmproxy and too much free time, can prototype those experiences faster than a planning meeting can be scheduled.

The ethical elephant

Am I technically violating terms of service? Probably. Could PureGym break this tomorrow? Absolutely. Should I be doing this? Gestures vaguely at the 47-second entry time.

I follow some ground rules:

  • Cache everything (in case one's API is held together with prayers)
  • No automation beyond personal use
  • Don't share the actual service (hence the 23 explanations)
  • Be ready for it to break any moment

Should I package this up properly? Probably not: it's a proof of concept that solves my specific problem. Plus, PureGym would probably just hire me to shut it down, and I'm not ready for that level of corporate responsibility.

What's next?

The possibilities are endless and mostly ridiculous:
Shame Notifications : "You were literally 100 meters from the gym and walked past it"
SaaS Dreams : Package this properly, get sued immediately, become a cautionary tale at product management conferences

For now, I'm enjoying my 3-second gym entry and the knowledge that I've optimised away 3.8 hours of annual friction. In the grand scheme of things, it's meaningless. But in the petty scheme of things? It's everything.

And if you're from PureGym reading this—let's talk. I've already done half the work for you. 😉

PureGym app loading Apple Wallet pass

Before: 47 seconds of app archaeology After: 3 seconds of wrist magic


Working on something that needs iOS expertise, reverse engineering skills, or someone who obsesses over shaving 44 seconds off everyday interactions? I build exceptional mobile experiences and occasionally solve problems that shouldn't exist. Drop me a line at work@drobinin.com .

No Tricks, Just Treats 🎃 EFF’s Halloween Signal Stickers Are Here!

Electronic Frontier Foundation
www.eff.org
2025-10-20 21:37:09
EFF usually warns of new horrors threatening your rights online, but this Halloween we’ve summoned a few of our own we’d like to share.  Our new Signal Sticker Pack highlights some creatures—both mythical and terrifying—conjured up by our designers for you to share this spooky season. If you’re new ...
Original Article

EFF usually warns of new horrors threatening your rights online, but this Halloween we’ve summoned a few of our own we’d like to share.  Our new Signal Sticker Pack highlights some creatures—both mythical and terrifying—conjured up by our designers for you to share this spooky season.

If you’re new to Signal , it's a free and secure messaging app built by the nonprofit Signal Foundation at the forefront of defending user privacy. While chatting privately, you can add some seasonal flair with Signal Stickers , and rest assured: friends receiving them get the full sticker pack fully encrypted, safe from prying eyes and lurking spirits.

How To Get and Share Signal Stickers

On any mobile device or desktop with the Signal app installed, you can simply click the button below.

Download EFF's Signal Stickers

To share Frights and Rights

You can also paste the sticker link directly into a signal chat, and then tap it to download the pack directly to the app.

Once they’re installed, they are even easier to share—simply open a chat, tap the sticker menu on your keyboard, and send one of EFF’s spooky stickers.  They’ll then be asked if they’d like to also have the sticker pack.

All of this works without any third parties knowing what sticker packs you have or whom you shared them with. Our little ghosts and ghouls are just between us.

Meet The Encryptids

 a banshee, bigfoot, and a ghost

These familiar champions of digital rights— The Encryptids —are back! Don’t let their monstrous looks fool you; each one advocates for privacy, security, and a dash of weirdness in their own way. Whether they’re shouting about online anonymity or the importance of interoperability , they’re ready to help you share your love for digital rights. Learn more about their stories here , and you can even grab a bigfoot pin to let everyone know that privacy is a “human” right.

Street-Level Surveillance Monsters

 a body worn camera, face recognition spider, and flying wraith

On a cool autumn night, you might be on the lookout for ghosts and ghouls from your favorite horror flicks—but in the real world, there are far scarier monsters lurking in the dark: police surveillance technologies . Often hidden in plain sight, these tools quietly watch from the shadows and are hard to spot . That’s why we’ve given these tools the hideous faces they deserve in our Street-Level Surveillance Monsters series , ready to scare (and inform) your loved ones.

Copyright Creatures

 including a copyright thief, copyright robots, and a troll

Ask any online creator and they’ll tell you: few things are scarier than a copyright takedown. From unfair DMCA claims and demonetization to frivolous lawsuits designed to intimidate people into a hefty payment, the creeping expansion of copyright can inspire as much dread as any monster on the big screen. That’s why this pack includes a few trolls and creeps straight from a broken copyright system—where profit haunts innovation.

To that end, all of EFF’s work (including these stickers) are under an open CC-BY License , free for you to use and remix as you see fit.

Happy Haunting Everybody!

These frights may disappear with your message , but the fights persist. That’s why we’re so grateful to EFF supporters for helping us make the digital world a little more weird and a little less scary. You can become a member today and grab some gear to show your support. Happy Halloween!

DONATE TODAY

Join EFF Lists

Emacs time-zones

Lobsters
xenodium.com
2025-10-20 21:26:49
Comments...
Original Article

Emacs ships with a perfectly functional world clock, available via M-x world-clock . Having said that, there are two things I wish it had:

  1. A quick way to interactively add any city (bonus points for fuzzy search).
  2. An easy way to shift the time back and forth.

As far as I can tell, these are neither available nor possible on the built-in world-clock (please correct me if otherwise), so when my friend across the world recently asked me for the best time to meet, I knew this was the last nudge I needed to get this done.

With that, I give you M-x time-zones (now on GitHub ).

There isn't much to talk about other than time-zones accomplishes the above tasks very easily without resorting to writing elisp nor accessing via customize, which I seldom use.

As I mentioned, time-zones is on GitHub if you'd like to give it a spin. It's super fresh, so please report any issues. Hope you like it.

Make it all sustainable

Reckon time-zones will be useful to you? Enjoying this blog or my projects ? I am an indie dev. Help make it sustainable by ✨ sponsoring

Need a blog? I can help with that . Maybe buy my iOS apps too ;)

powered by LMNO.lol

privacy policy · terms of service

Manage Your Email Subscriptions

OrganizingUp
convergencemag.com
2025-10-20 21:18:23
…...

It Kind of Seems Like Peter Thiel Is Losing It

Hacker News
futurism.com
2025-10-20 21:14:52
Comments...
Original Article

PayPal and Palantir cofounder Peter Thiel has long held outrageous ideological views — and imposed them on the world using his billions of dollars in wealth to do things like build ICE’s massive surveillance panopticon , support the rise of the “New Right ,” and destroying Gawker Media by secretly funding Hulk Hogan’s lawsuit against it .

His beliefs during that period — an explosive mixture of race science , libertarianism , and rugged capitalism — might strike many people as noxious, but at least they followed the internal logic of an extremely powerful tech billionaire. Now, though, leaked information is making it sound as though he’s actually losing his grip.

For years, rumors have swirled that Thiel has developed some kind of obsession with the “Antichrist,” an entity referenced in the Bible’s Book of John and Thessalonians who will supposedly arise during the end times and claim to be God, before ultimately being defeated by Jesus.

If you’re wondering whether Thiel is looking at the legend of the Antichrist as some kind of high-minded metaphor for his political convictions, The Guardian obtained audio this month of Thiel talking about the topic during a recent private appearance — and unfortunately, it’s immediately clear that he believes that this mythical figure from the pages of an ancient holy book is a literal entity who will walk the Earth during some kind of hellish apocalypse.

“Some people think of [the Antichrist] as a type of very bad person,” Thiel clarified during his remarks. “Sometimes it’s used more generally as a spiritual descriptor of the forces of evil. What I will focus on is the most common and most dramatic interpretation of Antichrist: an evil king or tyrant or anti-messiah who appears in the end times.”

In fact, Thiel said during the leaked lecture that he’s suspicious the Antichrist is already among us. He even mentioned some possible suspects: it could be someone like climate activist Greta Thunberg, he suggested, or AI critic Eliezer Yudkowsky — both of whom just happen to be his ideological opponents.

There’s something mortifying about this spectacle: a grown man, supposedly a paragon of rational thinking, scratching around in primeval religious texts to terrify himself with superstitious prophecies, which the people around him entertain because he’s wealthy and powerful. There’s an old expression, often credited to the Newbery Medal-winning children’s author Ellen Raskin: “the poor are crazy, the rich just eccentric.”

There are even moments when it sounds as though Thiel might be indulging in something akin to the disordered thinking associated with serious mental illness, making wild associations between numbers and concepts that defy logical thinking — outbursts that would prompt talk of psychiatric intervention if a friend or loved one were to express them. (Preoccupation with the Antichrist can also be a symptom of severe mental illness .)

“Christ only lived to age 33 and became history’s greatest man,” Thiel pondered at one point during the leaked audio. “The Antichrist has to somehow outdo this. I don’t want to be way too literal on the 33 number — I’d rather stress the Antichrist will be a youthful conqueror; maybe in our gerontocracy, 66 is the new 33. But something like these numbers do occur almost mystically through a number of different contexts.”

In a sense, it’s easy to empathize with someone like Thiel: able to afford any material pleasure he wants, likely long isolated from any real friends, and surrounded by employees effectively paid to agree that everything he utters is genius, even if it makes no sense at all.

Thiel, though, happens to have immense sway over the lives of people around the world via his various foundations , corporations , and political connections . In that sense, whatever’s going on with him isn’t just sad — for someone as powerful as he is, it’s scary.

Perhaps the most ludicrous part of Thiel’s strange talk about the Antichrist is that there’s an obvious rejoinder to it: wouldn’t an actual Antichrist who was bent on world domination eagerly embrace the surveillance and military tech that Thiel is so heavily invested in? For that matter, doesn’t the Book of Revelations say that the enemy of God will seek to control commerce, much like Thiel made his fortune doing at PayPal?

In an agonizing exchange early this summer, the New York Times ‘ Ross Douthat posed that exact question.

“You’re an investor in AI,” Douthat says. “You’re deeply invested in Palantir, in military technology, in technologies of surveillance and technologies of warfare and so on. And it just seems to me that when you tell me a story about the Antichrist coming to power and using the fear of technological change to impose order on the world, I feel like that Antichrist would maybe be using the tools that you are building… Isn’t that a concern? Wouldn’t that be the irony of history, that the man publicly worrying about the Antichrist accidentally hastens his or her arrival?”

“Look, there are all these different scenarios,” Thiel sputtered, seemingly caught off guard by the question. “I obviously don’t think that that’s what I’m doing.”

But of course, that’s exactly what an Antichrist would say.

More on Peter Thiel: Tech Billionaires Accused of Quietly Working to Implement “Corporate Dictatorship”

Measuring Engineering Productivity

Lobsters
justoffbyone.com
2025-10-20 21:13:31
Comments...
Original Article

Engineers love to measure things: latency, throughput, error rates. But try to measure their own work? Then metrics become the enemy.

I get it. I’ve seen enough terrible systems to understand the resistance. Most systems fail, thanks to Goodhart’s Law . Measure the lines of code written, and people jack up verbosity. Measure the number of PRs, and people start dividing up their PRs.

Yet, it all feels like you have to measure something . Every manager knows that some people are more productive than others. Clearly, there’s a way to ascertain productivity. And even a given individual has their ebss and flows. How do you know if someone’s doing better than yesterday or if they are falling off a cliff?

Why Measure Things?

Here’s the uncomfortable truth: you can measure engineering productivity. The goal is not to have a purely objective system where you stack rank people but rather to make it obvious to everyone, including the people being measured, the level of their output, especially compared to people around them.

Yes, outcomes matter most. I want to ship features that users love, fix bugs that matter, and build systems that scale. But outcomes are often a function of output . You can’t improve what you can’t see, and you can’t see what you don’t measure.

People repeatedly asked me how I measured productivity at Felt . Over the course of 4.5 years, I arrived at a system that seemed to work. It encouraged and motivated the team to release products at an incredible pace. It allowed me to see where everyone stood and helped make adjustments where needed.

$$f(\text{output}) = \text{outcome}$$

The North Star: Don’t Burden Your Engineers

The most important principle that guided everything: the bulk of the “paperwork” should fall on management, not individual contributors. Of course, it’s important to keep the busywork to a minimum for productivity’s sake. But more importantly, if you want a system that people won’t try to game, overtly or covertly, you want a system people won’t hate.

$$\frac{\text{Manager Time}}{\text{Engineer Time}} \gg 1$$

The System

Practice Frequency IC Overhead What It Measures
Async Standups Daily 5-10 min Day-to-day progress and blockers
Changelogs Weekly None Volume and type of work shipped
Manager 1:1s Weekly None Context and patterns over time
All-Hands Weekly ~10 min Team accomplishments
PR Notifications Real-time None Ambient awareness
Deploy Verifications Real-time 2-5 min Ownership and quality

Daily

Async Standups

I did not have daily standups. However, every engineer was required to post a message in a dedicated Slack channel (called #standups ) with three things:

  1. What I did yesterday
  2. What I plan to do today
  3. Any blockers.

That’s it. It took every engineer around 5-10 minutes to do this per day.

The engineers were asked to be as succinct as possible. A good standup message (more on this later) would include links to GitHub PRs or Linear issues. Often, most engineers didn’t have any blockers. The key was that I asked managers to actually keep tabs on these messages and notify their reports if they missed posting a message.

Weekly

Changelogs

At the end of every week (generally Friday), I compiled a changelog using GitHub’s built-in Release tool. This release wasn’t particularly meaningful since I built a SaaS product that was continuously deployed, but the changelog was.

Since the Changelog included every engineer’s GitHub username next to the merged PR, it allowed a very quick visual reference to see how many changes from each engineer went in at a given week.

I also categorized the PRs in the changelog (manually, since it predated ChatGPT) into sections such as “Features”, “Bug Fixes”, and “Dev Ex”. This was important because it allowed me to see who was working on what kind of things. Most engineers love to spend time on their tooling, and that’s important for both productivity and personal reasons. But I wanted to make sure people weren’t spending all their time on them.

Manager 1:1s

Component Focus Time Allocation
People Personal growth, career, wellbeing Majority of meeting
Product “What have you been working on?” 10-15 min
Process “Anything we can be doing to make you more productive or happier?” Varies

Every individual contributor (and really, every employee, including myself) had a weekly 1:1 with their manager. While this meeting was mostly not about their work, it was a part of it. A good 1:1 format I’ve used is “People, Product, Process.”

I often started with the “Product” by asking “What have you been working on?” The key was to make sure it was me, the manager, taking notes. I used a shared Notion document that had an H2 headline for each week and H3 headlines for each item of the “People, Product, Process” trifecta.

This way, it was very easy for me to see what people said they were working on week over week: all I had to do was open up a few weeks’ worth of H2s and compare the “Product” sections under each. Since this Notion doc was shared with my report and I took notes as we spoke, there was no debate.

This also captured context that didn’t show up in commits. Why did you spend three days refactoring that module? What made you decide to investigate that performance issue? What’s blocking you from finishing that feature?

Again, this “Product” section was often the smallest part of a 1:1—not more than 10 or 15 minutes of a 45-minute meeting. It’s also why I often did this part first.

All-Hands

Every week, we had a company All-Hands for around an hour. A 15-minute section of that meeting was set aside for each team in the company to present their work from the past week.

At the beginning of each All-Hands, teams would break out into their own rooms and would have 7-8 minutes to put together a quick presentation (using our own tool) to paste in screenshots, graphics, or whatever they thought was useful, and pick a speaker.

Later, when the entire team got together, the designated speaker had 2-3 minutes to go through their team’s work in front of the whole company.

This format not only allowed cross-team awareness, but also created artifacts I, as management, could go through week over week. It also created visibility through accountability. When you know you’ll be presenting your work to the company, you ship things you’re proud of.

Real-Time

PR Notifications

I piped all the merged GitHub PRs into a public Slack channel. This wasn’t always useful, and after a few more engineers joined, it became more “watching the Matrix” than anything. Not useful, but fun. Fun, but not useful.

Yet, while I never used this as “evidence” of anything, it created an ambient sense of what was going on and created a sense of movement.

Deploy Verifications

Over time, we developed a sophisticated system where every PR would get its own preview environment and a single-click login from the Github PR page itself. This, combined with a battery of CI tests (and things like blue-green deploys , automated rollbacks and such) meant that a merged PR would get deployed automatically to production.

However, to make sure things actually worked as designed (and because we started having bugs), I required our engineers to verify their PRs in production and produce a proof of that verification.

Again, we automated almost all of this—after each deploy, the toolong would create a new Slack message in a designated room and tag the person (in Slack) whose PR just got deployed. All the engineers had to do was verify their PR (by posting a message under that thread on how they verified it) and add a emoji.

Similar to the PR notifications, this system wasn’t used as evidence as much as it created a culture of high cadence. It became very obvious how many deploys were going out and whose PRs were in them, which became motivating for most people.

Other Notes

As you can see, while there were many parts to this system, it required practically zero overhead from individual contributors.

Aside from people writing their standup messages, the work was either done by management or things engineers should have been doing anyway as part of a continuous delivery cycle.

Before you go out and implement this system at your company, it behooves me to share some thoughts. I hope what I’m going to share here is obvious to most people, but well, more on that in a second.

Start Small

Most of what you’ve read came together over the course of 2 years. I did not start with 18 engineers divided across 3 teams. The first incarnation was simply the standup messages and the real-time deploy messages on Slack. Soon, we added our All-Hands presentations. The rest came in as I needed them.

If you tried to implement all of this at once, you’d get a mutiny and your engineers would be right. Start with one or two. It will feel annoying for a bit. Power through it, and if it works, consider the rest.

Context Matters Enormously

Needless to say, this system worked for one company as it grew from 2 to 25 people. Engineering was divided into 2 or 3 teams and mostly worked in the same timezone, albeit remotely. We built a SaaS product that was delivered continuously, sometimes even doing 25 deploys a day. It worked for us.

Don’t cargo cult this system. Understand the principles—minimal engineer overhead, maximum visibility, automation over process—and adapt them to your context.

Be Extremely Explicit in Your Expectations

When I introduced daily standups, I didn’t just say “post what you’re working on.” I provided examples. Literal examples of good and bad standup messages.

Good standup message:

Yesterday: Finished the OAuth integration, all tests passing.

Today: Starting on the password reset flow.

Blockers: Need design feedback on the reset email template.

Bad standup message:

Yesterday: Working on auth stuff.

Today: Working on auth stuff.

Blockers: None.

The same thing for changelogs. The same thing for All-Hands presentations. I showed people exactly what I wanted to see.

Be more explicit than you think you need to be. What seems obvious to you is not obvious to everyone. Provide examples. Provide templates. Show, don’t just tell.

Be Open to Feedback

As I mentioned, one-third of my 1:1s was about “Process.” I always directly asked people, “Anything we can be doing to make you more productive or happier?” This is where people often shared their dislikes about some of what I just talked about.

Similarly, as part of our regular “Engineering All-Hands” (which I held every 2-3 weeks), we had a “Tools and Processes Retro” every few months. This was another place where people could express their thoughts.

I did not always do what people asked for. Posting daily standup messages (and remind people to post them) was deeply unpopular, but it was required for a remote team. The deploy verifications were another common pain point (due to deploys sometimes getting delayed behind other deploys), so I relaxed the requirements to check them daily versus “immediately after deploys.” You learn and adjust.

It’s Not About the Numbers, but Numbers Matter

You’ll notice that the system wasn’t about the numbers as much as creating a culture of cadence and high-performance output which resulted in quick outcomes.

At the same time, I did sometimes put numbers on things—often as a way to verify my intuitions and assumptions. Whenever I felt like someone was falling behind, I quickly gathered their recent contributions from various artifacts and calibrated against their teammates. I didn’t do anything anyone else couldn’t have done for themselves so there wasn’t room for debate.

You also need to look at the numbers in context; often they will tell you a part of the story. For example, one time numbers told me that someone was working on something really complicated—there were a lot of PRs but not a lot of deploys. That was my hint that I didn’t scope the project correctly. Sheer number of PRs would have been a bad indicator.

And yes, there were also times the numbers told me that someone was falling behind. Since I used practically all public numbers, that made a hard conversation about performance at least slightly easier by grounding things in reality versus feelings.

Here’s what I want you to take away from this: measurement isn’t the enemy. Bad measurement is the enemy.

When you measure the wrong things—lines of code, number of commits, hours logged—you create dysfunction. When you burden engineers with measurement overhead, you reduce productivity instead of improving it and create enemies. When you use metrics as a weapon to punish people, you destroy trust.

$$\text{Trust} = \frac{\text{Transparency} \times \text{Fairness}}{\text{Surveillance}}$$

But when you measure thoughtfully, with respect for people’s time and a focus on visibility rather than control, measurement becomes valuable. It helps you see what’s actually happening. It helps you identify problems early. It helps you understand patterns and improve over time. It helps you, as well as the people being measured.

The system should work for your engineers, not against them. If your measurement system makes engineers less productive, you’ve failed. If it makes them feel surveilled and distrusted, you’ve failed. If it becomes an end in itself rather than a means to building better products, you’ve failed.

Build a system that gives you visibility while letting engineers focus on what they do best: engineering. That’s the goal. Everything else is details.

Populism and Economic Prosperity

Hacker News
mainlymacro.blogspot.com
2025-10-20 20:56:19
Comments...
Original Article

Mainstream political parties normally claim that populist parties, if they ever got to power, would damage the economy. We have clear evidence that they are right, and right in a big way. A paper in the American Economic Review (one of the top economics journals) published nearly two years ago, looked at the macroeconomic consequences of populist regimes coming to power. The results can be summed up in the chart below (from this working paper version )

The black line is the average difference between GDP under a populist government compared to a counterfactual GDP without the populist government. If the populist government was in power for 15 years, GDP would be over 10% lower as a result of that government. I have used the analysis for right wing populists: for a similar chart for all populists see the paper, or this useful summary by Joel Suss in the FT.

There are obviously countless issues in any analysis of this type, like how a populist government is defined, how you do the counterfactual, how you ensure you are not getting reverse causality (i.e. bad economic times encourage the election of populists etc) and so on. For those interested in those issues the paper is very readable.

The UK is part of this data set, because it rightly labels the Johnson government as populist. We know that the Johnson government reduced UK GDP because of Brexit, by a total of 4% according to the OBR. But the UK’s response to the pandemic was also pretty bad in large part because of the Johnson government. So our own national experience is consistent with the chart above.

Besides GDP, the paper also finds that debt to GDP increases under a populist government, and there is weaker evidence that inflation also increases. The reasons why GDP falls under a populist government are only touched on in the paper, but they are pretty obvious. First, populist governments tend to restrict overseas trade: Brexit, and Trump’s tariffs, are clear examples. Putting barriers up that make overseas trade more difficult reduces GDP. The paper provides average effects for the impact of populism on trade, but it is more useful to look at the specific measures imposed by a particular populist government.

A second reason populist governments reduce GDP is that they make their countries less open to people from overseas, as well as goods from overseas. Populist governments tend to weaken an independent judicial system, and that among other things weakens the confidence of business to invest. Our current vintage of right wing populists appear very hostile to academia, and academia is where innovation starts, and where the expertise to implement innovation often comes from. Populists tend to devalue expertise, which allows them to make unrealistic promises on tax and spending, creating budgetary problems. I could go on with specifics, but more generally, societies where one part is ‘othered’ or declared the enemy within tend to work less well than those that are more unified. Societies where the governing elite is mainly concerned with making money at other people's expense work less well than societies where innovation is the key to becoming wealthy. [1]

One obvious question is why, if the macroeconomic impact of populist governments is so bad, they don’t get voted out of power quite quickly. Unfortunately they generally don’t. The paper estimates that on average populist governments tend to stay in power longer than other governments. There are two reasons for this depressing result. First, populists rig the democratic system to make their re-election more likely, either directly through gerrymandering for example or by restrictions on press freedom. Second, the social and economic reasons for the rise in populism tend to be persistent.

The topic of our time is why today we are seeing such a rise in right wing populism, populism that all too easily morphs into forms of fascism. It is of course important to understand why some groups of people are more receptive to populist messages than others. But in understanding why now is different from previous post-war decades, I don’t think that is where we should focus. In the UK, for example, the BNP has always had support, and racism used to be much more prevalent. What has changed over the last few decades is the attitude of the political and media elites .

When I started this blog, it was the early period of UK austerity and I was obsessed with how a UK government could disregard basic macroeconomics (don’t cut spending in a demand deficient recession where interest rates have hit their floor), but also why that policy was popular despite its disregard for what every first year economics student is taught. I talked a bit about the transmission mechanism between academic knowledge and policy, and how that mechanism can break down or be disrupted. Periods where expertise was ignored or contested could still then be considered undesirable departures from an accepted norm.

Of course since then we have had Brexit, which was another case where this breakdown occurred with devastating consequences for the UK economy. Unfortunately today political parties ignoring academic expertise has become routine. In the US we now have an administration that actively contests expertise not just in economics, but in climate science and medicine. I and many others have written extensively on how the media, owned by self-interested members of the plutocracy or run by their lackies, can not just ignore academic expertise but through propaganda counteract any influence it might have. [2]

For those who still doubt the critical role the media can play in all this just look at what has happened and is happening in the US. The biggest social media company bought up by someone who thinks Farage isn’t right wing enough for the UK, and who has changed that company so that it promotes the far right. US media that isn’t already owned by plutocrats who support Trump is being bought up by them . A media that once could be described as manufacturing consent is being turned into a propaganda machine for the Republicans and Trump. You can see the same processes starting in the UK. As Professor Emily Bell puts it in this fascinating discussion , “the patterns are almost exactly the same” in the UK and US.

The information and knowledge that populism severely damages the economy is there and is in the public domain, but the media increasingly acts to hide that from the public or distort that information so that much of the public never gets to understand it. Reality tends to win out in the end because it’s hard to disguise what is happening to people’s incomes, which is why Brexit is now much less popular than it was, but societies are increasingly losing their ability to avoid these pitfalls in advance. The reason this is happening is because a significant number of the ridiculously rich have decided their interests are served by promoting populism, and by investing in the means to promote populism.

[1] More speculatively, as right wing populists often appeal to a rose coloured view of the past as something to return to, it is not surprising that they enact policies that take society backwards. There also seems to be an aversion against current sacrifice for future gain.

[2] It is sometimes claimed that those voting for populists don’t care about negative economic effects. I think this may be true for a minority, but is not true for most. Those who voted for Brexit were significantly more optimistic about the economy because they believed the lies about ‘project fear’ and more money for the NHS.

Claude Code for web - a new asynchronous coding agent from Anthropic

Simon Willison
simonwillison.net
2025-10-20 20:43:15
Anthropic launched Claude Code for web this morning. It's an asynchronous coding agent - their answer to OpenAI's Codex Cloud and Google's Jules, and has a very similar shape. I had preview access over the weekend and I've already seen some very promising results from it. It's available online at cl...
Original Article

20th October 2025

Anthropic launched Claude Code for web this morning. It’s an asynchronous coding agent —their answer to OpenAI’s Codex Cloud and Google’s Jules , and has a very similar shape. I had preview access over the weekend and I’ve already seen some very promising results from it.

It’s available online at claude.ai/code and shows up as a tab in the Claude iPhone app as well:

Screenshot of Claude AI interface showing a conversation about updating a README file. The left sidebar shows "Claude" at the top, followed by navigation items: "Chats", "Projects", "Artifacts", and "Code" (highlighted). Below that is "Starred" section listing several items with trash icons: "LLM", "Python app", "Check my post", "Artifacts", "Summarize", and "Alt text writer". The center panel shows a conversation list with items like "In progress", "Run System C", "Idle", "Update Rese", "Run Matplotl", "Run Marketin", "WebAssembl", "Benchmark M", "Build URL Qu", and "Add Read-Or". The right panel displays the active conversation titled "Update Research Project README" showing a task to update a GitHub README file at https://github.com/simonw/research/blob/main/deepseek-ocr-nvidia-spark/README.md, followed by Claude's response and command outputs showing file listings with timestamps from Oct 20 17:53.

As far as I can tell it’s their latest Claude Code CLI app wrapped in a container (Anthropic are getting really good at containers these days) and configured to --dangerously-skip-permissions . It appears to behave exactly the same as the CLI tool, and includes a neat “teleport” feature which can copy both the chat transcript and the edited files down to your local Claude Code CLI tool if you want to take over locally.

It’s very straight-forward to use. You point Claude Code for web at a GitHub repository, select an environment (fully locked down, restricted to an allow-list of domains or configured to access domains of your choosing, including “*” for everything) and kick it off with a prompt.

While it’s running you can send it additional prompts which are queued up and executed after it completes its current step.

Once it’s done it opens a branch on your repo with its work and can optionally open a pull request.

Putting Claude Code for web to work

Claude Code for web’s PRs are indistinguishable from Claude Code CLI’s, so Anthropic told me it was OK to submit those against public repos even during the private preview. Here are some examples from this weekend:

That second example is the most interesting. I saw a tweet from Armin about his MiniJinja Rust template language adding support for Python 3.14 free threading. I hadn’t realized that project had Python bindings, so I decided it would be interesting to see a quick performance comparison between MiniJinja and Jinja2.

I ran Claude Code for web against a private repository with a completely open environment ( * in the allow-list) and prompted:

I’m interested in benchmarking the Python bindings for https://github.com/mitsuhiko/minijinja against the equivalente template using Python jinja2

Design and implement a benchmark for this. It should use the latest main checkout of minijinja and the latest stable release of jinja2. The benchmark should use the uv version of Python 3.14 and should test both the regular 3.14 and the 3.14t free threaded version—so four scenarios total

The benchmark should run against a reasonably complicated example of a template, using template inheritance and loops and such like In the PR include a shell script to run the entire benchmark, plus benchmark implantation, plus markdown file describing the benchmark and the results in detail, plus some illustrative charts created using matplotlib

I entered this into the Claude iPhone app on my mobile keyboard, hence the typos.

It churned away for a few minutes and gave me exactly what I asked for. Here’s one of the four charts it created:

Line chart titled "Rendering Time Across Iterations" showing rendering time in milliseconds (y-axis, ranging from approximately 1.0 to 2.5 ms) versus iteration number (x-axis, ranging from 0 to 200+). Four different lines represent different versions: minijinja (3.14t) shown as a solid blue line, jinja2 (3.14) as a solid orange line, minijinja (3.14) as a solid green line, and jinja2 (3.14t) as a dashed red line. The green line (minijinja 3.14) shows consistently higher rendering times with several prominent spikes reaching 2.5ms around iterations 25, 75, and 150. The other three lines show more stable, lower rendering times between 1.0-1.5ms with occasional fluctuations.

(I was surprised to see MiniJinja out-performed by Jinja2, but I guess Jinja2 has had a decade of clever performance optimizations and doesn’t need to deal with any extra overhead of calling out to Rust.)

Note that I would likely have got the exact same result running this prompt against Claude CLI on my laptop. The benefit of Claude Code for web is entirely in its convenience as a way of running these tasks in a hosted container managed by Anthropic, with a pleasant web and mobile UI layered over the top.

Anthropic are framing this as part of their sandboxing strategy

It’s interesting how Anthropic chose to announce this new feature: the product launch is buried half way down their new engineering blog post Beyond permission prompts: making Claude Code more secure and autonomous , which starts like this:

Claude Code’s new sandboxing features, a bash tool and Claude Code on the web, reduce permission prompts and increase user safety by enabling two boundaries: filesystem and network isolation.

I’m very excited to hear that Claude Code CLI is taking sandboxing more seriously. I’ve not yet dug into the details of that—it looks like it’s using seatbelt on macOS and Bubblewrap on Linux.

Anthropic released a new open source (Apache 2) library, anthropic-experimental/sandbox-runtime , with their implementation of this so far.

Filesystem sandboxing is relatively easy. The harder problem is network isolation, which they describe like this:

Network isolation , by only allowing internet access through a unix domain socket connected to a proxy server running outside the sandbox. This proxy server enforces restrictions on the domains that a process can connect to, and handles user confirmation for newly requested domains. And if you’d like further-increased security, we also support customizing this proxy to enforce arbitrary rules on outgoing traffic.

This is crucial to protecting against both prompt injection and lethal trifecta attacks. The best way to prevent lethal trifecta attacks is to cut off one of the three legs, and network isolation is how you remove the data exfiltration leg that allows successful attackers to steal your data.

If you run Claude Code for web in “No network access” mode you have nothing to worry about.

I’m a little bit nervous about their “Trusted network access” environment. It’s intended to only allow access to domains relating to dependency installation, but the default domain list has dozens of entries which makes me nervous about unintended exfiltration vectors sneaking through.

You can also configure a custom environment with your own allow-list. I have one called “Everything” which allow-lists “*”, because for projects like my MiniJinja/Jinja2 comparison above there are no secrets or source code involved that need protecting.

I see Anthropic’s focus on sandboxes as an acknowledgment that coding agents run in YOLO mode ( --dangerously-skip-permissions and the like) are enormously more valuable and productive than agents where you have to approve their every step.

The challenge is making it convenient and easy to run them safely. This kind of sandboxing kind is the only approach to safety that feels credible to me.

iOS 26.1 lets users control Liquid Glass transparency

Hacker News
www.macrumors.com
2025-10-20 20:39:46
Comments...
Original Article

With the fourth betas of iOS 26.1, iPadOS 26.1, and macOS 26.1, Apple has introduced a new setting that's designed to allow users to customize the look of Liquid Glass.

ios 26 1 liquid glass opaque
The toggle lets users select from a clear look for Liquid Glass, or a tinted look. Clear is the current Liquid Glass design, which is more transparent and shows the background underneath buttons, bars, and menus, while tinted increases the opacity of Liquid Glass and adds more contrast.

The new setting can be found on iOS and iPadOS by going to Settings > Display and Brightness, or System Settings > Appearance on the Mac.

Apple says that the new toggle was added because during the beta testing period over the summer, user feedback suggested that some people would prefer to have a more opaque option for Liquid Glass. The added setting provides additional customization in iOS 26.1, iPadOS 26.1, and macOS Tahoe 26.1.

Increasing opacity and adding contrast applies to Liquid Glass throughout the operating system, including in apps and Lock Screen notifications.

There are multiple other new features in iOS 26.1, including a new slide to stop feature for alarms and timers, new Apple Intelligence languages, a redesigned Apple TV app icon, changes to the Settings app, and more, with a full list of features available in our iOS 26.1 feature guide .

Popular Stories

Apple Said to Cut iPhone Air Production Amid Underwhelming Sales

Apple plans to cut production of the iPhone Air amid underwhelming sales performance, Japan's Mizuho Securities believes (via The Elec). The Japanese investment banking and securities firm claims that the iPhone 17 Pro and iPhone 17 Pro Max are seeing higher sales than their predecessors during the same period last year, while the standard iPhone 17 is a major success, performing...

iOS 26.1 to iOS 26.4 Will Add These New Features to Your iPhone

Saturday October 18, 2025 11:00 am PDT by

iOS 26 was released last month, but the software train never stops, and iOS 26.1 beta testing is already underway. So far, iOS 26.1 makes both Apple Intelligence and Live Translation on compatible AirPods available in additional languages, and it includes some other minor changes across the Apple Music, Calendar, Photos, Clock, and Safari apps. More features and changes will follow in future ...

iOS 26.0.2 Update for iPhones Coming Soon

Apple's software engineers continue to internally test iOS 26.0.2, according to MacRumors logs, which have been a reliable indicator of upcoming iOS versions. iOS 26.0.2 will be a minor update that addresses bugs and/or security vulnerabilities, but we do not know any specific details yet. The update will likely be released by the end of next week. Last month, Apple released iOS 26.0.1,...

Apple's Next Rumored Products: New HomePod Mini, Apple TV, and More

Thursday October 16, 2025 9:13 am PDT by

Apple on Wednesday updated the 14-inch MacBook Pro, iPad Pro, and Vision Pro with its next-generation M5 chip, but previous rumors have indicated that the company still plans to announce at least a few additional products before the end of the year. The following Apple products have at one point been rumored to be updated in 2025, although it is unclear if the timeframe for any of them has...

Some Apple Employees Have 'Concerns' About iOS 26.4's Revamped Siri

iOS 26.4 is expected to introduce a revamped version of Siri powered by Apple Intelligence, but not everyone is satisfied with how well it works. In his Power On newsletter today, Bloomberg's Mark Gurman said some of Apple's software engineers have "concerns" about the overhauled Siri's performance. However, he did not provide any specific details about the shortcomings. iOS 26.4 will...

New iPad Pro Has Six Key Upgrades Beyond M5 Chip

Saturday October 18, 2025 10:57 am PDT by

While the new iPad Pro's headline feature is the M5 chip, the device has some other changes, including N1 and C1X chips, faster storage speeds, and more. With the M5 chip, the new iPad Pro has up to a 20% faster CPU and up to a 40% faster GPU compared to the previous model with the M4 chip, according to Geekbench 6 results. Keep in mind that 256GB and 512GB configurations have a 9-core CPU,...

New 14-Inch MacBook Pro Has Two Key Upgrades Beyond the M5 Chip

Thursday October 16, 2025 8:31 am PDT by

Apple on Wednesday updated the 14-inch MacBook Pro base model with an M5 chip, and there are two key storage-related upgrades beyond that chip bump. First, Apple says the new 14-inch MacBook Pro offers up to 2× faster SSD performance than the equivalent previous-generation model, so read and write speeds should get a significant boost. Apple says it is using "the latest storage technology," ...

M5 MacBook Air Coming Spring 2026 With M5 Mac Studio and Mac Mini in Development

Thursday October 16, 2025 3:57 pm PDT by

Apple plans to launch MacBook Air models equipped with the new M5 chip in spring 2026, according to Bloomberg's Mark Gurman. Apple is also working on M5 Pro and M5 Max MacBook Pro models that will come early in the year. Neither the MacBook Pro models nor the MacBook Air models are expected to get design changes, with Apple focusing on simple chip upgrades. In the case of the MacBook Pro, a m...

M5 Chip Achieves Impressive Feat in 14-Inch MacBook Pro Speed Test

The first alleged benchmark result for the M5 chip in the new 14-inch MacBook Pro has surfaced, allowing for some performance comparisons. Based on a single unconfirmed result uploaded to the Geekbench 6 database today, the M5 chip has pulled off an impressive feat. Specifically, the chip achieved a score of 4,263 for single-core CPU performance, which is the highest single-core score that...

J.P. Morgan's OpenAI loan is strange

Hacker News
marketunpack.com
2025-10-20 20:38:52
Comments...
Original Article

In October, OpenAI secured a 4 billion dollar revolving credit facility from J.P. Morgan and several other banks . I was surprised when I heard this because OpenAI is a young company with no earnings. Shouldn't all their capital come from investors? Let's run some numbers.

From first principles

Let's do an Expected Value (EV) calculation, first from the perspective of an investor and then from the perspective of a lender. We'll pick some arbitrary parameters first, then refine.

Putting our investor hat on, the possible returns for investing $1,000 into OpenAI look like this:

  • Cost: $1,000
  • Case 1 (90%): OpenAI goes bankrupt. Return: $0
  • Case 2 (9%): OpenAI becomes a big successful company and goes 10x. Return: $10,000
  • Case 3 (1%): OpenAI becomes the big new thing and goes 100x. Return: $100,000

Our expected value is:

\[\begin{align} EV &= -1000 + 0.9 \times 0 + 0.09 \times 10000 + 0.01 \times 100000\\ EV &= -1000 + 0 + 900 + 1000\\ EV &= 900 \end{align}\]

The EV is positive, so this is a good investment. Obviously, there's a 90% chance of it going to zero, so if this were our only investment, it would be an insanely risky one. But provided we can do many investments like this and provided their failure cases aren't correlated, this would be a profitable strategy.

What happens if we instead put our lender hat on? Using the same probabilities as above, the possible returns for lending $1,000 to OpenAI at 5% interest look like this:

  • Cost: $1,000
  • Case 1 (90%): OpenAI goes bankrupt. Return: $0
  • Case 2 (9%): OpenAI becomes a big successful company and goes 10x. Return: $1,000 + 5% interest = $1,050
  • Case 3 (1%): OpenAI becomes the big new thing and goes 100x. Return: $1,000 + 5% interest = $1,050

Lenders don't benefit directly from the success of the company. Whether it barely scrapes by but manages to repay the loan or becomes the greatest company ever and easily repays the loan, it's all the same to a lender. So, we can merge cases 2 and 3 into:

  • Case 2+3 (10%): OpenAI doesn't go bankrupt. Return: $1,000 + 5% interest = $1,050

This makes our EV in the lending case:

\[\begin{align} EV &= -1000 + 0.9 \times 0 + 0.1 \times 1050\\ EV &= -1000 + 0 + 105\\ EV &= -895 \end{align}\]

The EV is negative, so we'd end up losing most of our money on average. Lending on these terms doesn't make sense.

There are two numbers we made up in the above calculation: the probability of bankruptcy and the interest rate. Let's leave the interest rate fixed at 5% and see what the probability \(p\) would have to be for us to break even.

\[\begin{align} EV &= -1000 + p \times 0 + (1 - p) \times 1050\\ EV &= -1000 + 1050 - p \times 1050 \\ EV &= 50 - p \times 1050 \\[0.5cm] & \text{Set EV to 0} \\[0.5cm] 0 &= 50 - p \times 1050 \\ p &= \frac{50}{1050} \\ p &= 0.0476 \end{align}\]

So, we'd break even if the probability of OpenAI going bankrupt was only about 5%. In practice, we'd want it to be lower than that so that we made a profit and so that we had a margin of safety in case our assumptions were wrong.

This 5% failure rate seems very optimistic to me, but this scenario is basically the one the consortium of banks got into. Concrete details on the deal are sparse, but this CIO Leaders article claims the interest rate was "SOFR + 100 basis points". The overnight SOFR rate is about 4.1% in October, so this puts OpenAI's interest at about 5%.

From market data

The problem with the above expected value calculation is that it's very idealized. The shape of it is correct, but the real world is too messy to be accurately represented by just a couple of parameters. I think it would be very difficult to build a model with enough predictive accuracy to be useful and I suspect there just isn't enough publicly available data to plug into it to make it work.

Luckily for us, banks exist! We know the banks have the better model and the non-public data and we know they came up with about 5% interest. So, let's work back from that and see what we can learn.

We're talking about a loan here and that's very similar to issuing bonds. So, we should be able to look at the bond market and find companies in similar financial health (from the perspective of a creditor). One problem is that we only know the overnight rate for OpenAI of about 5%, but bonds on the market will have longer maturities. We need to calculate what what yield a longer maturity loan would require and we can do that by looking at US treasuries.

According to Bloomberg , the three month treasuries have a yield of 3.94%. One year ones have a yield of 3.58%.

Figure 1. Treasury Yields for US government bonds. The table shows the following yields: 3 months is 3.94%, 6 month is 3.81%, 12 month is 3.58%, 2 year is 3.50%, 5 year is 3.62%, 10 year is 4.03%, 30 year is 4.62%. This describes a smile that initially goes down, goes back up to starting level at 10 years, then continues upwards.

One way of thinking about corporate bonds is that they're basically treasury bonds plus some premium to account for the risk of default. This default spread seems to be about \(5\% - 3.94\% \approx 1\%\) in OpenAI's case. By this logic, OpenAI's one year debt would have a yield of about 4.6%.

Can we find some one year bonds with a yield of 4.6%?

Figure 2. A sample of corporate USD bonds expiring in one year or less, sorted by their mid-yield to maturity. (Source: Saxo Bank)

Some bonds in the vicinity of what we're looking for are:

  • 4.99%: HCA Inc. (US healthcare provider with credit rating BBB),
  • 4.73%: Ziraat Katilim (Turkish bank with credit rating B+), and
  • 4.24%: Citigroup (US bank with credit rating A).

In fact, scanning the sample above, it's mostly banks with BBB and A ratings. So, the consortium of big banks seems to have lent money to OpenAI at the kind of rates they themselves are borrowing at.

Looking at just a few bonds is interesting, but anecdotal. It would be better if we had some statistics across the whole bond market. Helpfully, Prof Damodaran goes through the exercise of calculating just such statistics ( archive link ) every year, most recently this January.

Figure 3. To quote the author: "This is a table that relates the interest coverage ratio of a firm to a 'synthetic' rating and a default spread that goes with that rating. The link between interest coverage ratios and ratings was developed by looking at all rated companies in the United States. The default spreads are obtained from traded bonds. Adding that number to a riskfree rate should yield the pre-tax cost of borrowing for a firm."

Looking up OpenAI's default spread of 1% in that table, we see it's at the level we'd expect for an A- or BBB firm (same as with the anecdotal search earlier). This normally corresponds to an interest coverage ratio of 3.00-4.24. However, OpenAI's actual interest coverage ratio is negative because their earnings before interest are negative.

This doesn't make sense: any way we look at it, OpenAI is getting the kind of interest rates only much more established and profitable firms would be getting. So, my initial surprise at hearing about this is justified, but there must be an explanation because the big banks wouldn't make such an obvious mistake.

Making this make sense

OpenAI is not a profitable company. It's also a private company, so we don't get to see audited financials, but we still know some things. This Reuters article claims OpenAI is going to generate $3.6 billion in revenue this year, but the costs will lead to a loss of more than $5 billion.

There's also speculation that their revenue next year will jump to $11.6 billion. However, there's no speculation about what their earnings will be because they're currently selling their services below cost and there isn't really any story as to how they'll turn this profitable.

The banks are lenders in this scenario, so they don't really care about how many users OpenAI gets or how huge their revenue becomes. As lenders, all they care about is getting paid back and it really doesn't seem like OpenAI will have the earnings to do that. But maybe earnings aren't what matters here.

If OpenAI can't pay its debts, it goes bankrupt and the creditors seize the company. Importantly, they seize it from the equity holders. Who are these equity holders? According to this Digital Information World article , the owners are Microsoft (28%), OpenAI non-profit and employees (52%), and other investors (20%).

So, the hypothetical is OpenAI runs out of money. They have revenue, but since their costs are higher, they don't actually have anything left over. They can't make interest payments on their debt, so they go bankrupt, and the banks seize the company from Microsoft. I don't think Microsoft will allow this to happen. Microsoft's earnings for last year were $88 billion , so I think Microsoft will just pay off OpenAI's $4 billion debt in this scenario. And I think the banks know all this.

So, the banks loaning money to OpenAI at an A- interest rate doesn't make sense, but effectively loaning the same to Microsoft with its AAA rating does, and that's what's actually happening here.

When a stadium adds AI to everything, it's worse experience for everyone

Hacker News
a.wholelottanothing.org
2025-10-20 20:38:02
Comments...
Original Article

I just got back from a 24hr trip to Los Angeles to catch my favorite Portland Thorns team, watching them clinch their playoff spot in a match at BMO stadium in downtown Los Angeles.

In May of 2024, I did the same trip to catch a match on Mother's Day, but I accidentally chose bad seats in the sun and it was hot and uncomfortable. Ultimately, it partially inspired my wife and I's book reviewing every NWSL soccer stadium so other fans wouldn't suffer the same fate when flying across the country to catch their favorite team.

Me and my pal Greg yesterday

This year, I got better seats in the shade and enjoyed the game. But overall? The experience of being in the stadium was worse a year later. After thinking about it on the flight home, I think the reason was the stadium's rush to automation and AI in several places.

Spoiler alert: deploying camera/AI recognition for everything isn't great

Every concession stand, including the ones that didn't even serve hot food, used the apparatus in the photo above to control all checkouts. I assume these are expensive units, because most places that used to have several checkout lanes only had one of them, requiring everyone to checkout through a single location.

Here's how they worked in the stadium yesterday: You place all your items on the white shelf with some space between them. Although they were clearly designed to be a self-checkout experience, the stadium had a staff member rearrange your items, then for about 30 seconds the kiosk would be thinking. After, it would pop up all items on the menu, and the staff member would have to tap to confirm what each item was. Then another 30 seconds to calculate and move the purchase to a point of sale/tap on the side, then you'd pay.

Overall, this added at least one, if not two full minutes to every transaction that didn't normally have those delays. Lines were unbearably long, and it was a hot day in LA yesterday, at 87ºF/30ºC. I bought food and drinks several times over the the course of the day and had to endure the process multiple times.

When you add object recognition, you're incentivized to reduce choices

Here's an unintended consequence of moving all your concession stand checkouts to computer vision: it's easier if you have less things on offer.

Case in point: Let's talk about my favorite concession stand at BMO last year, a place that served rotisserie chicken with waffle fries and chicken sandwiches. Here's our meal from 2024, it was well-seasoned, came with great sauces, and was one of the best meals I had at a stadium in my entire nationwide tour, which is why I remembered it.

I returned to the same concession stand yesterday and here's their new menu:

When your checkout stand relies on computer vision, it's probably confusing to have half a dozen different menu items that fans can enjoy. But if you could condense it to just chicken tenders, fries, a hot dog, and boxes of candy, your computer vision-based checkout system will probably work faster since it has to do less work with the obvious shapes of each of those items.

Looking through my photos from my 2024 visit, I saw a variety of food options including smashburgers and a Korean BBQ rice bowl I also tried, pictured above.

Then I realized varied foods would be difficult for computer vision to decipher, so why not get rid of most options? Walking around the stadium yesterday, the menus were basically all hot dogs, pizza, nachos, and chicken tenders.

Even quick service options sucked

As I said, it was a hot day, I was constantly parched, and I ended up drinking four bottles of water over the course of three hours. Each time, I had to go through the automated checkout gauntlet, and each time it required a long wait in a line, while I missed bits of the match.

Late in the game, I wanted to get water quickly and they had these "vending kiosks" that were fully automated. You'd tap your phone on the locked door, it would unlock, you'd grab items, then close the door. Next, you had to stand there for about 2 minutes while it said "calculating checkout" before showing you a receipt on the screen.

What was supposed to be fast was very slow. The person in front of me bought two items and saw she got charged for three. Since there were no paper receipts, she took a photo of the machine before going to the guest services to complain. I missed ten minutes of the game getting water.

This was a quick service "market" style place and last year, you'd just grab stuff off a shelf, and checkout quickly from staff at multiple registers. This year, it had a long line snaking all over because of the slow AI/camera checkout kiosks.

It was a busy game, being the last home match for the fans and I would guess there were around 17,000-18,000 people in attendance. When it's nearly 90ºF/30ºC, heat exhaustion becomes a problem for crowds. When it takes people ten minutes to buy a bottle of water (I didn't see automated water fillers at the restrooms), the embrace of slow AI/Camera-based checkout systems starts to become a health and safety issue for the crowd.

But Mrs. Lincoln—besides the obvious—how was the play?

A year later visiting the same stadium, I got worse food, slower service, and a worse overall experience. On the bright side, the billionaire stadium owners probably got to reduce their staff in the process while maybe increasing profits.

The company behind the kiosks claims they are 400% faster than human checkers and result in a 25% increase in profits . After experiencing it in person yesterday, I think those numbers are bullshit. Human checkers are clearly faster and smoother, and I bet they sold more food and drinks when people could get them quickly.

And the portions? They were so small!

Subscribe to get new posts in your inbox

I’m a digital nomad who works on the road. These seven travel gadgets go with me everywhere

Guardian
www.theguardian.com
2025-10-20 20:15:21
From ultra-portable laptops to organizers for all your cables, the right gear makes all the differenceSign up for the Filter US newsletter, your weekly guide to buying fewer, better thingsTwo years ago, I traded the routine of a fixed desk for a rotating cast of airport gates, train stations, and ca...
Original Article

T wo years ago, I traded the routine of a fixed desk for a rotating cast of airport gates, train stations, and cafe tables. As a freelance journalist constantly on the move, I’ve learned that the right gear doesn’t just make travel easier, it’s the only way to make this lifestyle possible. Fortunately, as a tech reviewer, I’m able to test just about every travel gadget you can imagine.

From beach towns in Asia to work trips in the US and Europe, these products are the reason I can pack light, stay productive, and keep moving without losing my sanity. Whether you’re also living and working out of a backpack, or just want to stay connected on your next vacation, there’s something for you here.

Item trackers

If you travel nonstop, it’s only a matter of time before you misplace your backpack, wallet, or phone. An item tracker can help you quickly relocate it in that heart-racing moment when you realize it’s gone.

Apple users have a go-to option with the AirTag, but Android users may prefer Chipolo, which works on both platforms. I use the Chipolo Pop for its extra capabilities: it doubles as a remote camera shutter button, you can double press it to find your smartphone, and more.

Apple Airtag

Apple Airtag on white background
Photograph: Courtesy of Apple
$24.99 at Amazon
$29.00 at Apple

Chipolo Pop

Chipolo Pop on a white background
Photograph: Courtesy of Chipolo
$29.00 at Chipolo
$29.00 at Amazon

Recently, I also started using the new Chipolo Loop because it offers USB-C charging so I don’t need to constantly swap batteries. I love my wallet’s slim Nomad Tracking Card for the same reason. It can be charged wirelessly, though it only supports iOS.

Chipolo Loop

Chipolo Loop on a white background
Photograph: Courtesy of Chipolo
$39.00 at Chipolo
$39.00 at Amazon

Nomad Tracking Card

Nomad Tracking Card on a white background
Photograph: Courtesy of Nomad Goods
$29.00 at Nomad
$29.00 at Amazon

Peak Design packing cubes

Peak Design Packing Cubes being packed in a bag
Photograph: Courtesy of Peak Design

I am not the most organized person, and was never into organizers or packing cubes until I used the ones from Peak Design. After I fell in love with the small-size packing cube, I eagerly bought a larger size. They help me stay organized on the go with smart design choices.

For instance, you get dual compartments for separating clean and dirty clothes, and a compression zipper for making additional space when needed. Peak Design claims its packing cubes can last decades. I’ve used them for two years, and they’re going strong.

$23.96 at Amazon
$29.95 at Peak Design

Moment Tech Organizer

Moment Everything Tech Organizer Opened
Photograph: Prakhar Khanna/The Guardian

I have way too many tiny gadgets, but the Moment Everything Tech Organizer helps me keep them all straight. From wire compartments to pockets for my GoPro and Insta360 batteries, it has designated space for all the essentials. The low-profile design doesn’t add extra bulk to my backpack, and the ripstop fabric with two-layer waterproofing helps against liquid damage, too.

$60.00 at Moment
$40.00 at Amazon

Google Pixel Buds Pro 2

Google Pixel Buds Pro 2 earbud
Photograph: Prakhar Khanna/The Guardian

The Google Pixel Buds Pro 2 fit me extremely well, and offer excellent sound alongside surprisingly effective active noise cancellation (ANC). I love them because you can control playback, ANC, volume, and summon a voice assistant, all without a protruding stem design. They’ve helped me catch sleep on multiple long-haul flights. I’m testing the new Apple AirPods Pro 3 as we speak, which may rival these, but those are $249, and these are now marked down to $189, making them a great value.

Google Pixel Buds Pro 2

skip past newsletter promotion
Google Pixel Buds Pro 2 on a white background
Photograph: Courtesy of Google
$189.00 at Amazon
$229.00 at Google

Apple AirPods Pro 3

Apple Airpods 3 on a white background
Photograph: Courtesy of Apple
$249.00 at Apple
$249.00 at Amazon

Insta360 X5 360-degree action camera

Insta360 X5 360-degree action camera
Photograph: Prakhar Khanna/The Guardian

I carry a pair of Ray-Ban Meta glasses for shooting first-person video footage, but they aren’t made to be worn for adventures. And that’s where the Insta360 X5 comes in. It can record 360-degree footage in 8K resolution with unique perspectives, which isn’t possible with a phone or a pair of sunglasses. The latest generation has improved low-light footage, intuitive in-app editing, and a replaceable lens design, so you can swap just the lens if you break it. It is expensive, but worth it if you want to record your travels from all-new perspectives.

Insta360 X5

Insta360 X5 on a white background
Photograph: Courtesy of Insta360
$549.99 at Insta360
$549.99 at Amazon

Ray-Ban Meta glasses

Ray Ban Meta Glasses on a white background
Photograph: Courtesy of Meta
$299.00 at Meta

Casio G-Shock 2100 Series

Casio G Shock watch photographed on a man’s wrist
Photograph: Prakhar Khanna/The Guardian

I have a white Casio G-Shock 2100 Series GA-2100-7A7, which goes with me everywhere. From surfing lessons at the Mantra Surf Club in India to kayaking in Sri Lanka or work-focused travels to the US and Europe, this G-Shock has been on my wrist for every type of outing. This G-Shock is sporty, comfortable to wear, easy to switch between time zones with a few button presses, and looks premium without costing a lot.

$110.00 at Casio
$110.00 at Reed’s Jewelers

Apple MacBook Air M4

Apple M2 MacBook Air Laptop on A chair
Photograph: Prakhar Khanna/The Guardian

The Apple M2 MacBook Air has been the most reliable laptop in terms of battery life and usability for me in the past few years. I’m recommending the M4 model because it packs more power and better value for money in 2025. As a digital nomad, I want my laptop to be lightweight and ready whenever I need it, and the Apple MacBook Air conveniently checks both boxes. For Windows fans, I love and recommend the Asus Zenbook A14, which is even lighter and offers longer battery life than the Apple laptop.

Apple MacBook Air M4

Asus Zenbook A14 on a white background
Photograph: Courtesy of Asus
$899.00 at Apple
$899.00 at Apple

Asus Zenbook A14

Apple MacBook Air M4 on a white background
Photograph: Courtesy of Apple
$879.00 at Amazon
$999.99 at Best Buy

California Enacts Nation’s First Law To Define and Ban Ultraprocessed Foods

Portside
portside.org
2025-10-20 20:11:27
California Enacts Nation’s First Law To Define and Ban Ultraprocessed Foods jeannette Mon, 10/20/2025 - 15:11 ...
Original Article

California made history Wednesday by enacting the first law in the United States to define and ultimately ban unhealthy ultraprocessed foods, or UPFs, from what’s projected to be over 1 billion meals served to California schoolchildren in the 2025-26 school year.

By signing the “Real Food, Healthy Kids Act” into law, California Gov. Gavin Newsom has taken control of a growing movement to reform the nation’s food supply. The state legislature passed the bill in mid-September.

On average, children in the United States get nearly two-thirds of their calories from ultraprocessed foods packed full of additives and high-calorie sugars, salt and fat, according to a recent CDC report .

Not only does the California legislation define ultraprocessed food — a task which most of the world has yet to accomplish — it requires public health officials and scientists to decide which UPFs are most harmful to human health. Any “ultraprocessed food of concern” would then be phased out of the school food supply.

California’s decisive action is a sharp contrast to the “Make American Healthy Again,” or MAHA, movement spearheaded by US Health and Human Services director Robert F. Kennedy, Jr.

“While folks in DC are commissioning reports and debating hypotheticals, California is leading with decisive action,” Jesse Gabriel, the Democratic California Assemblymember who introduced the bill, said in a press conference.

“Or to put it more bluntly, here in California, we are actually doing the work to protect our kids health, and we’ve been doing it since well before anyone had ever heard of the MAHA movement,” Gabriel added.

The MAHA Commission promised decisive action on ultraprocessed food by August of this year. However, the final report , released in September, only promised the government would “continue efforts” to define ultraprocessed foods.

“Unfortunately, the final MAHA report is all promises and has no teeth,” Barry Popkin, the W.R. Kenan, Jr. Distinguished Professor at the University of North Carolina at Chapel Hill’s Gillings School of Global Public Health told CNN at the time .

“In my opinion, it shows the food, agricultural, and pharmaceutical industries got to the White House and won the day,” Popkin said.

A strong lobbying effort

That scenario could have happened in California as well, said Bernadette Del Chiaro, senior vice president for California at the Environmental Working Group, or EWG, a health advocacy organization based in Washington, DC, that cosponsored the bill.

“There was very serious opposition. Industry always kicks and screams and fights like bloody hell to keep these bills from becoming law,” Del Chiaro told CNN. “Just the number of committees we had to go through to get the bill passed illustrates how much lobbying was going on.”

The final vote, however, told it all: Out of 120 California assemblymembers and senators from both the Republican and Democratic parties, only one voted no — a Republican from San Diego.

“We had broad bipartisan support because ultimately, this is coming from the grassroots up — from politicians hearing about these issues in their community and wanting to do something about it,” Del Chiaro said. “We’re in this moment where Americans are waking up to the fact that we have chemicals in everything — our food, our water and our air — and we need to do something about it.”

CNN reached out to the Consumer Brands Association, which represents major food manufacturers, but did not hear back before publication.

The best and worst ultraprocessed foods

The “Real Food, Healthy Kids Act” specifically defines an ultraprocessed food as one that may contain such ingredients as nonnutritive sweeteners; high amounts of saturated fat, sodium and added sugar; additives such as emulsifiers, stabilizers and thickeners; flavor enhancers; a host of food dyes; and more.

The California law provides guidance on how much of an ingredient like sugar or salt can be in a food in order to be served to elementary and middle-school children, with a slightly different standard for high schoolers.

But not all ultraprocessed foods will be phased out of the state’s school supply, Gabriel told CNN.

“We can’t eliminate all ultraprocessed foods — we need them, we need the shelf stability, the safety, the convenience,” he said. “But the foods with the most harmful additives, foods that are linked to food addiction or cancer or diabetes or fatty liver disease, that’s the group of ultraprocessed foods we will phase out of our schools.”

An ultraprocessed food can also be banned for containing additives that have been banned, restricted or required to carry a warning by other local, state, federal or international jurisdictions, according to the law. (The European Union has taken action on various food dyes and other additives .)

Another red flag: Has the ultraprocessed food been modified to include high levels of sugar, salt or fat? (That’s a key way manufacturers design ultraprocessed foods to meet the “ bliss point ” human taste buds yearn for.)

It won’t happen overnight

Eliminating ultraprocessed foods from the California school food supply is not going to be a quick process. In earlier iterations of the bill, regulators were required to take some actions in 2026. In the final law, the first regulation — requiring food vendors to report all ultraprocessed foods they will supply — is due on or before February 1, 2028. All ultraprocessed food of concern must be identified and out of schools by July 1, 2035.

There’s also a concern that California’s efforts could be derailed by a federal government controlled by Republicans.

“We are constantly concerned that Congress will fight to preempt our authority with some kind of watered down, weak federal effort,” Del Chiaro said. “There’s certainly members of Congress that are threatening that, right?

“But we had broad bipartisan support, and I would certainly hope that, politicians at the federal level see that we’re all on the same team. We’re on Team Public Health and Team Kids and I hope they will continue to let states be the laboratories of democracy that we are.”

Sign up for CNN’s Eat, But Better: Mediterranean Style . Our eight-part guide shows you a delicious expert-backed eating lifestyle that will boost your health for life.

Correction: A previous version of this story incorrectly conflated the number of meals served to California schoolchildren annually with the number of students.

Microsoft: October updates break USB input in Windows Recovery

Bleeping Computer
www.bleepingcomputer.com
2025-10-20 20:06:40
Microsoft has confirmed that this month's security updates disable USB mice and keyboards in the Windows Recovery Environment (WinRE), making it unusable. [...]...
Original Article

Windows

Microsoft has confirmed that this month's security updates disable USB mice and keyboards in the Windows Recovery Environment (WinRE), making it unusable.

WinRE is a minimal Windows-based environment that operates independently of the main operating system, allowing users to repair, restore, or troubleshoot the operating system even when Windows fails to start.

However, as Microsoft revealed in a Windows release health dashboard update on Friday, after installing the KB5066835 October 2025 security updates, users will no longer be able to use their USB-wired mouse and keyboard while in recovery mode.

The company added that the input devices will still work while using the Windows operating systems, although if the users need to use WinRE they might already experience issues which require troubleshooting or repairing the OS.

"After installing the Windows security update released on October 14, 2025 (KB5066835), USB devices, such as keyboards and mice, do not function in the Windows Recovery Environment (WinRE)," Microsoft said.

"This issue prevents navigation of any of the recovery options within WinRE. Note that the USB keyboard and mouse continue to work normally within the Windows operating system."

Windows Recovery Environment
Windows Recovery Environment (WinRE)

This known issue impacts both client (Windows 11 24H2 and Windows 11 25H2) and server (Windows Server 2025) platforms.

Until Redmond resolves this bug, users should be able to switch to Bluetooth wireless mice and keyboards or input devices that use old PS/2 connectors which shouldn't be affected.

Microsoft noted that the Windows team is working on a fix for this Windows Recovery issue, which will likely be available over the coming days.

Earlier this year, Microsoft fixed a known issue causing erroneous 0x80070643 failure errors when installing April WinRE updates. In August 2024, it also retired Windows security updates that triggered 0x80070643 errors when installing WinRE updates on Windows 10, Windows 11, and Windows Server systems.

On Friday, the company also fixed Active Directory issues on Windows Server 2025 systems after installing security updates released since September and shared guidance on how to resolve smart card authentication issues impacting all Windows 10, Windows 11, and Windows Server devices after installing this month's Windows updates.

One day earlier, Microsoft removed two compatibility holds blocking Windows 11 upgrades via Windows Update and addressed another known issue that broke HTTP/2 localhost (127.0.0.1) connections after installing recent security updates.

First Self-Propagating Worm Using Invisible Code Hits OpenVSX and VS Code

Hacker News
www.koi.ai
2025-10-20 20:05:42
Comments...
Original Article

A month after Shai Hulud became the first self-propagating worm in the npm ecosystem, we just discovered the world's first worm targeting VS Code extensions on OpenVSX marketplace.

But GlassWorm isn't just another supply chain attack. It's using stealth techniques we've never seen before in the wild - invisible Unicode characters that make malicious code literally disappear from code editors. Combine that with blockchain-based C2 infrastructure that can't be taken down, Google Calendar as a backup command server, and a full remote access trojan that turns every infected developer into a criminal proxy node.

This is one of the most sophisticated supply chain attacks we've ever analyzed. And it's spreading right now.

GlassWorm - puts millions at risk

What GlassWorm does to infected systems:

  • Harvests NPM, GitHub, and Git credentials for supply chain propagation
  • Targets 49 different cryptocurrency wallet extensions to drain funds
  • Deploys SOCKS proxy servers, turning developer machines into criminal infrastructure
  • Installs hidden VNC servers for complete remote access
  • Uses stolen credentials to compromise additional packages and extensions, spreading the worm further

The current state: Seven OpenVSX extensions compromised on October 17, 2025. Total downloads -35,800. Ten extensions still actively distributing malware as you read this. The attacker's C2 infrastructure is fully operational - payload servers are responding, and stolen credentials are being used to compromise additional packages.

Update (Oct 19, 2025): A new infected extension detected in Microsoft's VSCode marketplace - still active.

The attack went live yesterday. The infrastructure is active. The worm is spreading.

What Our Risk Engine Detected

Here's how this whole thing started. Our risk engine at Koi flagged an OpenVSX extension called CodeJoy when version 1.8.3 introduced some suspicious behavioral changes. When our researchers dug into it - like we do with any malware our risk engine flags - what we found was very disturbing.

CodeJoy risk report on Koidex

CodeJoy looked legitimate. A developer productivity tool with hundreds of downloads, regular updates, seemingly normal code. But our risk engine caught something that human code review would miss entirely: suspicious network connections and credential access patterns that had nothing to do with the extension's productivity features

So we opened up the source code to take a closer look.

And that's when we saw it. Or rather, didn't see it.

The Invisible Attack: Unicode Stealth Technique

Look at this screenshot of the CodeJoy extension's source code:

Invisible malicious code inCodeJoy's version 1.8.3

See that massive gap between lines 2 and 7? That's not empty space. That's malicious code. Encoded in unprintable Unicode characters that literally don't render in your code editor.

Let me say that again: the malware is invisible. Not obfuscated. Not hidden in a minified file. Actually invisible to the human eye.

The attacker used Unicode variation selectors - special characters that are part of the Unicode specification but don't produce any visual output. To a developer doing code review, it looks like blank lines or whitespace. To static analysis tools scanning for suspicious code, it looks like nothing at all. But to the JavaScript interpreter? It's executable code.

This is why we call it GlassWorm. Like glass, it's completely transparent. You can stare right at it and see nothing. The developer whose account got compromised probably looked at this file, saw what appeared to be their legitimate code, and had no idea they were about to distribute malware to hundreds of users.

Here's the thing - this technique completely breaks traditional code review. You can't spot what you can't see. GitHub's diff view? Shows nothing suspicious. Your IDE's syntax highlighting? All clear. Manual code inspection? Everything looks normal.

The invisible code technique isn't just clever - it's a fundamental break in our security model. We've built entire systems around the assumption that humans can review code. GlassWorm just proved that assumption wrong.

Stage 2: The Unkillable C2 - Solana Blockchain

So we decoded the invisible Unicode characters. What do we find inside? Another stage of sophistication that honestly made our jaws drop.

The malware uses the Solana blockchain as its command and control infrastructure.

Read that again. The attacker is using a public blockchain - immutable, decentralized, impossible to take down - as their C2 server.

Here's how it works:

Solana blockchain points to the next stage

The malware searches the Solana blockchain for transactions from the hardcoded wallet address. When it finds a transaction, it reads the memo field - a place where you can attach arbitrary text to blockchain transactions. Inside that memo? A JSON object with a base64-encoded link to download the next stage.

Link to the next stage in the memo of the transaction

Look at that screenshot. That's a real Solana transaction from October 15, 2025 - three days ago. The instruction data contains: {"link":"aHR0cDovLzIxNy42OS4zLjIxOC9xUUQlMkZKb2kzV0NXU2s4Z2dHSGlTdg=="}

That base64 string decodes to: http://217.69.3.218/qQD%2FJoi3WCWSk8ggGHiTdg%3D%3D

And just like that, the malware knows where to download its next payload.

Why this is absolutely brilliant (and terrifying):

  • Immutable : Once a transaction is on the blockchain, it can't be modified or deleted. Ever. No takedown requests. No domain seizures. It's there forever.
  • Anonymous : Crypto wallets are pseudonymous. Good luck tracing this back to a real person.
  • Censorship-resistant : There's no hosting provider to contact, no registrar to pressure, no infrastructure to shut down. The Solana blockchain just... exists.
  • Legitimate traffic : Connections to Solana RPC nodes look completely normal. Security tools won't flag it.
  • Dynamic and cheap : Want to update your payload? Just post a new transaction. Cost? 0.000005 SOL - less than a penny. The attacker can rotate infrastructure as often as they want for pocket change.

Even if you identify and block the payload URL ( 217.69.3.218 in this case), the attacker just posts a new transaction with a different URL, and all infected extensions automatically fetch the new location. You're playing whack-a-mole with an opponent who has infinite moles and infinite mallets.

This isn't some theoretical attack vector. This is a real-world, production-ready C2 infrastructure that's actively serving malware right now. And there's literally no way to take it down.

Stage 3: The Credential Harvest

The Solana transaction points to an IP address: 217.69.3.218 . We fetch the URL and get back a massive base64 payload. But it's encrypted. AES-256-CBC encryption with a key I don't have.

So where's the decryption key?

In the HTTP response headers.

The decryption key hides in the response headers

The attacker is dynamically generating encryption keys per request and passing them in custom HTTP headers. Smart - even if you intercept the encrypted payload, you need to make a fresh request to get the current keys.

We decrypted the payload and started analyzing what it does. This is where GlassWorm's true purpose becomes clear.

GlassWorm hunting for crypto wallets

The malware is hunting for credentials:

  • NPM authentication tokens - to publish malicious packages
  • GitHub tokens - to compromise repositories
  • OpenVSX credentials - to inject more extensions
  • Git credentials - to push malicious code
  • 49 different cryptocurrency wallet extensions - targeting MetaMask, Phantom, Coinbase Wallet, and dozens more

GlassWorm staling NPM and OpenVSX credentials

But wait, there's more. Buried in the code, we found something else: a Google Calendar link.

https://calendar.app.google/M2ZCvM8ULL56PD1d6

Strange title for a Google Calendar event right?

The malware reaches out to this Google Calendar event as a backup C2 mechanism. And guess what's in the event title? Another base64-encoded URL pointing to yet another encrypted payload.

The attacker created a Google Calendar event with the title: aHR0cDovLzIxNy42OS4zLjIxOC9nZXRfem9tYmlfcGF5bG9hZC9xUUQlMkZKb2kzV0NXU2s4Z2dHSGlUdg==

That decodes to: http://217.69.3.218/get_zombi_payload/qQD%2FJoi3WCWSk8ggGHiTdg%3D%3D

Notice the path: /get_zombi_payload/

Yeah. "Zombi" as in zombie botnet. The attacker is literally naming their endpoints after what they're turning victims into.

Why use Google Calendar as backup C2?

  • Free and legitimate (no one's blocking Google Calendar)
  • Can be updated anytime by editing the event
  • Completely bypasses security controls
  • Another unkillable infrastructure piece

So now we have a triple-layer C2 system:

  1. Solana blockchain (primary, immutable)
  2. Direct IP connection (217.69.3.218)
  3. Google Calendar (backup, legitimate service)

If one gets blocked, the others keep working. And all three are nearly impossible to take down.

Stage 4: ZOMBI - The Nightmare Reveal

We fetch the "zombi_payload" URL, capture the encryption keys from the headers, decrypt it, and start deobfuscating what turns out to be a massively obfuscated JavaScript payload.

And that's when we realized: this isn't just credential theft. This is a full-spectrum remote access trojan.

GlassWorm's final stage - the ZOMBI module - transforms every infected developer workstation into a node in a criminal infrastructure network. Let me break down what this thing can do, because it's honestly one of the most sophisticated pieces of malware we've analyzed.

Oh no! the GlassWorm is now a zombi!

SOCKS Proxy - Your Machine Becomes Criminal Infrastructure

The ZOMBI module can turn your computer into a SOCKS proxy server. Here's the code:

GlassWorm zombi - turns the workstation into socks server

Your developer workstation - the one sitting inside your corporate network, behind all your firewalls and security controls - just became a proxy node for criminal activity.

Why this is devastating:

  • Corporate network access : Your machine can reach internal systems that external attackers can't
  • Attack anonymization : Attackers route their traffic through your IP, not theirs
  • Firewall bypass : Internal machines can access resources external proxies can't reach
  • Free infrastructure : Why pay for proxy servers when victims provide them?

Every single infected developer becomes a node in a global proxy network. And you won't even know it's happening.

WebRTC P2P - Direct Peer-to-Peer Control

ZOMBI downloads and deploys WebRTC modules for peer-to-peer communication:

WebRTC enables direct peer-to-peer connections that bypass traditional firewalls through NAT traversal. The attacker can establish real-time, direct control channels to infected machines without going through any central server.

BitTorrent DHT - Decentralized Command Distribution

ZOMBI uses BitTorrent's Distributed Hash Table (DHT) network for command distribution:

Commands are distributed through the BitTorrent DHT network - the same decentralized system that makes torrent tracking impossible to shut down. There's no central C2 server to take offline. Commands propagate through a distributed network of millions of nodes.

Hidden VNC (HVNC) - Complete Invisible Remote Control

And here's the truly terrifying part - HVNC (Hidden Virtual Network Computing):

HVNC gives the attacker complete remote desktop access to your machine - but it's hidden. It runs in a virtual desktop that doesn't appear in Task Manager, doesn't show any windows on your screen, and operates completely invisibly.

The attacker can:

  • Use your browser with your logged-in sessions
  • Access your email, Slack, internal tools
  • Read your source code
  • Steal additional credentials
  • Pivot to other systems on your network
  • Do literally anything you could do - but you'll never see it happening

The Full Picture

ZOMBI isn't just malware. It's a complete remote access and network penetration toolkit:

  • SOCKS proxy for routing attacks through victim networks
  • WebRTC P2P for direct, firewall-bypassing control
  • BitTorrent DHT for unkillable command distribution
  • HVNC for invisible remote desktop access
  • Automatic restart on any failure (it won't go away)
  • Modular architecture supporting dynamic capability updates

For enterprises, this is a nightmare scenario. An infected developer workstation becomes:

  • An internal network access point
  • A persistent backdoor
  • A proxy for attacking other internal systems
  • An exfiltration channel for sensitive data
  • A command and control relay point

And it all started with an invisible Unicode character in a VS Code extension.

The Worm Spreads: Self-Propagation Through Stolen Credentials

Here's where GlassWorm earns the "Worm" part of its name.

Remember all those credentials it's stealing? NPM tokens, GitHub credentials, OpenVSX access? Those aren't just for data theft. They're for propagation.

The self-replication cycle:

  1. Initial infection - Compromised developer account pushes malicious code to legitimate extension
  2. Invisible payload - Unicode-hidden malware executes on victim machines
  3. Credential harvest - Steals NPM, GitHub, OpenVSX, Git credentials
  4. Automated spread - Uses stolen credentials to compromise MORE packages and extensions
  5. Exponential growth - Each new victim becomes an infection vector
  6. Repeat - The cycle continues automatically

This isn't a one-off supply chain attack. It's a worm designed to spread through the developer ecosystem like wildfire.

Just one month ago, the security community witnessed Shai Hulud - the first successful self-propagating worm in the npm ecosystem. That campaign compromised over 100 packages by stealing npm tokens and automatically publishing malicious versions.

GlassWorm brings this same technique to OpenVSX, but with terrifying evolutions:

  • Invisible code injection that bypasses all code review
  • Blockchain-based C2 that can't be taken down
  • Full RAT capabilities turning victims into criminal infrastructure
  • Multi-layered redundancy across three different C2 mechanisms

The pattern is clear. Attackers have figured out how to make supply chain malware self-sustaining. They're not just compromising individual packages anymore - they're building worms that can spread autonomously through the entire software development ecosystem.

With traditional supply chain attacks, you compromise one package and that's your blast radius. With worms like Shai Hulud and GlassWorm, each infection is a new launching point for dozens more. It's exponential growth. And we're just starting to see what that looks like.

Impact: 35,800 Victims, Active RIGHT NOW

Let's talk about the current state of this infection. Because this isn't some theoretical attack or historical incident. GlassWorm is active right now.

Attack Timeline:

  • October 17, 2025 : Seven OpenVSX extensions compromised (yesterday)
  • October 18, 2025 : We detected and began analysis (today)
  • October 19, 2025: More compromised extensions detected in OpenVSX and VSCode marketplaces
  • Current status : Five extensions still actively distributing malware

Total impact: 35,800 installations

Here's what makes this particularly urgent: VS Code extensions auto-update. When CodeJoy pushed version 1.8.3 with invisible malware, everyone with CodeJoy installed got automatically updated to the infected version. No user interaction. No warning. Just silent, automatic infection.

And since the malware is invisible, the original developers whose accounts were compromised probably had no idea. They might have even reviewed the "empty" lines in their code and seen nothing wrong.

What's happening right now to infected systems:

  1. Credential theft in progress - NPM tokens, GitHub credentials, Git credentials being harvested
  2. Cryptocurrency wallets being drained - 49 different wallet extensions targeted
  3. SOCKS proxies deploying - Turning developer workstations into criminal infrastructure
  4. HVNC installation - Hidden remote access being established
  5. Network reconnaissance - Infected machines mapping internal corporate networks
  6. Preparation for spread - Stolen credentials being validated for additional compromises

The C2 infrastructure is fully operational:

  • 217.69.3.218 - Responding and serving encrypted payloads
  • Solana blockchain - Transaction active, pointing to payload servers
  • Google Calendar event - Live and accessible
  • Exfiltration server (140.82.52.31) - Collecting stolen data

This is an active, ongoing compromise. Not a case study. Not a war story. This is happening right now, as you read this sentence.

If you have any of the infected extensions installed, you're compromised. Your credentials are likely stolen. Your crypto wallets may be drained. Your machine might already be serving as a SOCKS proxy for criminal activity. And you probably have no idea any of this is happening.

Two developers managed to push clean updates (vscode-theme-seti-folder and git-worktree-menu), suggesting they either regained access to their accounts or noticed something was wrong. But five extensions are still infected. Five developers who either don't know they're compromised or can't regain control of their accounts.

And remember: this is just what we've found so far. GlassWorm is designed to spread. Those stolen credentials are being used right now to compromise additional packages and extensions. The real victim count could be much higher.

Final Thoughts

This writeup was authored by the research team at Koi Security , with a healthy dose of paranoia and hope for a safer open-source ecosystem.

GlassWorm shows how easy it is for malicious extensions to slip past marketplace security and compromise sensitive data. With Koi, security teams gain visibility, risk scoring, and governance across binary & non-binary software before it ever hits production.

Book a demo to see how Koi closes the gap that legacy tools miss.

For too long, the use of untrusted third-party code, often running with the highest privileges has flown under the radar for both enterprises and attackers. That era is ending. The tide is shifting. Just last month we uncovered another campaign of 18 featured and verified extensions that turned malicious and affected millions of users.

We’ve built Koi to meet this moment; for practitioners and enterprises alike. Our platform helps discover, assess, and govern everything your teams pull from marketplaces like the Chrome Web Store, VSCode, Hugging Face, Homebrew, GitHub, and beyond.

Trusted by Fortune 50 organizations, BFSIs and some of the largest tech companies in the world, Koi automates the security processes needed to gain visibility, establish governance, and proactively reduce risk across this sprawling attack surface.

Because in a world where malware can be literally invisible, paranoia isn't a bug - it's a feature.

Stay safe out there.

IOCs

Compromised Extensions

OpenVSX Extensions (with malicious versions):

  • codejoy.codejoy-vscode-extension@1.8.3
  • codejoy.codejoy-vscode-extension@1.8.4
  • l-igh-t.vscode-theme-seti-folder@1.2.3
  • kleinesfilmroellchen.serenity-dsl-syntaxhighlight@0.3.2
  • JScearcy.rust-doc-viewer@4.2.1
  • SIRILMP.dark-theme-sm@3.11.4
  • CodeInKlingon.git-worktree-menu@1.0.9
  • CodeInKlingon.git-worktree-menu@1.0.91
  • ginfuru.better-nunjucks@0.3.2
  • ellacrity.recoil@0.7.4
  • grrrck.positron-plus-1-e@0.0.71
  • jeronimoekerdt.color-picker-universal@2.8.91
  • srcery-colors.srcery-colors@0.3.9
  • sissel.shopify-liquid@4.0.1
  • TretinV3.forts-api-extention@0.3.1

Microsoft VSCode Extensions:

  • cline-ai-main.cline-ai-agent@3.1.3

Infrastructure

Command & Control:

  • 217.69.3.218 (primary C2 server)
  • 140.82.52.31:80/wall (exfiltration endpoint)

Blockchain Infrastructure:

Solana Wallet: 28PKnu7RzizxBzFPoLp69HLXp9bJL3JFtT2s5QzHsEA2

Transaction: 49CDiVWZpuSW1b2HpzweMgePNg15dckgmqrrmpihYXJMYRsZvumVtFsDim1keESPCrKcW2CzYjN3nSQDGG14KKFM

Google Calendar C2:

https://calendar.app.google/M2ZCvM8ULL56PD1d6

Organizer: uhjdclolkdn@gmail.com

Payload URLs:

http://217.69.3.218/qQD%2FJoi3WCWSk8ggGHiTdg%3D%3D

http://217.69.3.218/get_arhive_npm/

http://217.69.3.218/get_zombi_payload/qQD%2FJoi3WCWSk8ggGHiTdg%3D%3D

Registry Indicators

Persistence Mechanisms:

HKCU\Software\Microsoft\Windows\CurrentVersion\Run

HKLM\Software\Microsoft\Windows\CurrentVersion\Run

Retail giant Muji halts online sales after ransomware attack on supplier

Bleeping Computer
www.bleepingcomputer.com
2025-10-20 19:45:33
Japanese retail company Muji has taken offline its store due to a logistics outage caused by a ransomware attack at its delivery partner, Askul. [...]...
Original Article

Retail giant Muji halts online sales after ransomware attack on supplier

Japanese retail company Muji has taken offline its store due to a logistics outage caused by a ransomware attack at its delivery partner, Askul.

On Sunday evening (Japan timezone), Muji said that the issue caused all retail services were affected, including browsing or making purchases on online stores, viewing order histories via the Muji app, and displaying some web content.

Although the company did not specify a timeline for restoring the systems, an update on Monday afternoon stated that only purchases from the online store and applying for a monthly flat-rate service continued to be impacted.

Muji also said that it was investigating which shipments were impacted to determine the orders placed before the attack and send email notification to customers.

A retailer of minimalist household goods, clothing, and furniture, Muji operates more than a thousand stores in Japan, China, Singapore, Europe, Australia, and North America.

The company has an annual revenue of roughly $4 billion, and it employs over 24,500 people worldwide.

Askul is a large business-to-business and business-to-consumer office supplies and logistics e-commerce company owned by Yahoo! Japan Corporation.

The company issued a statement yesterday, informing that it was targeted by ransomware that caused operational disruptions.

“Currently, a system failure has occurred on the Askul website due to a ransomware infection, and we have suspended orders and shipping operations,” reads the announcement (machine translated)

“We are currently investigating the scope of the impact, including the leakage of personal information and customer data, and will notify you as soon as we know.”

Product return applications, receipt mailing, catalog shipping, and collection services have been suspended, while Askul’s customer service desk is also unreachable right now, by phone or through the website.

Given that Askul only handles Muji’s Japan sales, the disruption only impacts that region, and Muji’s shops in other countries are available and operating normally.

At the time of writing, no ransomware gangs have announced Askul on their extortion portals.

This incident comes shortly after another ransomware attack on Japan’s largest beer producer, Asahi, that forced it to suspend production operations and delay scheduled product launches.

The attack was claimed by Qilin ransomware . The company confirmed in a statement that the hackers stole data from its systems.

Nix CI Benchmarks

Lobsters
garnix-io.github.io
2025-10-20 19:45:32
Comments...
Original Article

Benchmarks for Nix CI build times across different CI platforms

We picked some open source projects with lots of stars that have Nix already set up, forked them, and ran various different Nix CIs on the same commits, measuring the time CI took. Because Nix is Nix, these CIs did the same work, broadly speaking, making the comparisons more significant than they would be across wholly different stacks.

We ran the following setups:

  • (1) GitHub Actions, in serial and parallel, without any caching.
  • (2) GitHub Actions in parallel with magic-nix-cache for caching.
  • (3) GitHub Actions in parallel with Cachix for caching.
  • (4) GitHub Actions with nixbuild.net for building (using the CI workflow, with remote store building).
  • (5) garnix (without incremental builds).

These setups span the range of using GitHub Actions for everything (1-2), to including external caches (3), to including external builds (4), to having external evaluators too, and not using GitHub runners for anything (5).

We picked more popular repos that already had largely working Nix builds. For a start, we also focused on:

  • x86_64-linux builds. All CIs tested also support aarch64-linux. nixbuild.net does not support aarch64-darwin and x86_64-darwin;
  • Flakes. garnix in the non-enterprise plan only supports flakes;
  • GPU builds. Only GitHub supports runners with GPUs currently.
  • Tests without virtualization (e.g., no NixOS tests). garnix and GitHub support this; nixbuild.net seems to have an Early Access on x86_64-linux, though it seems to be Early Access since 2021 .

In the future, we might have different test types, excluding from those CIs that don't support the required features.

It's useful to understand the methodology of these benchmarks to interpret them correctly. For the impatient, however, you can skip to the results .

Methodology

We wrote a script that:

  • Takes as an argument a repo to be tested. This repo must have a flake file.
  • Gets the last 10 commits in that repo. For each of those commits:
    • It checks the commit out, and finds the derivations to build. This only includes x86_64-linux builds .
    • For each of the CI setups benchmarked :
      • It makes the changes necessary to set up that type of CI (deleting the existing .github and creating a new one, for example).
      • Pushes that changed commit to a new branch in a separate repo.
      • Waits for the check suite to finish, or times out after 2 hours.
      • Records the timing GitHub gives us for the check suite.

You can see all the GitHub Actions workflow runs (for 1-4) here and the garnix logs here .

Note that the packages and checks that we checked may differ from the ones that are enabled on the repo's own CI.

For configuration:

  • We used the default GitHub Linux runner. GitHub Actions has runners of up to 96-cores, which cost correspondingly much. For setups 1-3, this would presumably speed up builds substantially; for 4, presumably less so. Changing this would drive up the costs of benchmarking by quite a lot, however.
  • Our Cachix cache was 50GB in size.
  • nixbuild.net was left on the default configuration, but the CI Workflow (with remote stores) was used.

Running it yourself

If you'd like to try it out yourself, you can follow these steps:

  • Fork the benchmark-github and benchmark-garnix repos;
  • Change the references in benchmarking.ts from garnix-io/benchmark-github and garnix-io/benchmark-garnix to your new forked repo names;
  • Add your own CACHIX_AUTH_TOKEN and NIXBUILD_NET_TOKEN to your GitHub repository secrets;
  • Enable garnix on your fork of benchmark-garnix ;
  • Clone the benchmark repo ;
  • Add a token for GitHub API calls (with e.g. gh login );
  • From inside your clone, run nix run .#benchmark -- <REPO> (for example, nix run .#benchmark atuinsh/atuin ).

Important notes :

  • It will likely cost you some money.
  • It will take a long time, especially if you include the slower options (some of these repos took a full day to check!).
  • Check that all CI systems are succeeding and failing the same tests before looking at the timings! Sometimes CIs will finish quicker than they should because they are failing when they shouldn't. And some setups are fail-on-first-error (e.g. GitHub Actions in serial); most aren't.
  • garnix has a global cache. This means that (for public repos at least) if anyone built a particular commit, it'll likely be cached. For fairness, then, don't test garnix on repos that already use garnix, and don't retest on the same commits you've already tested.
  • Make sure benchmarks don't affect one another. For example, if you run two different types of CI that write to the same cache (Cachix, garnix, magic-nix-cache, and nixbuild.net), you will get artifically low timings
  • Queuing in builds is accounted for differently between garnix and CIs using GitHub Actions. The amount of time spend in a queue is not counted in the total time for GitHub Actions, but it is for garnix. It's somewhat rare that you will be queued with garnix, but potentially if you run multiple tests in parallel it will happen.

The results

We ran the benchmarks for three repos:

The commits picked were the last ten commits at the time the benchmark started. You can see the individual commit hashes by hovering on a datapoint

Note that by default we exclude the first commit from any calculations. This is for two reasons

  • The first commit is susceptible to various influences: in particular, with any CI that has a cache, to whether related builds were ever added to the cache. It's also, for setups with a cache, on average much slower than others.
  • For your average project, there will be many more commits than just ten. Letting the first commit contribute so much to the average would therefore be unrepresentative.

Analysis

A few facts stand out:

  • magic-nix-cache doesn't seem to help at all with build speeds over having no cache (i.e, over github-actions-parallel). This is surprising since Cachix, which also only differs in having a cache, speeds up builds considerably, and since the project's README claims a big speedup (which presumably at least some people are seeing if they're using it). Part of the reason seems to be slow uploads to the cache (example here ). It could be that serial GitHub Actions builds with magic-nix-cache are faster than serial GitHub Actions builds without, since downloads can be kicked off early while waiting on CPU-intensive work.
  • It's not clear why several CIs choked so badly on cryptic/echidna. The issue seems to always have been the package echidna-redistributable, but sometimes it was seemingly very slow uploads , and sometimes slow builds (click on the Settings button and then "Show Timestamp" to see timestamps). Disk space also seems to have been an issue . It's an outlier that deserves more careful examination, though it does not seem to be a fluke — it happened consistently on the same package across CIs, and across tests many hours apart, even as other packages built successfully.
  • garnix performed best across all repos.

Future improvements

There are a few benchmarks missing:

  • magic-nix-cache with FlakeHub Cache (only available for private repos);
  • garnix with incremental builds (faster for most compiled languages, but involves some manual, per-repo work);
  • nixbuild.net without remote store building (slower, but more stable — though we didn't notice any instabilities).
  • Different runner sizes for GitHub Actions.
  • ARM Linux and Macs. nixbuild.net doesn't support Mac builds yet, and for the others benchmarking may get expensive, however.

Ideally, we would also be more systematic in what repositories we check. Most starred repositories that are not starter templates or documentation, with a flake.nix which builds a substantial part of the project (instead of e.g. just devshells) might be a good criterion. Unfortunately, we couldn't figure out how to get GitHub search to accurately show repos, ordered by stars, with a flake.nix.

It was probably a mistake to let GitHub Actions (serial) fail on first error, since it means it did less work than other CIs. If we were to rerun this, we would change that.

Loading data...

Two Miles, One Bridge, 45 Pounds of Bicycle: 'Citi Bike Day' Participants Show Just What These Big Clumsy Bikes Can Do

hellgate
hellgatenyc.com
2025-10-20 19:43:41
Hundreds of cyclists raced over the Williamsburg Bridge on Saturday to test the limits of the "acoustic" Citi Bikes....
Original Article

The traffic light switched green, and a tidal wave of blue bikes lurched forward.

It was Saturday afternoon in Brooklyn, and hundreds of us had congregated at Roebling and Grand, ready to ride. We were preparing to sprint over the Williamsburg Bridge into Manhattan on classic blue Citi Bikes as part of an unsanctioned race known as "Citi Bike Day." The finish line was in Chinatown's Seward Park, just two miles away.

The race was hosted by City Bike Boys , a viral Instagram account that was, until recently—before Citigroup asked them to change their name—known as Citi Bike Boyz. "People look forward to this event the entire year," said the account's founder, Jerome Peel. "That's why I do it."

Give us your email to read the full story

Sign up now for our free newsletters.

Sign up

Claude Code on the Web

Hacker News
www.anthropic.com
2025-10-20 19:12:23
Comments...
Original Article

Today, we're introducing Claude Code on the web, a new way to delegate coding tasks directly from your browser.

Now in beta as a research preview, you can assign multiple coding tasks to Claude that run on Anthropic-managed cloud infrastructure, perfect for tackling bug backlogs, routine fixes, or parallel development work.

Run coding tasks in parallel

Claude Code on the web lets you kick off coding sessions without opening your terminal. Connect your GitHub repositories, describe what you need, and Claude handles the implementation.

Each session runs in its own isolated environment with real-time progress tracking, and you can actively steer Claude to adjust course as it’s working through tasks.

With Claude Code running in the cloud, you can now run multiple tasks in parallel across different repositories from a single interface and ship faster with automatic PR creation and clear change summaries.

Flexible for every workflow

The web interface complements your existing Claude Code workflow. Running tasks in the cloud is especially effective for:

  • Answering questions about how projects work and how repositories are mapped
  • Bugfixes and routine, well-defined tasks
  • Backend changes, where Claude Code can use test-driven development to verify changes

You can also use Claude Code on mobile. As part of this research preview, we’re making Claude Code available on our iOS app so developers can explore coding with Claude on the go. It’s an early preview, and we hope to quickly refine the mobile experience based on your feedback.

Security-first cloud execution

Every Claude Code task runs in an isolated sandbox environment with network and filesystem restrictions. Git interactions are handled through a secure proxy service that ensures Claude can only access authorized repositories—helping keep your code and credentials protected throughout the entire workflow.

You can also add custom network configuration to choose what domains Claude Code can connect to from its sandbox. For example, you can allow Claude to download npm packages over the internet so that it can run tests and validate changes.

Read our engineering blog and documentation for a deep dive on Claude Code’s sandboxing approach.

Getting started

Claude Code on the web is available now in research preview for Pro and Max users. Visit claude.com/code to connect your first repository and start delegating tasks.

Cloud-based sessions share rate limits with all other Claude Code usage. Explore our documentation to learn more.

Peanut Allergies Have Plummeted in Children

Hacker News
www.nytimes.com
2025-10-20 19:09:49
Comments...
Original Article

Please enable JS and disable any ad blocker

x86-64 Playground – An online assembly editor and GDB-like debugger

Hacker News
x64.halb.it
2025-10-20 18:55:18
Comments...
Original Article

An online assembly editor and GDB-like debugger

Screenshot of the Playground web app, in the desktop layout size.

Features

x86-64 Playground is a web app for experimenting and learning x86-64 assembly.

The Playground web app provides an online code editor where you can write, compile, and share assembly code for a wide range of popular assemblers such as GNU As, Fasm and Nasm.

Unlike traditional onlide editors, this playground allows you to follow the execution of your program step by step, inspecting memory and registers of the running process from a GDB-like interface.

You can bring your own programs! Drag and drop into the app any x86-64-Linux static executable to run and debug it in the same sandboxed environment, without having to install anything.

Who is this for?

The app is for anyone that wants to run amd64 assembly snippets or inspect the inner workings of simple Linux ELF files.

It has been designed with the academic world of binary exploitation in mind; The debugger interface offers visualizations similar to the GDB+PwnGDB debugger plugin, and all the controls are labelled with the respective GDB commands.

Combined with Compiler Explorer , this app provides a noise-free environment to learn the basics behind the inner workings of a Linux process. When you are ready, it includes the guides and resources necessary to keep experimenting on your own linux environment, with the actual GDB debugger.

Designed for the web

Have you ever seen a responsive debugger? The app places the mobile experience at the center of its design, and can be embedded in any web page to add interactivity to technical tutorials or documentations.

Follow the guide to embed in your website both the asm editor and debugger.

Screenshot of the Playground web app, showing the layout on mobile devices.

Offline-first and open-source

The app is open-source, and available on Github . It's powered by the Blink Emulator , which emulates an x86-64-Linux environment entirely client side in your browser. This means that all the code you write, or the excutables you debug are never sent to the server.

everything runs in your browser, and once the Web App loads it will work without an internet connection.

Why a 'Boring' Life Might Be the Happiest One

Hacker News
karunpal.substack.com
2025-10-20 18:43:57
Comments...
Original Article

After many years, I watched a film that touched my heart. A film that reminded me it’s always the little things that we overlook that make life worth living. A film that re-introduced me to me. It was a Japanese film called Perfect Days .

The story was simple:

The hero is a toilet cleaner. He lives alone. He wakes at six every morning. Brushes his teeth. Puts on his uniform. Drives his small van. Puts in a cassette in the player. Drives listening to English songs from the 80s. Cleans public toilets across Tokyo. Takes photographs of trees. Smiles at sunlight coming through the leaves. Eats in the same restaurant. Returns home. Reads a book under the lamp. Switches it off. Sleeps.

And the next day…. the same.

Just watching the film made me feel so calm. Like the film itself gave me a warm hug and whispered: “ It’s okay to live a slow boring life”.

When the film ended, I made myself a cup of green tea and sat on the balcony. Just looking at the trees and the sky and feeling the warm cup in my hands. And I said to myself: “ this is home”.

I feel the world has become too fast. Too restless. Too demanding. We don’t say it, but there’s always this quiet pressure we all feel. A pressure to know all the latest trends. The latest fashion. The latest tech. A pressure that creates a subtle fear inside our hearts.

The fear of being left behind…

I remember my father. When computers first came into our house, he would call me every week. How do I print this? What is a PDF? I remember how his hand trembled
moving the pointer across the screen.

It irritated me. What’s so hard about it? I thought. Now the memory hurts my throat. Because now I understand. He was trying. The world rushed past him too fast. But he was trying.

Everyday I read things like:

- If you don’t know these AI tips, you’re losing $10,000 a month.

- 8 exercises than cat melt pounds right out of your belly

- Invest in these 2 crypto coins and double your income in a week

They make you feel like if you don’t know these things, you are a lesser being.

Like last weekend, one of my friends told me about a web series that’s going viral everywhere.

When I told him I’ve not heard about it, he looked at me like: “ you can’t be real!!”

To tell you the truth, I cannot keep up with the world anymore. It’s changing too fast. I’m a quick learner. I’m hard working. But I can’t. I’m done. It’s just too much. And I’m tired. Tired of losing myself. Tired of chasing fads and trends that make me betray who I am.

Now I have reached a place in my life where I am ready to be left behind.

Maybe you’re tired too… and if you are, here are some little things that may help:

Boring doesn’t mean a dull life. Boring means a life that feels calming. To repeat the same cosy routines. To eat the same food. To wear a simple black tee and the old blue Levi’s. To sit with the same two friends you’ve known for twenty years. The world rewards spectacle. Flashy instagram reels. Curated Twitter lives. Always something new. Always someone shining brighter. But behind those lights there is emptiness. The truth is: there are many like you. Many like me. Living quietly. Doing what they like. Finding peace in repetition. A simple boring life may not be shiny and exciting, but it’s peaceful.

In the age of AI I still write with my hands. Using an 8”x 5” notebook and my old loyal Reynolds 045 pen or an HB pencil. There’s something fulfilling about creating something with your own hands. I can write a prompt in ChatGPT and it can spit out a letter like this in 5 seconds. But my conscience doesn’t allow me to do that. I want to do honest work. It may not be perfect and polished, but it carries my soul. I’m not against AI. All I’m saying is, you don’t have to follow the herd. Do it the way that feels right in your heart.

I guess we learn it too late that it’s okay to be average. I’m not an athlete, but I can walk 5 miles. I’m not an professional chef, but I can cook. I’m not a polyglot, but I can read and write in two languages. The only thing I guess I’m a little good at is putting thoughts in my head onto the paper. It may not be perfect, but I try to write the best I can. And sometimes, that’s enough.

I write to make a living. I walk to be healthy. And I read and watch films to keep my mind sharp. I don’t want anything else. I’m happy. People around you may do all kinds of cool things, but you don’t have to. Instead, do things that are sustainable in the long-term. Things that don’t make you feel like you’re betraying yourself. Things that don’t feel forced.

The 2 a.m peace. The same old restaurant. The smell of pencil shavings. The silence of an empty house. The warmth of tea long after the cup is empty. The little things. The ordinary things. We all know them. We all see them. We all feel them. But we rarely talk about them. And yet they are the very things that make life bearable.

When I watched Perfect Days , I realized again: beauty hides in repetition. In routine. In the same sun falling differently every morning. The toilet cleaner’s life looked “boring” from the outside. But inside it was filled with light.

And perhaps that is the secret. Not to run. Not to catch up. But to stay.

To stay with your pen. Your walk. Your small joys. Your quietness. To be left behind. And maybe in that place, you will find yourself. Again.

Algorithms will feed you more of what you like. That’s how they keep us on a leash. Craving for more. But you can stop it right now. Simply by overcoming your fear of missing out and embracing the joy of missing out. Replace it with things that make you peaceful. Like going out for a walk in nature. Reading a novel. Watching an art film. Staring at the ocean. Listening to the birds singing. The rustle of leaves. The calm of your own breath. Once you start doing these things you’ll feel you weren’t really missing anything. It was just your imagination.

I end many of my tweets with, “ you won’t get it ”. I used this phrase 3 years ago. And it resonated with thousands of introverts. Because they felt it, they got it. Sit on a path bench alone. Feel the warm sun. Look the the flowers. Stare at the clouds. Don’t fit it. Don’t be ashamed to be different. Do things that take you closure to who you are every single day. That’s what gives meaning to our lives. To just be your most honest self.

We spend so much of our lives chasing a version of “enough” that was never ours to begin with. Always trying to catch up. To know more. Do more. Become more.
But maybe… the answer was never “more.” Maybe it was less .

Less noise.
Less pressure.
Less chasing.

And more of the things that make you, you.

The world rewards loud voices, fast growth, constant novelty. But introverts, we thrive in silence. In the quiet walks. The familiar tea cups. The slow mornings. The repetition that grounds us.

You don’t need to race to matter.
You don’t need to keep up to be worthy.
You don’t need to be understood by everyone to live a meaningful life.

Maybe the real secret is simply to stay . Stay with your silence. Stay with your quiet hobbies. Stay with your boring, beautiful life.

Because sometimes, in choosing to be “left behind,” you finally return to yourself.

If this letter spoke to you, my book Born to Stand Out was written for moments like this.

It’s for introverts who feel too much. Who think deeply. Who are tired of running in a race that was never meant for them. It’s not another “fix yourself” manual. It’s a gentle invitation to come home to who you already are.

Inside, you’ll find stories, insights, and timeless principles to help you embrace your quiet power, protect your energy, and build a life that feels calm, confident, and true.

Get your copy HERE .

The world doesn’t need another copy. It needs you — slow, real, and unapologetically yourself.

Stay blessed,
Karun

Discussion about this post

Over 75,000 WatchGuard security devices vulnerable to critical RCE

Bleeping Computer
www.bleepingcomputer.com
2025-10-20 18:42:08
Nearly 76,000 WatchGuard Firebox network security appliances are exposed on the public web and still vulnerable to a critical issue (CVE-2025-9242) that could allow a remote attacker to execute code without authentication. [...]...
Original Article

Over 75,000 WatchGuard security devices vulnerable to critical RCE

Nearly 76,000 WatchGuard Firebox network security appliances are exposed on the public web and still vulnerable to a critical issue (CVE-2025-9242) that could allow a remote attacker to execute code without authentication.

Firebox devices act as a central defense hub that controls traffic between internal and external networks, providing protection through policy management, security services, VPN, and real-time real-time visibility through WatchGuard Cloud.

Scans from The Shadowserver Foundation currently show that there are 75,835 vulnerable Firebox appliances across the world, most of them in Europe and North America.

Specifically, the United States tops the list with 24,500 endpoints, followed by Germany (7,300), Italy (6,800), United Kingdom (5,400), Canada (4,100), and France (2,000).

Heatmap of vulnerable Firebox devices
Heatmap of vulnerable Firebox devices
Source: The Shadowserver Foundation

WatchGuard disclosed CVE-2025-9242 in a security bulletin on September 17 and rated the vulnerability with a critical-severity score of 9.3. The security problem is an out-of-bounds write in the Fireware OS ‘iked’ process, which handles IKEv2 VPN negotiations.

The flaw can be exploited without authentication by sending specially crafted IKEv2 packets to vulnerable Firebox endpoints, forcing it to write data to unintended memory areas.

It only affects Firebox appliances that use IKEv2 VPNs with dynamic gateway peers, on versions 11.10.2 through 11.12.4_Update1, 12.0 through 12.11.3, and 2025.1

The vendor suggested an upgrade to one of the following versions:

  • 2025.1.1
  • 12.11.4
  • 12.5.13
  • 12.3.1_Update3 (B722811)

Users should know that version 11.x has reached end of support and will not receive security updates. The recommendation for them is to move to a version that is still supported.

For devices set up only with Branch Office VPNs to static gateway peers, the vendor points to the documentation for securing the connection using the IPSec and IKEv2 protocols as a temporary workaround.

On October 19, The Shadowserver Foundation detected 75,955 vulnerable Firebox firewalls. A spokesperson told BleepingComputer that the current scan is considered reliable, and the figures reflect real deployments and not honeypots, yet.

Although no active exploitation of CVE-2025-9242 has been reported yet, administrators who haven’t applied the security updates are strongly advised to install the patch as soon as possible.

TernFS – an exabyte scale, multi-region distributed filesystem

Hacker News
www.xtxmarkets.com
2025-10-20 18:36:16
Comments...
Original Article

September 2025

XTX is an algorithmic trading firm: it builds statistical models that produce price forecasts for over 50,000 financial instruments worldwide. We use those forecasts to make trades. As XTX's research efforts to build better models ramped up, the demand for resources kept increasing.

The firm started out with a couple of desktops and an NFS server, and 10 years later ended up with tens of thousands of high-end GPUs, hundreds of thousands of CPUs, and hundreds of petabytes of storage.

As compute grew, storage struggled to keep up. We rapidly outgrew NFS first and existing open-source and commercial filesystems later. After evaluating a variety of third-party solutions, we made the decision to implement our own filesystem, which we called TernFS [1] .

We have decided to open source our efforts: TernFS is available as free software on our public GitHub. This post motivates TernFS , explains its high-level architecture , and then explores some key implementation details . If you just want to spin up a local TernFS cluster, head to the README .

Another filesystem?

There's a reason why every major tech company has developed its own distributed filesystem — they're crucial to running large-scale compute efforts, and liable to cause intense disruption if they malfunction. [2]

XTX was in the same position, so we designed TernFS to be a one-stop solution for most of our storage needs, going from relatively 'cold' storage of raw market data to short-lived random-access data used to communicate between GPU jobs running on our cluster.

TernFS:

  • Is designed to scale up to tens of exabytes, trillions of files, millions of concurrent clients.
  • Stores file contents redundantly to protect against drive failures.
  • Has no single point of failure in its metadata services.
  • Supports file snapshot to protect against accidental file deletion.
  • Can span across multiple regions.
  • Is hardware agnostic and uses TCP/IP to communicate.
  • Utilizes different types of storage (such as flash vs. hard disks) cost effectively.
  • Exposes read/write access through its own API over TCP and UDP, and a Linux kernel filesystem module.
  • Requires no external service and has a minimal set of build dependencies. [3]

Naturally, there are some limitations, the main ones being:

  • Files are immutable — once they're written they can't be modified.
  • TernFS should not be used for tiny files — our median file size is 2MB.
  • The throughput of directory creation and removal is significantly constrained compared to other operations.
  • TernFS is permissionless, deferring that responsibility to other services.

We started designing TernFS in early 2022 and began putting it into production in summer 2023. By mid-2024 all of our machine learning efforts were driven out of TernFS, and we're migrating the rest of the firm's storage needs onto it as well.

As of September 2025, our TernFS deployment stores more than 500PB across 30,000 disks, 10,000 flash drives, and three data centres. At peak we serve multiple terabytes per second. To this day, we haven't lost a single byte.

High-level overview

Now that the stage is set, we're ready to explain the various components that make up TernFS. TernFS' core API is implemented by four services:

  • Metadata shards store the directory structure and file metadata.
  • The cross-directory coordinator (or CDC) executes cross-shard transactions.
  • Block services store file contents.
  • The registry stores information about all the other services and monitors them.

 A ──► B means "A sends requests to B" 
                                       
                                       
 ┌────────────────┐                    
 │ Metadata Shard ◄─────────┐          
 └─┬────▲─────────┘         │          
   │    │                   │          
   │    │                   │          
   │ ┌──┴──┐                │          
   │ │ CDC ◄──────────┐     │          
   │ └──┬──┘          │     │          
   │    │             │ ┌───┴────┐     
   │    │             └─┤        │     
 ┌─▼────▼────┐          │ Client │     
 │ Registry  ◄──────────┤        │     
 └──────▲────┘          └─┬──────┘     
        │                 │            
        │                 │            
 ┌──────┴────────┐        │            
 │ Block Service ◄────────┘            
 └───────────────┘

In the next few sections, we'll describe the high-level design of each service and then give more background on other relevant implementation details . [4]

Metadata

To talk about metadata, we first need to explain what metadata is in TernFS. The short answer is: 'everything that is not file contents.' The slightly longer answer is:

  • Directory entries, including all files and directory names.
  • File metadata including creation/modification/access time, logical file size, and so on.
  • The mapping between files and the blocks containing their contents .
  • Other ancillary data structures to facilitate maintenance operations.

TernFS' metadata is split into 256 logical shards . Shards never communicate with each other. This is a general principle in TernFS: each service is disaggregated from the others, deferring to the clients to communicate with each service directly. [5]

A logical shard is further split into five physical instances, one leader and four followers, in a typical distributed consensus setup. The distributed consensus engine is provided by a purpose-built Raft-like implementation, which we call LogsDB, while RocksDB is used to implement read/write capabilities within a shard instance.

Currently all reads and writes go through the leader, but it would be trivial to allow clients to read from followers, and with a bit more effort to switch to a write-write setup.

    ┌─────────┐ ┌─────────┐       ┌───────────┐ 
    │ Shard 0 │ │ Shard 1 │  ...  │ Shard 255 │ 
    └─────────┘ │         │       └───────────┘ 
            ┌───┘         └───────────────────┐ 
            │                                 │ 
            │                  ┌────────────┐ │ 
            │ ┌───────────┐    │ Replica 0  │ │ 
            │ │           ◄────► (follower) │ │ 
 ┌────────┐ │ │ Replica 3 ◄──┐ └────────────┘ │ 
 │ Client ├─┼─► (leader)  ◄─┐│ ┌────────────┐ │ 
 └────────┘ │ │           ◄┐│└─► Replica 1  │ │ 
            │ └───────────┘││  │ (follower) │ │ 
            │              ││  └────────────┘ │ 
            │              ││  ┌────────────┐ │ 
            │              │└──► Replica 2  │ │ 
            │              │   │ (follower) │ │ 
            │              │   └────────────┘ │ 
            │              │   ┌────────────┐ │ 
            │              └───► Replica 4  │ │ 
            │                  │ (follower) │ │ 
            │                  └────────────┘ │ 
            └─────────────────────────────────┘ 

Splitting the metadata into 256 shards from the get-go simplifies the design, given that horizontal scaling of metadata requires no rebalancing, just the addition of more metadata servers.

For instance, our current deployment can serve hundreds of petabytes and more than 100,000 compute nodes with just 10 metadata servers per data centre, with each server housing roughly 25 shard leaders and 100 shard followers.

Given that the metadata servers are totally decoupled from one another, this means that we can scale metadata performance by 25× trivially, and by 100× if we were to start offloading metadata requests to followers.

TernFS shards metadata by assigning each directory to a single shard. This is done in a simple round-robin fashion by the cross-directory coordinator . Once a directory is created, all its directory entries and the files in it are housed in the same shard.

This design decision has downsides: TernFS assumes that the load will be spread across the 256 logical shards naturally. This is not a problem in large deployments, given that they will contain many directories, but it is something to keep in mind. [6]

Cross-directory transactions

Most of the metadata activity is contained within a single shard:

  • File creation, same-directory renames, and deletion.
  • Listing directory contents.
  • Getting attributes of files or directories.

However, some operations do require coordination between shards, namely directory creation, directory removal, and moving directory entries across different directories.

The cross-directory coordinator (CDC) performs these distributed transactions using a privileged metadata shard API. The CDC transactions are stateful, and therefore the CDC uses RocksDB and LogsDB much like the metadata shards themselves to persist its state safely.

 ┌────────┐    ┌──────────┐ ┌───────────┐ 
 │ Client ├─┐  │ Shard 32 │ │ Shard 103 │ 
 └────────┘ │  └────────▲─┘ └─▲─────────┘ 
 ┌─────┬────┼───────────┼─────┼─┐         
 │ CDC │  ┌─▼──────┐    │     │ │         
 ├─────┘  │ Leader ├────┴─────┘ │         
 │        └─────▲──┘            │         
 │              │               │         
 │       ┌──────┴───────┐       │         
 │       │              │       │         
 │ ┌─────▼────┐    ┌────▼─────┐ │         
 │ │ Follower │ .. │ Follower │ │         
 │ └──────────┘    └──────────┘ │         
 └──────────────────────────────┘   

The CDC executes transactions in parallel, which increases throughput considerably, but it is still a bottleneck when it comes to creating, removing, or moving directories. This means that TernFS has a relatively low throughput when it comes to CDC operations. [7]

Block services, or file contents

In TernFS, files are split into chunks of data called blocks . Blocks are read and written to by block services . A block service is typically a single drive (be it a hard disk or a flash drive) storing blocks. At XTX a typical storage server will contain around 100 hard disks or 25 flash drives — or in TernFS parlance 100 or 25 block services. [8]

Read/write access to the block service is provided using a simple TCP API currently implemented by a Go process. This process is hardware agnostic and uses the Go standard library to read and write blocks to a conventional local file system. We originally planned to rewrite the Go process in C++, and possibly write to block devices directly, but the idiomatic Go implementation has proven performant enough for our needs so far.

The registry

The final piece of the TernFS puzzle is the registry . The registry stores the location of each instance of service (be it a metadata shard, the CDC, or a block storage node). A client only needs to know the address of the registry to mount TernFS — it'll then gather the locations of the other services from it.

In TernFS all locations are IPv4 addresses. Working with IPv4 directly simplifies the kernel module considerably, since DNS lookups are quite awkward in the Linux kernel. The exception to this rule is addressing the registry itself, for which DNS is used.

The registry also stores additional information, such as the capacity and available size of each drive, who is a follower or a leader in LogsDB clusters, and so on.

Predictably, the registry itself is a RocksDB and LogsDB C++ process, given its statefulness.

Going global

TernFS tries very hard not to lose data, by storing both metadata and file contents on many different drives and servers. However, we also want to be resilient to the temporary or even permanent loss of one entire data centre. Therefore, TernFS can transparently scale across multiple locations .

The intended use for TernFS locations is for each location to converge to the same dataset. This means that each location will have to be provisioned with roughly equal resources. [9] Both metadata and file contents replication are asynchronous. In general, we judge the event of losing an entire data centre rare enough to tolerate a time window where data is not fully replicated across locations.

Metadata replication is set up so that one location is the metadata primary. Write operations in non-primary locations pay a latency price since they are acknowledged only after they are written to the primary location, replicated, and applied in the originating location. In practice this hasn't been an issue since metadata write latencies are generally overshadowed by writing file contents.

There is no automated procedure to migrate off a metadata primary location — again, we deem it a rare enough occurrence to tolerate manual intervention. In the future we plan to move from the current protocol to a multi-master protocol where each location can commit writes independently, which would reduce write latencies on secondary locations and remove the privileged status of the primary location.

File contents, unlike metadata, are written locally to the location the client is writing from. Replication to other locations happens in two ways: proactively and on-demand. Proactive replication is performed by tailing the metadata log and replicating new file contents. On-demand replication happens when a client requests file content which has not been replicated yet.

Important Details

Now that we've laid down the high-level design of TernFS, we can talk about several key implementation details that make TernFS safer, more performant, and more flexible.

Talking to TernFS

Speaking TernFS' language

The most direct way to talk to TernFS is by using its own API. All TernFS messages are defined using a custom serialization format we call bincode . We chose to develop a custom serialization format since we needed it to work within the confines of the Linux kernel and to be easily chopped into UDP packets.

We intentionally kept the TernFS API stateless, in the sense that each request executes without regard to previous requests made by the same client. This is in contrast to protocols like NFS, whereby each connection is very stateful, holding resources such as open files, locks, and so on.

A stateless API dramatically simplifies the state machines that make up the TernFS core services, therefore simplifying their testing. It also forces each request to be idempotent, or in any case have clear retry semantics, since they might have to be replayed, which facilitates testing further.

It also allows the metadata shards and CDC API to be based on UDP rather than TCP, which makes the server and clients (especially the kernel module) simpler, due to doing away with the need for keeping TCP connections. The block service API is TCP based, since it is used to stream large amounts of contiguous data, and any UDP implementation would have to re-implement a reliable stream protocol. The registry API is also TCP-based, given that it is rarely used by clients, and occasionally needs to return large amounts of data.

While the TernFS API is simple out-of-the-box, we provide a permissively licensed Go library implementing common tasks that clients might want to perform, such as caching directory policies and retrying requests. This library is used to implement many TernFS processes that are not part of the core TernFS services, such as scrubbing , garbage collection , migrations , and the web UI .

Making TernFS POSIX-shaped

While the Go library is used for most ancillary tasks, some with high performance requirements, the main way to access TernFS at XTX is through its Linux kernel module.

This is because, when migrating our machine learning workflows to TernFS, we needed to support a vast codebase working with files directly. This not only meant that we needed to expose TernFS as a normal filesystem, but also that said normal filesystem API needed to be robust and performant enough for our machine learning needs. [10]

For this reason, we opted to work with Linux directly, rather than using FUSE. Working directly with the Linux kernel not only gave us the confidence that we could achieve our performance requirements but also allowed us to bend the POSIX API to our needs, something that would have been more difficult if we had used FUSE. [11]

The main obstacle when exposing TernFS as a 'normal' filesystem is that TernFS files are immutable. More specifically, TernFS files are fully written before being 'linked' into the filesystem as a directory entry. This is intentional: it lets us cleanly separate the API for 'under construction' files and 'completed files', and it means that half-written files are not visible.

However this design is essentially incompatible with POSIX, which endows the user with near-absolute freedom when it comes to manipulating a file. Therefore, the TernFS kernel module is not POSIX-compliant, but rather exposes enough POSIX to allow many programs to work without modifications, but not all.

In practice this means that programs which write files left-to-right and never modify the files' contents will work out-of-the-box. While this might seem very restrictive, we found that a surprising number of programs worked just fine. [12] Programs that did not follow this pattern were modified to first write to a temporary file and then copy the finished file to TernFS.

While we feel that writing our own kernel module was the right approach, it proved to be the trickiest part of TernFS, and we would not have been able to implement it without some important safety checks in the TernFS core services. [13]

S3 gateway

Almost all the storage-related activity at XTX is due to our machine-learning efforts, and for those purposes the TernFS' kernel module has served us well. However, as TernFS proved itself there, we started to look into offering TernFS to the broader firm.

Doing so through the kernel module presented multiple challenges. For starters installing a custom kernel module on every machine that needed to reach TernFS is operationally cumbersome. Moreover, while all machine-learning happens in clusters housed in the same data centre as TernFS itself, we wanted to expose TernFS in a way that's more amenable to less local networks, for instance by removing the need for UDP. Finally, TernFS does not have any built-in support for permissions or authentication, which is a requirement in multi-tenant scenarios.

To solve all these problems, we implemented a gateway for TernFS, which exposes a TernFS subtree using the S3 API. The gateway is a simple Go process turning S3 calls into TernFS API calls. The S3 gateway is not currently open sourced since it is coupled to authentication services internal to XTX, but we have open sourced a minimal S3 gateway to serve as a starting point for third-party contributors to build their own.

We've also planned an NFS gateway to TernFS, but we haven't had a pressing enough need yet to complete it.

The web UI and the JSON interface

Finally, a view of TernFS is provided by its web UI. The web UI is a stateless Go program which exposes most of the state of TernFS in an easy-to-use interface. This state includes the full filesystem contents (both metadata and file contents), the status of each service including information about decommissioned block services, and so on.

Moreover, the web UI also exposes the direct TernFS API in JSON form, which is very useful for small scripts and curl-style automation that does not warrant a full-blown Go program.

Directory Policies

To implement some of the functionality we'll describe below, TernFS adopts a system of per-directory policies.

Policies are used for all sorts of decisions, including:

Each of the topics above (and a few more we haven't mentioned) correspond to a certain policy tag . The body of the policies are stored in the metadata together with the other directory attributes.

Policies are inherited: if a directory does not contain a certain policy tag, it transitively inherits from the parent directory. TernFS clients store a cache of policies to allow for traversal-free policy lookup for most directories.

Keeping blocks in check

A filesystem is no good if it loses, leaks, corrupts, or otherwise messes up its data. TernFS deploys a host of measures to minimize the chance of anything going wrong. So far, these have worked: we've never lost data in our production deployment of TernFS. This section focuses on the measures in place to specifically safeguard files' blocks.

Against bitrot, or CRC32-C

The first and possibly most obvious measure consists of aggressively checksumming all TernFS' data. The metadata is automatically checksummed by RocksDB, and every block is stored in a format interleaving 4KiB pages with 4byte CRC32-C checksums.

CRC32-C was picked since it is a high-quality checksum and implemented on most modern silicon. [14] It also exhibits some desirable properties when used together with Reed-Solomon coding .

4KiB was picked since it is the read boundary used by Linux filesystems and is fine-grained while still being large enough to render the storage overhead of the 4byte checksums negligible.

Interleaving the CRCs with the block contents does not add any safety, but it does improve operations in two important ways. First, it allows for safe partial reads: clients can demand only a few pages from a block which is many megabytes in size and still check the reads against its checksum. Second, it allows scrubbing files locally on the server which hosts the blocks, without communicating with other services at all.

Storing files redundantly, or Reed-Solomon codes

We've been talking about files being split into blocks, but we haven't really explained how files become blocks.

The first thing we do to a file is split it into spans . Spans are at most 100MiB and are present just to divide files into sections of a manageable size.

Then each span is divided into D data blocks , and P parity blocks . D and P are determined by the corresponding directory policy in which the file is created. When D is 1, the entire contents of the span become a single block, and that block is stored D+P times. This scheme is equivalent to a simple mirroring scheme and allows it to lose up to P blocks before losing file data.

While wasteful, mirroring the entire contents of the file can be useful for very hot files, since TernFS clients will pick a block at random to read from, thereby sharing the read load across many block services. And naturally files which we do not care much for can be stored with D = 1 and P = 0, without any redundancy.

That said, most files will not be stored using mirroring but rather using Reed-Solomon coding. Other resources can be consulted to understand the high-level idea and the low-level details of Reed-Solomon coding, but the gist is it allows us to split a span into D equally sized blocks (some padding might be necessary), and then generate P blocks of equal size such that up to any P blocks can be lost while retaining the ability to reconstruct all the other blocks.

As mentioned, D and P are fully configurable, but at XTX we tend to use D = 10 and P = 4, which allows us to lose up to any four drives for any file.

Drive type picking

We now know how to split files into a bunch of blocks. The next question is: which drives to pick to store the blocks on. The first decision is which kind of drive to use. At XTX we separate drives into two broad categories for this purpose — flash and spinning disks.

When picking between these two, we want to balance two needs: minimizing the cost of hardware by utilizing hard disks if we can [15] , and maximizing hard disk productivity by having them reading data most of the time, rather than seeking.

To achieve that, directory policies offer a way to tune how large each block will be, and to tune which drives will be picked based on block size. This allows us to configure TernFS so that larger files that can be read sequentially are stored on hard disks, while random-access or small files are stored on flash. [16]

Currently this system is not adaptive, but we found that in practice it's easy to carve out sections of the filesystem which are not read sequentially. We have a default configuration which assumes sequential reads and then uses hard disks down to roughly 2.5MB blocks, below which hard disks stop being productive enough and blocks start needing to be written to flash.

Block service picking

OK, we now know what type of drive to select for our files, but we still have tens of thousands of individual drives to pick from. Picking the 'right' individual drive requires some sophistication.

The first thing to note is that drive failures or unavailability are often correlated. For instance, at XTX a single server handles 102 spinning disks. If the server is down, faulty, or needs to be decommissioned, it'll render its 102 disks temporarily or permanently unavailable.

It's therefore wise to spread a file's blocks across many servers. To achieve this, each TernFS block service (which generally corresponds to a single drive) has a failure domain . When picking block services in which to store the blocks for a given file, TernFS will make sure that each block is in a separate failure domain. In our TernFS deployment a failure domain corresponds to a server, but other users might wish to tie it to some other factor as appropriate.

TernFS also tries hard to avoid write bottlenecks by spreading the current write load across many disks. Moreover, since new drives can be added at any time, it tries to converge to a situation where each drive is roughly equally filled by assigning writing more to drives with more available space.

Mechanically this is achieved by having each shard periodically request a set of block services to use for writing from the registry. When handing out block services to shards, the registry selects block services according to several constraints:

  • It never gives block services from the same failure domain to the same shard
  • It minimizes the variance in how many shards each block service is currently assigned to
  • It prioritizes block services which have more available space.

Then when a client wants to write a new span, requiring D+P blocks, the shard simply selects D+P block services randomly amongst the ones it last received from the registry.

One concept currently absent from TernFS is what is often known as 'copyset replication'. When assigning disks to files at random (even with the caveat of failure domains) the probability of rendering at least one file unreadable quickly becomes a certainty as more and more drives fail:

Probability of data loss vs Failed disks

Copysets reduce the likelihood of data loss occurring by choosing blocks out of a limited number of sets of drives, as opposed to picking the drives randomly. This dramatically reduces the probability of data loss [17] . They are generally a good idea, but we haven't found them to be worthwhile, for a few reasons.

First, evacuating a 20TB drive takes just a few minutes, and in the presence of multiple failed drives the migrator process evacuates first the files which are present in multiple failed drives to get ahead of possible data loss. This means that for TernFS to lose data within a single data centre tens of drives would have to fail within a matter of seconds.

More importantly, our TernFS deployment is replicated across three data centres. This replication eliminates the chance of losing data due to 'independent' drive failures — thousands of drives would need to fail at once. Obviously, data centre wide events can cause a large proportion of the drives within it to fail, but having such an event in three data centres at once is exceedingly unlikely.

Finally, copysets are not without drawbacks or complications. Assigning drives at random is an optimal strategy when it comes to evacuating drives quickly, since the files with blocks in the drives to be evacuated will be evenly spread over the rest of the filesystem, and since we only ever need to replace the failed blocks given that we're not constrained by fitting the new set of blocks in predetermined copysets. This means that the evacuation procedure will not be bottlenecked by drive throughput, which is what enables evacuation to finish in a matter of minutes. Moreover, the algorithm to distribute drives to shards is significantly simpler and more flexible than if it needed to care about copysets.

However, users that wish to deploy TernFS within a single data centre might wish to implement some form of copyset replication. Such a change would be entirely contained to the registry and would not change any other component.

Block Proofs

We now have a solid scheme to store files redundantly (thanks to Reed-Solomon codes) and protect against bitrot (thanks to the checksums). However, said schemes are only as good as their implementation.

As previously mentioned, TernFS clients communicate their intention to write a file to metadata servers, the metadata servers select block services that the blocks should be written to, and the clients then write the blocks to block services independently of the metadata services. The same happens when a client wants to erase blocks: the client first communicates its intentions to delete the blocks to the right metadata shard and then performs the erasing itself.

This poses a challenge. While verifying the correctness of the core TernFS services is feasible, verifying all clients is not, but we'd still like to prevent buggy clients from breaking key invariants of the filesystem.

Buggy clients can wreak havoc in several ways:

  • They can leak data by writing blocks to block services that are not referenced anywhere in the metadata.
  • They can lose data by erasing blocks which are still referenced in metadata.
  • They can corrupt data by telling the metadata services they'll write something and then writing something else.

We address all these points by using what we call block proofs . To illustrate how block proofs work, it's helpful to go through the steps required to write new data to a file.

  1. When a client is creating a file, it'll do so by adding its file spans one-by-one. For each span the client wants to add it sends an 'initiate span creation' request to the right metadata shard. This request contains both the overall checksum of the span, and the checksum of each block in it (including parity blocks).
  2. The metadata shard checks the consistency of the checksum of the span and of its blocks, something it can do thanks to some desirable mathematical properties of CRCs.
  3. The shard picks block services for the blocks to be written in and returns this information to the client together with a signature for each 'block write' instruction.
  4. The client forwards this signature to the block services, which will refuse to write the block without it. Crucially, the cryptographic signature ranges over a unique identity for the block (ensuring we only write the block we mean to write), together with its checksum, ensuring we don't write the wrong data. [18]
  5. After committing the block to disk, the block service returns a 'block written' signature to the client.
  6. Finally, the client forwards the block written signature back to the shard, which certifies that the span has been written only when it has received the signatures for all the blocks that make up the span. [19]

Similarly, when a client wants to delete a span, it first asks the metadata shard to start doing so. The metadata shard marks the span as 'in deletion' and returns a bunch of 'block erase' signatures to the client. The client then forwards the signatures to the block services that hold the blocks, which delete the blocks, and return a 'block erased' signature. The clients forward these signatures back to the metadata shards, which can then forget about the span entirely.

We use AES to generate the signatures for simplicity but note that the goal here is not protecting ourselves from malicious clients — just buggy ones. The keys used for the signature are not kept secret, and CRC32-C is not a secure checksum. That said, we've found this scheme enormously valuable in the presence of complex clients . We spent considerable efforts making the core services very simple so we could then take more implementation risks in the clients, with the knowledge that we would have a very low chance of corrupting the filesystem itself.

Scrubbing

Finally, if things go wrong, we need to notice. The most common failure mode for a drive is for it to fail entirely, in which case our internal hardware monitoring system will pick it up and migrate from it automatically. The more insidious (and still very common) case is a single sector failing in a drive, which will only be noticed when we try to read the block involving that sector.

This is acceptable for files which are read frequently, but some files might be very 'cold' but still very important.

Consider the case of raw market data taps which are immediately converted to some processed, lossy format. While we generally will use the file containing the processed data, it's paramount to store the raw market data forever so that if we ever want to include more information from the original market data, we can. So important cold files might go months or even years without anyone reading them, and in the meantime, we might find that enough blocks have been corrupted to render them unreadable. [20]

To make sure this does not happen, a process called the scrubber continuously reads every block that TernFS stores, and replaces blocks with bad sectors before they can cause too much damage.

Snapshots and garbage collection

We've talked at length about what TernFS does to try to prevent data loss due to hardware failure or bugs in clients. However, the most common type of data loss is due to human error — the rm —rf / home/alice/notes.txt scenario.

To protect against these scenarios, TernFS implements a lightweight snapshotting system. When files or directories are deleted, their contents aren't actually deleted. Instead, a weak reference to them is created. We call such weak references snapshot directory entries.

Snapshot entries are not be visible through the kernel module or the S3 gateway, but are visible through the direct API , and at XTX we have developed internal tooling to easily recover deleted files through it. [21] Deleted files are also visible through the TernFS web UI.

Given that 'normal' file operations do not delete files, but rather make them a snapshot, the task of freeing up space is delegated to an external Go process, the garbage collector . The garbage collector traverses the filesystem and removes expired snapshots, which involves deleting their blocks permanently. Snapshot expiry is predictably regulated by directory policies .

Keeping TernFS healthy

This last section covers how we (humans of XTX) notice problems in TernFS, and how TernFS self-heals when things go wrong — both key topics if we want to ensure no data loss and notice performance problems early.

Performance metrics

TernFS exposes a plethora of performance metrics through the HTTP InfluxDB line protocol . While connecting TernFS to a service which ingests these metrics is optional, it is highly recommended for any production service.

Moreover, the kernel module exposes many performance metrics itself through DebugFS.

Both types of metrics, especially when used in tandem, have proved invaluable to resolve performance problems quickly.

Logging and alerts

TernFS services log their output to files in a simple line-based format. The internal logging API is extremely simple and includes support for syslog levels out-of-the-box. At XTX we run TernFS as normal systemd services and use journalctl to view logs.

As with metrics, the kernel module includes various logging facilities as well. The first type of logging is just through dmesg, but the kernel module also includes numerous tracepoints for low-overhead opt-in logging of many operations.

TernFS is also integrated with XTX's internal alerting system, called XMon , to page on call developers when things go wrong. XMon is not open source, but all the alerts are also rendered as error lines in logs. [22] We plan to eventually move to having alerts feed off performance metrics, which would make them independent from XMon, although we don't have plans to do so in the short-term.

Migrations

Finally, there's the question of what to do when drives die — and they will die, frequently, when you have 50,000 of them. While drives dying is not surprising, we've been surprised at the variety of different drive failures. [23] A malfunctioning drive might:

  • Produce IO errors when reading specific files. This is probably due to a single bad sector.
  • Produce IO errors when reading or writing anything. This might happen because enough bad sectors have gone bad and the drive cannot remap them, or for a variety of other reasons.
  • Return wrong data. This is usually caught by the built-in error correction codes in the hard drives, but not always.
  • Lie about data being successfully persisted. This can manifest in a variety of ways: file size being wrong on open, file contents being partially zero'd out, and so on.
  • Disappear from the mount list, only to reappear when the machine is rebooted, but missing some data.

When clients fail to read from a drive, they'll automatically fall back on other drives to reconstruct the missing data, which is extremely effective in hiding failures from the end-user. That said, something needs to be done about the bad drives, and done quickly to avoid permanent data loss .

The TernFS registry allows marking drives as faulty. Faulty drives are then picked up by the migrator , a Go process which waits for bad drives and then stores all its blocks onto freshly picked block services.

TernFS also tries to mark drives as bad automatically using a simple heuristic based on the rate of IO errors the drive is experiencing. The number of drives automatically marked as faulty is throttled to avoid having this check go awry and mark the whole cluster as faulty, which would not be catastrophic but would still be messy to deal with.

Moreover, drives that are faulty in subtle ways might not be picked up by the heuristics, which means that occasionally a sysadmin will need to mark a drive as faulty manually, after which the migrator will evacuate them.

Closing thoughts

At XTX we feel strongly about utilizing our resources efficiently. When it comes to software, this means having software that gets close to some theoretical optimum when it comes to total cost of ownership. This culture was borne out by competing hard for technological excellence when doing on-exchange trading at first, and by our ever-growing hardware costs as our business has grown later.

Such idealized tools might not exist or be available yet, in which case we're happy to be the tool makers. TernFS is a perfect example of this and we're excited to open source this component of our business for the community.

Crucially, the cost of implementation of a new solution is often overblown compared to the cost of tying yourself to an ill-fitting, expensive third-party solution. Designing and implementing a solution serving exactly your needs allows for much greater simplicity. If the requirements do change, as often happens, changes can be implemented very quickly, again only catering to your needs.

That said, we believe that TernFS' set of trade-offs are widely shared across many organizations dealing with large-scale storage workloads, and we hope we'll contribute to at least slowing down the seemingly constant stream of new filesystems .

Fil-C

Lobsters
fil-c.org
2025-10-20 18:25:56
Comments...
Original Article

Welcome to Fil-C, a memory safe implementation of the C and C++ programming languages you already know and love.

What is Fil-C?

Fil-C is a fanatically compatible memory-safe implementation of C and C++. Lots of software compiles and runs with Fil-C with zero or minimal changes. All memory safety errors are caught as Fil-C panics. Fil-C achieves this using a combination of concurrent garbage collection and invisible capabilities ( InvisiCaps ). Every possibly-unsafe C and C++ operation is checked. Fil-C has no unsafe statement and only limited FFI to unsafe code.

Key Features

  • Memory Safety : Advanced runtime checks to prevent exploitable memory safety errors . Unlike other approaches to increasing the safety of C, Fil-C achieves complete memory safety with zero escape hatches.
  • C and C++ Compatibility : Your C or C++ software most likely compiles and runs in Fil-C with zero changes. Many open source programs , including CPython, OpenSSH, GNU Emacs, and Wayland work great in Fil-C. Even advanced features like threads, atomics, exceptions, signal handling, longjmp / setjmp , and shared memory ( mmap style or Sys-V style) work. It's possible to run a totally memory safe Linux userland , including GUI, with Fil-C.
  • Modern Tooling : Compiler is based on a recent version of clang (20.1.8), supports all clang extensions, most GCC extensions, and works with existing C/C++ build systems (make, autotools, cmake, meson, etc).

Quick Links

License

Fil-C's compiler is licensed under Apache 2 . Fil-C's runtime is licensed under BSD 2-clause . Fil-C has two standard libraries; musl is used in the pizfix slice binary distributions and is licensed under MIT , while glibc is available in the /opt/fil binary distributions and in Pizlix and is licensed under LGPL 2.1 .

Community

Join the Fil-C Discord community to discuss the language implementation, share projects, and contribute to its development.

Dutch spy services have restricted intelligence-sharing with the United States

Hacker News
intelnews.org
2025-10-20 18:25:45
Comments...
Original Article

Mark Rutte NATO Trump INTELLIGENCE SERVICES IN THE Netherlands have restricted intelligence-sharing with their United States counterparts due to political developments in Washington, according to two leading Dutch intelligence officials. This development—which may typify Europe’s current approach to transatlantic intelligence-sharing—was confirmed last week by the heads of the Netherlands’ two largest intelligence agencies in a joint interview with De Volkskrant newspaper.

The joint interview was given to De Volkskrant by Erik Akerboom, director of the General Intelligence and Security Service (AIVD), and Peter Reesink , director of the General Intelligence and Security Service (MIVD)—AIVD’s civilian counterpart.

Both men stressed that inter-agency relations between Dutch and American intelligence organizations remain “excellent”. However, they added that the Netherlands has grown more selective about what it chooses to share with American intelligence agencies—particularly the Central Intelligence Agency and the National Security Agency. “That we sometimes don’t share things anymore, that’s true,” Reesink said, referring to sharing information with American intelligence agencies. Akerboom added: “sometimes you have to think case by case.” He went on to say: “We can’t say what we will or won’t share. But we can say that we are more critical.”

According to the two senior officials, Dutch spies have been intensifying intelligence cooperation and sharing with their European counterparts. This is particularly applicable to a collection of central and northern European intelligence services from countries like Scandinavia, France, Germany, the United Kingdom, and Poland, according to De Volkskrant .

Author : Ian Allen | Date : 20 October 2025 | Permalink

Chess grandmaster Daniel Naroditsky has passed away

Hacker News
old.reddit.com
2025-10-20 18:24:58
Comments...
Original Article

whoa there, pardner!

Your request has been blocked due to a network policy.

Try logging in or creating an account here to get back to browsing.

If you're running a script or application, please register or sign in with your developer credentials here . Additionally make sure your User-Agent is not empty and is something unique and descriptive and try again. if you're supplying an alternate User-Agent string, try changing back to default as that can sometimes result in a block.

You can read Reddit's Terms of Service here .

If you think that we've incorrectly blocked you or you would like to discuss easier ways to get the data you want, please file a ticket here .

When contacting us, please include your Reddit account along with the following code:

a79ca9f9-6488-4da7-9849-25690b806405

Getting DeepSeek-OCR working on an NVIDIA Spark via brute force using Claude Code

Simon Willison
simonwillison.net
2025-10-20 18:21:52
DeepSeek released a new model yesterday: DeepSeek-OCR, a 6.6GB model fine-tuned specifically for OCR. They released it as model weights that run using PyTorch and CUDA. I got it running on the NVIDIA Spark by having Claude Code effectively brute force the challenge of getting it working on that part...
Original Article

20th October 2025

DeepSeek released a new model yesterday: DeepSeek-OCR , a 6.6GB model fine-tuned specifically for OCR. They released it as model weights that run using PyTorch and CUDA. I got it running on the NVIDIA Spark by having Claude Code effectively brute force the challenge of getting it working on that particular hardware.

This small project (40 minutes this morning, most of which was Claude Code churning away while I had breakfast and did some other things) ties together a bunch of different concepts I’ve been exploring recently. I designed an agentic loop for the problem, gave Claude full permissions inside a Docker sandbox, embraced the parallel agents lifestyle and reused my notes on the NVIDIA Spark from last week.

I knew getting a PyTorch CUDA model running on the Spark was going to be a little frustrating, so I decided to outsource the entire process to Claude Code to see what would happen.

TLDR: It worked. It took four prompts (one long, three very short) to have Claude Code figure out everything necessary to run the new DeepSeek model on the NVIDIA Spark, OCR a document for me and produce copious notes about the process.

The setup

I connected to the Spark from my Mac via SSH and started a new Docker container there:

docker run -it --gpus=all \
  -v /usr/local/cuda:/usr/local/cuda:ro \
  nvcr.io/nvidia/cuda:13.0.1-devel-ubuntu24.04 \
  bash

Then I installed npm and used that to install Claude Code:

apt-get update
DEBIAN_FRONTEND=noninteractive TZ=Etc/UTC apt-get install -y npm
npm install -g @anthropic-ai/claude-code

Then started Claude Code, telling it that it’s OK that it’s running as root because it’s in a sandbox:

IS_SANDBOX=1 claude --dangerously-skip-permissions

It provided me a URL to click on to authenticate with my Anthropic account.

The initial prompts

I kicked things off with this prompt:

Create a folder deepseek-ocr and do everything else in that folder

Then I ran the following, providing links to both the GitHub repository and the Hugging Face model, providing a clue about NVIDIA ARM and giving it an image ( this one , see previous post ) that I wanted it to run OCR on.

Your task is to get this working: https://github.com/deepseek-ai/DeepSeek-OCR —it uses Hugging Face Transformers and models from https://huggingface.co/deepseek-ai/DeepSeek-OCR —you are running in a Docker container on an NVIDIA ARM device, first research your environment to check it will be possible to do this. Git clone both of those—you may need to enable LFS first. Make extensive notes as you go in a notes.md file which you should only ever append new notes to. At the end of this you should have scripts in the folder which can be run to both setup and then run the model against an example—use https://static.simonwillison.net/static/2025/ft.jpeg as the example image that you OCR. As a final step create a detailed README.md file explaining the project and what you learned and how to run your code.

Note how I also told it to “make extensive notes as you go in a notes.md file which you should only ever append new notes to” so I’d have some useful notes at the end.

The follow-ups

It churned away for 13 minutes, and made a sequence of discoveries about the platform and the model and the difficulties of running PyTorch on it.

And then... it gave up :

The NVIDIA GB10 GPU has compute capability sm_121, which is too new for PyTorch 2.5.1

  • PyTorch 2.5.1 supports: sm_50, sm_80, sm_86, sm_89, sm_90, sm_90a
  • GB10 requires: sm_121
  • Error: CUDA error: no kernel image is available for execution on the device

Additionally, the model has hardcoded .cuda() calls, preventing CPU fallback without code modifications.

So I gave it some encouragement:

Any other options that might help get CUDA working like using different versions of PyTorch that have available CUDA wheels for this platform? Have you checked to see what PyTorch versions already have these ARM CUDA wheels that would work here?

It ran this command to grep for all available PyTorch wheels on download.pytorch.org that match the ARM architecture:

curl -s https://download.pytorch.org/whl/torch/ | grep -o 'torch-[0-9.]*.*aarch64.whl' | sort -V | tail -20

... and that unblocked it.

Excellent! I can see that PyTorch 2.9.0 has ARM64 wheels for CUDA 12.8, 12.9, and CUDA 13.0 (cu130)! This is exactly what we need!

It installed the wheels it found and reported:

Good news! PyTorch 2.9.0 can detect the GB10 GPU with CUDA capability 12.1 (sm_121), but it says the maximum supported is 12.0. However, it’s just a warning now instead of a hard error! Let’s try running inference: [...]

Then kept on churning for a bit, ignoring warnings, and managed to run the model ! It loaded the image and ran this prompt:

<|grounding|>OCR this image.

And got back output that started like this:

<|ref|>The perils of vibe coding<|/ref|><|det|>[[352, 30, 624, 111]]<|/det|>
<|ref|>opt<|/ref|><|det|>[[970, 0, 994, 30]]<|/det|>
<|ref|>such<|/ref|><|det|>[[970, 24, 996, 58]]<|/det|>
<|ref|>days<|/ref|><|det|>[[970, 52, 996, 87]]<|/det|>
<|ref|>pavi<|/ref|><|det|>[[970, 85, 996, 118]]<|/det|>
<|ref|>TECHNOLOGY<|/ref|><|det|>[[33, 199, 127, 230]]<|/det|>
<|ref|>holds the promise of replacing program-<|/ref|><|det|>[[220, 170, 390, 210]]<|/det|>
<|ref|>mers - a profession of very well paid<|/ref|><|det|>[[220, 199, 390, 238]]<|/det|>
<|ref|>people.For another, the work can be<|/ref|><|det|>[[220, 228, 390, 267]]<|/det|>

Claude declared victory and pointed me to the output/result.mmd file, which contained only whitespace. So OCR had worked but the result had failed to be written correctly to disk.

I said :

That result.mmd file looks to be exclusively whitespace to me, try and get a better result there—the bounding box image was good though

Which caused Claude to reconsider the prompt it used:

Maybe we need to use a different prompt. Looking at the README from DeepSeek-OCR, there are different prompts:

  • "<image>\n<|grounding|>Convert the document to markdown." —for documents
  • "<image>\n<|grounding|>OCR this image." —general OCR
  • "<image>\nFree OCR." —without layouts

The “grounding” mode seems to be focused on bounding boxes, which is why we got good bounding box results but the text output might be in a different format.

Let me try running it again with a different prompt that’s more focused on text extraction rather than grounding.

It tried all of the prompt variants, made a note about them and created a file called PROMPTS_GUIDE.md with a bunch more details, including this comparison table:

Prompt Speed Text Quality Structure Coordinates Best Use Case
Free OCR ⚡⚡⚡ Fast ⭐⭐⭐ Excellent ⭐ Basic ❌ No General OCR
Markdown ⚡⚡ Medium ⭐⭐⭐ Excellent ⭐⭐⭐ Full ⭐⭐ Partial Documents
Grounding ⚡ Slow ⭐⭐ Good ⭐ Basic ⭐⭐⭐ Full Annotations
Detailed ⚡⚡⚡ Fastest ⭐ N/A ❌ N/A ❌ No Image analysis

And this table under the heading “Performance Benchmarks”:

Test image: 3503×1668 pixels (Financial Times article)

Prompt Time Output Size Tokens
Free OCR 24s Clean text 2257
Markdown 39s Formatted MD 2257 + structure
Grounding 58s Text + coords 2257 + boxes
Detailed 9s Description ~300

My final prompt was this, to gather everything together into a zip file I could extract from the Docker container:

Create a zip file with the output and output_text and all of the scripts and notes—but leave out the github repo and the huggingface repo directories

I added the contents of that zip file to my new simonw/research GitHub repo in the deepseek-ocr-nvidia-spark folder.

Claude really likes writing notes! Here’s the directory listing of that finished folder:

  |-- download_test_image.sh
  |-- FINAL_SUMMARY.md
  |-- notes.md
  |-- output
  |   |-- images
  |   |-- result_with_boxes.jpg
  |   `-- result.mmd
  |-- output_text
  |   |-- detailed
  |   |   |-- images
  |   |   |-- result_with_boxes.jpg
  |   |   `-- result.mmd
  |   |-- free_ocr
  |   |   |-- images
  |   |   |-- result_with_boxes.jpg
  |   |   `-- result.mmd
  |   `-- markdown
  |       |-- images
  |       |   `-- 0.jpg
  |       |-- result_with_boxes.jpg
  |       `-- result.mmd
  |-- PROMPTS_GUIDE.md
  |-- README_SUCCESS.md
  |-- README.md
  |-- run_ocr_best.py
  |-- run_ocr_cpu_nocuda.py
  |-- run_ocr_cpu.py
  |-- run_ocr_text_focused.py
  |-- run_ocr.py
  |-- run_ocr.sh
  |-- setup.sh
  |-- SOLUTION.md
  |-- test_image.jpeg
  |-- TEXT_OUTPUT_SUMMARY.md
  `-- UPDATE_PYTORCH.md

Takeaways

My first prompt was at 15:31:07 (UTC). The final message from Claude Code came in at 16:10:03. That means it took less than 40 minutes start to finish, and I was only actively involved for about 5-10 minutes of that time. The rest of the time I was having breakfast and doing other things.

Having tried and failed to get PyTorch stuff working in the past, I count this as a huge win. I’ll be using this process a whole lot more in the future.

How good were the actual results? There’s honestly so much material in the resulting notes created by Claude that I haven’t reviewed all of it. There may well be all sorts of errors in there, but it’s indisputable that it managed to run the model and made notes on how it did that such that I’ll be able to do the same thing in the future.

I think the key factors in executing this project successfully were the following:

  1. I gave it exactly what it needed: a Docker environment in the target hardware, instructions on where to get what it needed (the code and the model) and a clear goal for it to pursue. This is a great example of the pattern I described in designing agentic loops .
  2. Running it in a Docker sandbox meant I could use claude --dangerously-skip-permissions and leave it running on its own. If I’d had to approve every command it wanted to run I would have got frustrated and quit the project after just a few minutes.
  3. I applied my own knowledge and experience when it got stuck. I was confident (based on previous experiments with the Spark) that a CUDA wheel for ARM64 existed that was likely to work, so when it gave up I prompted it to try again, leading to success.

Oh, and it looks like DeepSeek OCR is a pretty good model if you spend the time experimenting with different ways to run it.

Bonus: Using VS Code to monitor the container

A small TIL from today: I had kicked off the job running in the Docker container via SSH to the Spark when I realized it would be neat if I could easily monitor the files it was creating while it was running.

I asked Claude.ai :

I am running a Docker container on a remote machine, which I started over SSH

How can I have my local VS Code on MacOS show me the filesystem in that docker container inside that remote machine, without restarting anything?

It gave me a set of steps that solved this exact problem:

  1. Install the VS Code “Remote SSH” and “Dev Containers” extensions
  2. Use “Remote-SSH: Connect to Host” to connect to the remote machine (on my Tailscale network that’s spark@100.113.1.114 )
  3. In the window for that remote SSH session, run “Dev Containers: Attach to Running Container”—this shows a list of containers and you can select the one you want to attach to
  4. ... and that’s it! VS Code opens a new window providing full access to all of the files in that container. I opened up notes.md and watched it as Claude Code appended to it in real time.

At the end when I told Claude to create a zip file of the results I could select that in the VS Code file explorer and use the “Download” menu item to download it to my Mac.

Kickstarter Campaign for Ben Zotto’s ‘Go Computer Now!’, a Book on Sphere, the Nearly Forgotten Personal Computer Company

Daring Fireball
www.kickstarter.com
2025-10-20 18:20:48
Ben Zotto: Name every pioneering personal computer you can think of from the 1970s. The MITS Altair 8800. The Apple-1. The IMSAI 8080. You may even know about the SWTPC 6800 or Processor Technology Sol-20. There’s a computer missing from that list, and it’s an important one: Sphere Corporation’...

Ruby Butler: It’s Time to Rethink RubyGems and Bundler

Lobsters
rubyelders.com
2025-10-20 18:18:44
Comments...
Original Article

By simi avatar simi October 19, 2025 — Praha, Žižkov

The Realization

Earlier this year — long before the recent Ruby Central fiasco — I started exploring ideas for the next major version (4.x) of RubyGems and Bundler. That’s when it struck me: somewhere along the way, these projects stopped moving forward. Formats like gemspec , Gemfile , and others haven’t evolved in years, and any attempt to change them usually ends in early rejection.

Over time, RubyGems and Bundler became finely tuned for large company deployments and super-stable environments , but that same stability now limits innovation. The excitement of experimenting, hacking, and building new ideas into Ruby’s tooling slowly faded — at least for me.

The Inspiration

While working on some Python projects recently (mostly Airflow -related), I was deeply inspired by uv — its simplicity blew me away. The run command just worked : no confusing errors, no excuses, no manual setup. It installed everything needed — even Python itself — and executed the program at lightning speed.

As a partial web developer, I’ve also long admired how JavaScript projects use the scripts section in package.json . It’s such a clean way to define project-specific commands — no more copy-pasting long, platform-specific commands from README s. In Ruby, I often had to scan CI setups to figure out how to run specs ( rake , rspec , or another runner), how to run linters, and so on.

The Acceptance

In Ruby, there’s no single entry point for this kind of extra project metadata or developer scripts. Yes, there’s Rake , but it suffers from the classic chicken-and-egg problem — it often loads the entire app environment just to run a simple command. It can be configured otherwise, but that’s rare in practice. Meanwhile, Gemfile is for dependencies, gemspecs aren’t meant for apps, and trying to push features like “install by default” for bundle exec inside Bundler felt nearly impossible (even though an environment variable for it already exists).

At the same time, I’ve always admired how well RubyGems and Bundler work internally — they’re elegant, consistent, and incredibly stable. But I realized I wanted to recompose them: use the same powerful internals in a different way, with a simpler and more modern interface. Looking deeper inside, I had to accept a truth — changing them directly would be nearly impossible. However, building a custom “frontend” wrapper that instructs these tools to behave differently was possible.

That’s how the idea for Ruby Butler was born. From that point, it began taking shape as a simple command-line tool — the rb command in your terminal, ready to serve.

Ruby Butler, nice to meet you!
Ruby Butler, nice to meet you! Curious what's that? Keep reading to find out.

The Discovery

Ruby Butler started as a simple Bash script, just mangling a few environment variables needed by Ruby, RubyGems, and Bundler. To control the Ruby environment, all you really need is to put the desired Ruby’s bin directory in your PATH and set two variables — GEM_HOME (where to install gems) and GEM_PATH ( GEM_HOME plus Ruby’s default gems). That’s it. That’s the whole “rocket science” behind tools like chruby .

Every Ruby environment tool I’d used — chruby , rbenv , and others — was tied tightly to the shell (mostly Bash and friends). That worked, but it often caused friction. You had to ensure your current shell instance matched the Ruby setup you wanted. Tools like chruby-auto helped a bit, but switching Rubies in the same shell session remained tricky. And on systems without a preinstalled Ruby — my preferred setup today — simply running ruby failed until everything was manually configured. I wanted something truly unified — cross-shell, cross-platform, and automatic. I wanted something that would automatically prepare the exact Ruby environment I’m looking for.

That’s where today’s Ruby Butler architecture was born.

The Architecture

Ruby Butler builds on a simple idea: Rubies are already on your system — you just need the right one at the right time.
By default, it scans ~/.rubies (for compatibility with tools like ruby-install ) and detects which versions are available. From there, it can automatically select the best Ruby for the current context — or let you decide manually.

Ruby Butler presenting detected runtimes.
Ruby Butler presenting detected runtimes.

But the real goal isn’t just choosing a Ruby version. It’s to prepare a clean, ephemeral environment where Ruby programs can run without leaving traces behind. This environment exists only for the duration of the command — no shell pollution, no manual resets. For quick scripts, the latest Ruby works just fine. If a .ruby-version file or a Gemfile with a ruby section is detected, Butler automatically switches to the matching version. And when you need something specific, you can ask explicitly: rb -r 3.4.2 .

Everything runs through the exec command — rb exec , or simply rb x . It behaves as expected, with no setup needed unless the environment can’t be inferred. Ruby Butler is also smart enough to use RubyGems binstubs , eliminating the need for bundle exec or bin/ prefixes. Working in a Rails app? Just run rb x rails s . I’ve been experimenting with a shorter alias, rbx , but haven’t fully settled on it yet.

Ruby Butler executing commands in prepared environment!
Ruby Butler executing commands in prepared environment!

Butler is also clever enough to compose your Bundler environment automatically . It runs bundle check and, if necessary, bundle install — all within its own isolated .rb folder. In practice, this means you can literally clone a project and run rb x rspec , and everything just works (unless, of course, other setup like a database is required).

Ruby Butler executing commands in bundler prepared environment (with some debug info)!
Ruby Butler executing commands in bundler prepared environment (with some debug info)!

The Innovation

And finally, Ruby Butler lets you describe how your project should behave to elevate the overall developer experience. You don’t even have to start from scratch — just run rb init , and Butler will generate a basic rbproject.toml file for you. You can also use alternative names like gem.toml , or the KDL variants rbproject.kdl and gem.kdl if you prefer a more structured format.

Ruby Butler rbproject.toml example
Ruby Butler rbproject.toml example
Ruby Butler rbproject.kdl example
Ruby Butler rbproject.kdl example

This marks the beginning of a new, unified project format for Ruby — something that could, in time, replace both gemspec and Gemfile. For now, it simply decorates your project with a name and a set of scripts — commands (optionally with descriptions) that appear as a clean, discoverable list in the CLI via rb run .

And the best part? The Butler prepares the full environment for you automatically , just as it does with the exec command. When you run rb run test or rb run lint , Butler silently sets up the correct Ruby, Bundler, and gem context behind the scenes, ensuring everything “just works.” No shell hacks, no manual setup, no guesswork — it’s all part of Butler’s service.

This format is still being explored in collaboration with other modern tools (like rv ) and may grow into a new long-term standard. The goal is simple but ambitious: to make Ruby development feel effortless, consistent, and joyful again .

Ruby Butler run list example
Ruby Butler run list example

The Future

Feel free to try out Ruby Butler ( github.com/RubyElders/ruby-butler ) — maybe it will serve you well too. Personally, I’ve been using it as my main Ruby development tool for the past couple of months across two systems — Windows and Linux — with great success. It already runs an extensive integration test suite on both platforms: Pester for PowerShell on Windows and Shellspec for Bash on Linux .

Ruby Butler being tested
Ruby Butler being tested

It doesn’t yet solve every problem I’ve encountered over the years with RubyGems and Bundler, but in my opinion, it’s already a huge improvement in everyday developer experience. I’d love to hear your ideas and feedback — on GitHub or anywhere else the Ruby conversation lives.

Since Ruby Butler operates in complete isolation, it won’t interfere with your existing setup or tools . Just make sure to add the .rb folder and rbproject.toml (or any of its variants) to your global .gitignore for now.

And one last note: please don’t use it in production yet. Ruby Butler is still evolving rapidly , and both its API and CLI flags are subject to change. But if you enjoy experimenting and helping Ruby move forward again — that’s exactly the kind of energy Butler was made to serve.

Ruby Butler can explain what's happening when asked via -v or -vv to be verbose.
Ruby Butler can explain what's happening when asked via -v or -vv to be verbose.

And have I mentioned it’s implemented in Rust ? Curious why? I’ll explain that story — and what it enables — in one of the next posts.
And as for whether I’m still happy reusing Bundler and RubyGems after recent events… let’s just say that deserves its own story — coming soon. Time will tell how those tools evolve from here, and I’ll be watching closely.


The Epilogue

During Ruby Butler’s early development, I shared early ideas and prototypes with several friends and respected members of the Ruby ecosystem — even before it was public on GitHub. To be transparent, I was also looking for potential funding or interest to see whether this kind of work could grow into something more official. I reached out to Ruby Central , offering them the opportunity to develop Ruby Butler under their umbrella as an experimental incubator — a safe space to test and validate new ideas that could later be ported or integrated into RubyGems and Bundler once proven useful.

The response was polite but simple: there’s no interest or funding for this kind of development.

More recently, I’ve also submitted a grant proposal to the Ruby Association — also rejected.

CISA: High-severity Windows SMB flaw now exploited in attacks

Bleeping Computer
www.bleepingcomputer.com
2025-10-20 18:18:16
CISA says threat actors are now actively exploiting a high-severity Windows SMB privilege escalation vulnerability that can let them gain SYSTEM privileges on unpatched systems. [...]...
Original Article

Windows

CISA says threat actors are now actively exploiting a high-severity Windows SMB privilege escalation vulnerability that can let them gain SYSTEM privileges on unpatched systems.

Tracked as CVE-2025-33073 , this security flaw impacts all Windows Server and Windows 10 versions, as well as Windows 11 systems up to Windows 11 24H2.

Microsoft patched the vulnerability during the June 2025 Patch Tuesday, when it also revealed that it stems from an improper access control weakness that enables authorized attackers to elevate privileges over a network.

"The attacker could convince a victim to connect to an attacker controlled malicious application (for example, SMB) server. Upon connecting, the malicious server could compromise the protocol," the company explained.

"To exploit this vulnerability, an attacker could execute a specially crafted malicious script to coerce the victim machine to connect back to the attack system using SMB and authenticate. This could result in elevation of privilege."

At the time, a security advisory indicated that information about the bug was already publicly accessible before the security updates were released, however the company has yet to publicly acknowledge CISA's claims that CVE-2025-33073 is under active exploitation.

Microsoft has attributed the discovery of this flaw to multiple security researchers, including CrowdStrike's Keisuke Hirata, Synacktiv's Wilfried Bécard, SySS GmbH's Stefan Walter, Google Project Zero's James Forshaw, and RedTeam Pentesting GmbH.

CISA has yet to share more information regarding ongoing CVE-2025-33073 attacks, but it has added the flaw to its Known Exploited Vulnerabilities Catalog , giving Federal Civilian Executive Branch (FCEB) agencies three weeks to secure their systems by November 10, as mandated by Binding Operational Directive (BOD) 22-01.

While BOD 22-01 only targets federal agencies, the U.S. cybersecurity agency encourages all organizations, including those in the private sector, to ensure that this actively exploited security bug is patched as soon as possible.

"These types of vulnerabilities are frequent attack vectors for malicious cyber actors and pose significant risks to the federal enterprise," CISA cautioned on Monday.

How London Became a Global Hub for Phone Theft

Daring Fireball
www.nytimes.com
2025-10-20 18:09:02
Lizzie Dearden and Amelia Nierenberg, reporting for The New York Times (gift link): For years, London’s police assumed most of the phone thefts were the work of small-time thieves looking to make some quick cash. But last December, they got an intriguing lead from a woman who had used “Find My i...
Original Article

Please enable JS and disable any ad blocker

Apple and NBCUniversal Introduce the Apple TV and Peacock Bundle

Daring Fireball
www.apple.com
2025-10-20 18:01:49
Apple Newsroom: Apple and NBCUniversal today announced the launch of the Apple TV and Peacock Bundle, available beginning October 20. The first-of-its-kind bundle offers the services’ complementary array of award-winning originals, marquee live events and sports, beloved franchises, and blockbus...
Original Article
opens in new window

Apple and NBCUniversal introduce the Apple TV and Peacock Bundle

One subscription, available October 20, unlocks both services’ complementary mix of award-winning originals, marquee live events and sports, beloved franchises, and blockbuster movies — all for one great price

Apple TV and Peacock will each provide an extensive sampling of each other’s shows in app

The Apple TV and Peacock Bundle launches October 20, offering an array of award-winning originals, live and marquee events, beloved franchises, and more.
Apple and NBCUniversal today announced the launch of the Apple TV and Peacock Bundle , available beginning October 20. The first-of-its-kind bundle offers the services’ complementary array of award-winning originals, marquee live events and sports, beloved franchises, and blockbuster movies, including Ted Lasso , Severance , The Paper , The Traitors , How to Train Your Dragon , the NBA (tipping off October 21 on Peacock), F1 The Movie (coming later this year), and much more, all through one convenient monthly subscription.
Customers in the U.S. can save over 30 percent by subscribing to the Apple TV and Peacock Premium bundle for $14.99 per month, or Apple TV and Peacock Premium Plus for $19.99 per month, through either app or website. 1 Apple One subscribers on the Family and Premier plans can subscribe to Peacock Premium Plus and receive a 35 percent discount — the first benefit of its kind for Apple’s all-in-one subscription bundle.
“There is no more perfect combination of entertainment than the Apple TV and Peacock Bundle, with our complementary content portfolios featuring the best shows, movies, sports, and live events,” said Matt Strauss, Chairman, NBCUniversal Media Group. “With this first-of-its-kind initiative with Apple, we’re redefining what the customer journey should be for a streaming bundle with rich sampling and a smooth sign-up process, and we’re broadening the reach of Peacock to even more audiences.”
“We’re thrilled to introduce this all-inclusive bundle of incredible programming and to bring Apple TV’s beloved shows and movies to more viewers in more places,” said Oliver Schusser, Apple’s vice president of Apple TV, Apple Music, Sports, and Beats. “From acclaimed originals to can’t-miss live sports, the Apple TV and Peacock Bundle unites the best of entertainment in a simple and seamless experience.”
As part of this unique cross-service collaboration, Apple TV app users and Peacock subscribers will be able to sample an extensive and curated selection of the partner services’ shows in each respective app. At launch, Peacock subscribers can enjoy up to three episodes of Stick , Slow Horses , Silo , The Buccaneers , Foundation , Palm Royale, and Prehistoric Planet from Apple TV for free, while Apple TV app users will be able to watch up to three episodes of Law & Order , Bel-Air , Twisted Metal , Love Island Games , Happy’s Place , The Hunting Party , and Real Housewives of Miami from Peacock.
  1. Bundle cannot be purchased on third-party device partners. Management of any existing subscriptions may be required to avoid multiple charges.
  2. The Apple TV subscription (formerly Apple TV+) does not include third-party branded subscription services or content available for rental or purchase in the Apple TV app.
Stay up to date with the latest articles from Apple Newsroom.

Counterpoint Research Claims iPhone 17 Sales Are Up Year-Over-Year in the U.S. and China

Daring Fireball
www.cnbc.com
2025-10-20 17:52:07
CNBC: Apple shares rose 4% on Monday as a new report showed iPhone 17 sales off to a strong start in the U.S. and China. The iPhone 17 series, which dropped in September, has outsold the iPhone 16 series by 14% in the U.S. and China within its first 10 days of availability, according to data fr...
Original Article

A next generation iPhone 17 is held during an Apple special event at Apple headquarters on Sept. 9, 2025 in Cupertino, California.

Justin Sullivan | Getty Images

Apple shares rose 4% on Monday as a new report showed iPhone 17 sales off to a strong start in the U.S. and China.

The iPhone 17 series, which dropped in September, has outsold the iPhone 16 series by 14% in the U.S. and China within its first 10 days of availability, according to data from Counterpoint research.

"The base model iPhone 17 is very compelling to consumers, offering great value for money," Counterpoint senior analyst Mengmeng Zhang said in the report. "A better chip, improved display, higher base storage, selfie camera upgrade – all for the same price as last year's iPhone 16. Buying this device is a no brainer, especially when you throw channel discounts and coupons into the mix."

The company is positioned to rally with demand for the latest iPhone generation exceeding expectations, according to Loop Capital.

The investment bank upgraded Apple from hold to buy, raising its price target to $315 per share from $226.

"While [Wall] Street is baking in some degree of outperformance from AAPL's iPhone 17 family of products, we believe there remains material upside to Street expectations through CY2027," Loop Capital's Ananda Baruah said in a note to clients on Monday.

Last week, Apple CEO Tim Cook visited employees and government officials in China, and appeared at a launch for Apple's new iPhone Air, which reportedly sold out in minutes after going on sale, according to the South China Morning Post.

In a note on Sunday, Evercore analyst Amit Daryanani said that the iPhone Air's apparent popularity in China may boost Apple shares. They have the equivalent of a buy rating on the stock.

The Evercore analysts wrote in the note that Apple is poised to report strong numbers during next week's earnings report.

"We think AAPL is well positioned to report upside to Sep-qtr expectations later this month," Daryanani wrote.

Apple shares have had an up-and-down year so far. The stock is up about 5% in 2025, significantly lagging big Magnificent 7 winners Nvidia , Meta , Microsoft and Alphabet , but has roared 24% over the last 3 months.

Stock Chart Icon Stock chart icon

hide content

Apple 5-day stock chart.

News Outlets Won't Describe Trump's AI Video For What It Is: The President Pooping on America

403 Media
www.404media.co
2025-10-20 17:45:01
Major outlets said the President dumped "brown liquid" that "appeared to be feces" in an AI-generated video. They refused to call a spade a spade, or poop, poop....
Original Article

On Saturday, millions of people across the U.S. attended “No Kings” protests —a slogan born in response to President Donald Trump’s self-aggrandizing social media posts where he’s called himself a king , including with AI-generated images of himself in a crown , and his continuous stretching of executive power. While Americans were out in the street, the president was posting .

In an AI-generated video originally posted on X by a genAI shitposter, Trump, wearing a crown, takes off in a fighter jet to the song “Danger Zone” like he’s in Top Gun . Flying over protestors in American cities, Pilot King Trump bombs people with gallons of chunky brown liquid. It’s poop, ok? It’s shit. It’s diarrhea, and in reposting it, it’s clear enough to me that Trump is fantasizing about doing a carpet-bomb dookie on the people he put his hand on a bible and swore to serve nine months ago. The first protestor seen in the video is a real person, Harry Sisson, a liberal social media influencer.

0:00

/ 0:19

The video Trump reposted to Truth Social

But this was not clear, it seems, to many other journalists. Most national news outlets seem scared to call it how they see, and how everyone sees it: as Trump dropping turd bombs on America, instead opting for euphemisms. Some of the best have included:

  • The Hill called it “brown liquid” and “what looked like feces”
  • The Guardian deemed it “brown sludge” and “bursts of brown matter”
  • More “brown liquid” from the New York Times
  • NBC News got close with “what appeared to be feces”
  • A CNN contributor’s “analysis” said Trump was “appearing to dump raw sewage”
  • Axios’ helpful context: “suspect brown substances falling from the sky”
  • ABC News opted to cut the video before the AI poop even started falling

The New York Post , never one to waste a prime alliteration opportunity, didn’t disappoint: “Trump’s fighter jet was shown dropping masses of manure.”

I can understand some of these venerated news establishments might be skittish about using a word like “poop” in their headlines, and I can also concede that I haven’t had an editor tell me I can’t use a bad word in a headline in a long, long time. I can imagine the logic: we can’t “prove” it’s meant to be shit, so we can’t call it shit. But there’s nothing in these outlets’ style guides that has kept them from saying “poop” in the headline in the past: “Women Poop,” the New York Times once proclaimed . Axios writes extensively and frequently about dog poop . CNN’s analysis extends to poop often.

Along with the above concessions, I can also promise I don’t feel that passionately about getting poop on anyone else’s homepages. But we are in an era where the highest office in the country is disseminating imagery that isn’t just fake and stupid, but actively hostile to the people living in this country. When I first saw someone talking about the Trump Poop Bomber video—on Reels, of all places—I thought it must be someone doing satire about what they imagined Trump would post about the protests. I had to search for it to find out if he really did, and what I found was the above: trusted sources of truth and information too scared to call fake poop fake poop. It’s not about poop, it’s about being able to accurately describe what we see, an essential skill when everything online is increasingly built to enrage, trick, or mislead us. AI continues to be the aesthetic of fascism : fast, easy, ugly. When we lose the ability to say what it is, we’re losing a lot more than the chance to pun on poop.

Add to this the fact that no one in Trump’s circle will say what we can all plainly see, either: that the president hates the people. “The president uses social media to make the point. You can argue he’s probably the most effective person who’s ever used social media for that,” Speaker of the House Mike Johnson said at a news conference this morning. “He is using satire to make a point. He is not calling for the murder of his political opponents.” Johnson did not say what that point was, however.

About the author

Sam Cole is writing from the far reaches of the internet, about sexuality, the adult industry, online culture, and AI. She's the author of How Sex Changed the Internet and the Internet Changed Sex.

Samantha Cole

David Hockney’s Xerox Prints

Daring Fireball
www.myartbroker.com
2025-10-20 17:40:46
Erin-Atlanta Argun: While Hockney is perhaps best known for his larger-than-life swimming pool paintings, bold coloured acrylics are certainly not his only forte. Contrary to the old saying, Hockney is a jack of all trades and a master of all he has worked with: from paint to iPads. In the late ...
Original Article

david hockney xerox prints

Red Flowers And Green Leaves, Separate, May 1988 by David Hockney Red Flowers And Green Leaves, Separate, May 1988 © David Hockney 1988

Celine Thompson

Celine Thompson

Registrar

[email protected]

Interested in buying or selling
David Hockney?

Always an artist to embrace new technologies, David Hockney’s Xerox prints showcase his commitment to reflecting technological progress in his artwork. Innovative, versatile and curious, acclaimed British artist Hockney has never been one to shy away from the challenge of a new medium.

While Hockney is perhaps best known for his larger-than-life swimming pool paintings, bold coloured acrylics are certainly not his only forte. Contrary to the old saying, Hockney is a jack of all trades and a master of all he has worked with: from paint to iPads . In the late 1980s, his fascination with technology and new ways of creating art led him to the Xerox photocopying machine. The copy machine offered Hockney his speediest technique of printing yet, allowing the artist to build layers, textures and colours like never before. However, it was not only a swift and spontaneous way for Hockney to produce his prints. The tech-savvy artist said that he had a more “philosophical” interest in the Xerox machine as a new iteration of the camera.

“I realised it was a printing machine and a camera of a new kind”

What is a Xerox Print?

Made by Xerox in the late-1950s, their pioneering xeorographic machine was the first fully automotive photocopier for businesses. The machines revolutionised offices around the world and were used as an efficient and rapid way of making paper copies. Xerox copiers became the focus of artists like Keith Haring and Jean-Michel Basquiat in the late 1980s, who used the medium to cement the ties between art and commercial culture.

David Hockney exploited the technology by building his prints with a series of layering, tweaking his compositions and colour palettes with every scan on the Xerox machine to create a cohesive print. Each colour element of the print was first drawn onto a separate piece of paper, before being loaded back onto the copybed. As we see in his Mulholland Drive digital print of 1986, Hockney would repeat this process of superimposing and scanning different visual elements to create prints using his own drawings and borrowed material. Here, two large green mounds with Hockney’s signature mark-making are layered on top of a map of the Californian road it is named after. Between the two hills and the pylons that connect them, the map reads ‘Hollywood’, alluding to the sites that drivers would see when cruising Mulholland Drive. Like the more traditional printing techniques of lithography and screenprinting, the Xerox collage allowed Hockney to layer these interesting motifs to create prints with a greater spatial depth and meaning.

When did Hockney start creating these and why?

In February 1986, Hockney began playing around with his friend’s Xerox copy machine, and instantly became fascinated by its ability to marry the photographic and painterly arts. As an artist who had been creating prints since his student years at Bradford Art School, Hockney had been continuously searching for a printing method that was more akin to drawing. In the Xerox copier, he found an exciting new way to overlap drawn elements with abstract shapes, assembling prints rapidly and with a noted ease.

One thing Hockney definitely enjoyed about Xerox printing was the spontaneity the technique afforded. The printing studio demanded that Hockney come prepared with his composition, and left little room for mistake and innovation while at work on the print. With the Xerox copier as his new sidekick however, Hockney was able to add unplanned elements to his prints and therefore create works which perfectly suited his eccentric style. His mixed-media Self Portrait utilised the Xerox machine to combine elements in watercolour, pencil and ink. The Xerox machine portrait is collaged together with the top half of his face on one piece of paper, and the bottom on another, pushing the printing method to extend over one plane as is usually expected of a print. Ironically, it was this digital mode of printing that brought Hockney closer to achieving a drawn image more than ever before.

Instant Valuation

Self Portrait by David Hockney Self Portrait © David Hockney 1986

“The closest I have ever come in printing to what it’s like to paint”

Hockney and Technology

Hockney has always been an artist to embrace new developments in technology , and has never allowed himself to become entrenched in one media or style. Reflecting on the unavoidable ties between technology and art in his Retrospective, Hockney wrote that “New technologies have started revolutions that need not frighten us.” For Hockney, technology was never a threat to creativity because it could be “humanised by artists”. In fact, Hockney has continuously experimented with new kinds of everyday technologies, like the iPhone and iPad, and transformed them into devices of art creation for the modern age.

Hockney's fusion of technology and creativity is beautifully showcased in the exhibition David Hockney: Bigger & Closer (not smaller & further away) . Initially held at Lightroom, King's Cross, London, the exhibition is now on display at Aviva Studios, Manchester, from 10 December 2024 to 25 January 2025. These immersive shows celebrate Hockney's innovative use of digital tools, taking viewers on a journey through his artistic process - from his pioneering Xerox experiments to his modern iPad creations.

Which print series did Hockney produce with the Xerox copier?

In 1986 Hockney produced Home Made Prints , a series created entirely with the Xerox copier. As the title of the series would suggest, the Xerox machine allowed Hockney to create his prints in the comfort of his own home, rather than the often constrictive environment of the professional printing studio. Many of the prints in this series, like Living Room and Terrace , capture the interiors of Hockney’s house on the outskirts of LA. This invigorated focus on his own home in Home Made Prints reveals the more relaxed approach to printmaking that the Xerox copier afforded Hockney.

In this particular print from the series, Hockney envisions the view out to his terrace across two pieces of printed paper. The diagonal lines that delineate the flooring of both living room and terrace converge at the meeting point of the two paper sheets, and create an abstract and dizzying play with perspective. These wide painterly strokes are contrasted with the more intricate scanned elements, like the sculptural motif that appears on the far-left wall of the print, which Hockney scaled down to appear like a drawing above the abstracted fireplace. Home Made Prints is testament not only to Hockney’s fascination with new technologies, but also his ability to manipulate these technologies and transform them into mediums of art that best speak to the viewers of their age.

Are Hockney's Xerox prints popular?

Hockney’s xerox and other print works have garnered continued demand due to both his artistic innovation and the accessible nature of printmaking. For example, his Flowers Apple and Pear on Table (1986), a signed digital print, has seen strong and steady growth in its valuation. Since its auction debut in April 2012, this print has been sold six times, notably for £22,516 in April, 2023. Limited to an edition of 59, this piece exemplifies Hockney’s enduring appeal in the print market, consistently drawing collector interest.

You can read more about David Hockney’s fascination with technology here.

Want to learn more about David Hockney's most iconic and investable prints? Read our latest David Hockney Market Watch here.

Erin-Atlanta Argun

Erin is MyArtBroker's Excuse Me Girl, who asks the public what work of art they'd love to own and why. When she's not interviewing people with her paintbrush mic, she's the Commissioning Editor of Edition Magazine, devising content for our audience of art lovers and collectors.

Erin joined MyArtBroker after completing her MA in Fashion History at the Courtauld Institute of Art in 2022, and graduated from the University of Cambridge in 2021 with a BA in History of Art. Having previously worked with galleries and art publications, Erin comes to us with a passion for storytelling and making the art world and market accessible, and fun.

Opt Out October: Daily Tips to Protect Your Privacy and Security

Electronic Frontier Foundation
www.eff.org
2025-10-20 17:35:35
Trying to take control of your online privacy can feel like a full-time job. But if you break it up into small tasks and take on one project at a time it makes the process of protecting your privacy much easier. This month we’re going to do just that. For the month of October, we’ll update this post...
Original Article

Trying to take control of your online privacy can feel like a full-time job. But if you break it up into small tasks and take on one project at a time it makes the process of protecting your privacy much easier. This month we’re going to do just that. For the month of October, we’ll update this post with new tips every weekday that show various ways you can opt yourself out of the ways tech giants surveil you.

Online privacy isn’t dead . But the tech giants make it a pain in the butt to achieve. With these incremental tweaks to the services we use, we can throw sand in the gears of the surveillance machine and opt out of the ways tech companies attempt to optimize us into advertisement and content viewing machines. We’re also pushing companies to make more privacy-protective defaults the norm, but until that happens, the onus is on all of us to dig into the settings.

All month long we’ll share tips, including some with the help from our friends at Consumer Reports’ Security Planner tool. Use the Table of Contents here to jump straight to any tip.

Table of Contents

Tip 1: Establish Good Digital Hygiene

Before we can get into the privacy weeds, we need to first establish strong basics. Namely, two security fundamentals: using strong passwords ( a password manager helps simplify this) and two-factor authentication for your online accounts. Together, they can significantly improve your online privacy by making it much harder for your data to fall into the hands of a stranger.

Using unique passwords for every web login means that if your account information ends up in a data breach , it won’t give bad actors an easy way to unlock your other accounts. Since it’s impossible for all of us to remember a unique password for every login we have, most people will want to use a password manager, which generates and stores those passwords for you.

Two-factor authentication is the second lock on those same accounts. In order to login to, say, Facebook for the first time on a particular computer, you’ll need to provide a password and a “second factor,” usually an always-changing numeric code generated in an app or sent to you on another device. This makes it much harder for someone else to get into your account because it’s less likely they’ll have both a password and the temporary code.

This can be a little overwhelming to get started if you’re new to online privacy! Aside from our guides on Surveillance Self-Defense , we recommend taking a look at Consumer Reports’ Security Planner for ways to help you get started setting up your first password manager and turning on two-factor authentication .

Tip 2: Learn What a Data Broker Knows About You

Hundreds of data brokers you’ve never heard of are harvesting and selling your personal information. This can include your address, online activity, financial transactions, relationships, and even your location history. Once sold, your data can be abused by scammers , advertisers , predatory companies , and even law enforcement agencies .

Data brokers build detailed profiles of our lives but try to keep their own practices hidden . Fortunately, several state privacy laws give you the right to see what information these companies have collected about you. You can exercise this right by submitting a data access request to a data broker. Even if you live in a state without privacy legislation, some data brokers will still respond to your request.

There are hundreds of known data brokers , but here are a few major ones to start with:

Data brokers have been caught ignoring privacy laws, so there’s a chance you won’t get a response. If you do, you’ll learn what information the data broker has collected about you and the categories of third parties they’ve sold it to. If the results motivate you to take more privacy action, encourage your friends and family to do the same. Don’t let data brokers keep their spying a secret.

You can also ask data brokers to delete your data, with or without an access request. We’ll get to that later this month and explain how to do this with people-search sites, a category of data brokers.

Tip 3: Disable Ad Tracking on iPhone and Android

Picture this: you’re doomscrolling and spot a t-shirt you love. Later, you mention it to a friend and suddenly see an ad for that exact shirt in another app. The natural question pops into your head: “ I s my phone listening to me? ” Take a sigh of relief because, no, your phone is not listening to you . But advertisers are using shady tactics to profile your interests. Here’s an easy way to fight back: disable the ad identifier on your phone to make it harder for advertisers and data brokers to track you.

Disable Ad Tracking on iOS and iPadOS:

  • Open Settings > Privacy & Security > Tracking , and turn off “Allow Apps to Request to Track.”
  • Open Settings > Privacy & Security > Apple Advertising , and disable “Personalized Ads” to also stop some of Apple’s internal tracking for apps like the App Store.
  • If you use Safari, go to Settings > Apps > Safari > Advanced and disable “Privacy Preserving Ad Measurement.”

Disable Ad Tracking on Android:

  • Open Settings > Security & privacy > Privacy controls > Ads , and tap “Delete advertising ID.”
  • While you’re at it, run through Google’s “Privacy Checkup” to review what info other Google services—like YouTube or your location—may be sharing with advertisers and data brokers.

These quick settings changes can help keep bad actors from spying on you. For a deeper dive on securing your iPhone or Android device , be sure to check out our full Surveillance Self-Defense guides.

Tip 4: Declutter Your Apps

Decluttering is all the rage for optimizers and organizers alike, but did you know a cleansing sweep through your apps can also help your privacy? Apps collect a lot of data, often in the background when you are not using them. This can be a prime way companies harvest your information, and then repackage and sell it to other companies you've never heard of. Having a lot of apps increases the peepholes that companies can gain into your personal life.

Do you need three airline apps when you're not even traveling? Or the app for that hotel chain you stayed in once? It's best to delete that app and cut off their access to your information. In an ideal world, app makers would not process any of your data unless strictly necessary to give you what you asked for. Until then, to do an app audit:

  • Look through the apps you have and identify ones you rarely open or barely use.
  • Long-press on apps that you don't use anymore and delete or uninstall them when a menu pops up.
  • Even on apps you keep, take a swing through the location, microphone, or camera permissions for each of them. For iOS devices you can follow these instructions to find that menu. For Android, check out this instructions page.

If you delete an app and later find you need it, you can always redownload it. Try giving some apps the boot today to gain some memory space and some peace of mind.

Tip 5: Disable Behavioral Ads on Amazon

Happy Amazon Prime Day! Let’s celebrate by taking back a piece of our privacy.

Amazon collects an astounding amount of information about your shopping habits. While the only way to truly free yourself from the company’s all-seeing eye is to never shop there, there is something you can do to disrupt some of that data use: tell Amazon to stop using your data to market more things to you (these settings are for US users and may not be available in all countries).

  • Log into your Amazon account, then click “Account & Lists” under your name.
  • Scroll down to the “Communication and Content” section and click “Advertising preferences” (or just click this link to head directly there).
  • Click the option next to “Do not show me interest-based ads provided by Amazon.”
  • You may want to also delete the data Amazon already collected, so click the “Delete ad data” button.

This setting will turn off the personalized ads based on what Amazon infers about you, though you will likely still see recommendations based on your past purchases at Amazon.

Of course, Amazon sells a lot of other products. If you own an Alexa, now’s a good time to review the few remaining privacy options available to you after the company took away the ability to disable voice recordings . Kindle users might want to turn off some of the data usage tracking . And if you own a Ring camera, consider enabling end-to-end encryption to ensure you’re in control of the recording, not the company.

Tip 6: Install Privacy Badger to Block Online Trackers

Every time you browse the web, you’re being tracked. Most websites contain invisible tracking code that lets companies collect and profit from your data. That data can end up in the hands of advertisers, data brokers, scammers, and even government agencies. Privacy Badger , EFF’s free browser extension, can help you fight back.

Privacy Badger automatically blocks hidden trackers to stop companies from spying on you online. It also tells websites not to share or sell your data by sending the “ Global Privacy Control” signal , which is legally binding under some state privacy laws. Privacy Badger has evolved over the past decade to fight various methods of online tracking. Whether you want to protect your sensitive information from data brokers or just don’t want Big Tech monetizing your data, Privacy Badger has your back.

Visit privacybadger.org to install Privacy Badger .

It’s available on Chrome, Firefox, Edge, and Opera for desktop devices and Firefox and Edge for Android devices. Once installed, all of Privacy Badger’s features work automatically. There’s no setup required! If blocking harmful trackers ends up breaking something on a website, you can easily turn off Privacy Badger for that site while maintaining privacy protections everywhere else.

When you install Privacy Badger, you’re not just protecting yourself—you’re joining EFF and millions of other users in the fight against online surveillance.

Tip 7: Review Location Tracking Settings

Data brokers don’t just collect information on your purchases and browsing history. Mobile apps that have the location permission turned on will deliver your coordinates to third parties in exchange for insights or monetary kickbacks. Even when they don’t deliver that data directly to data brokers, if the app serves ad space, your location will be delivered in real-time bid requests not only to those wishing to place an ad, but to all participants in the ad auction—even if they lose the bid. Location data brokers take part in these auctions just to harvest location data en masse, without any intention of buying ad space.

Luckily, you can change a few settings to protect yourself against this hoovering of your whereabouts. You can use iOS or Android tools to audit an app’s permissions, providing clarity on who is providing what info to whom. You can then go to the apps that don’t need your location data and disable their access to that data (you can always change your mind later if it turns out location access was useful). You can also disable real-time location tracking by putting your phone into airplane mode, while still being able to navigate using offline maps. And by disabling mobile advertising identifiers ( see tip three ), you break the chain that links your location from one moment to the next.

Finally, for particularly sensitive situations you may want to bring an entirely separate, single-purpose device which you’ve kept clean of unneeded apps and locked down settings on. Similar in concept to a burner phone, even if this single-purpose device does manage to gather data on you, it can only tell a partial story about you—all the other data linking you to your normal activities will be kept separate.

For details on how you can follow these tips and more on your own devices, check out our more extensive post on the topic .

Tip 8: Limit the Data Your Gaming Console Collects About You

Oh, the beauty of gaming consoles—just plug in and play! Well... after you speed-run through a bunch of terms and conditions, internet setup, and privacy settings. If you rushed through those startup screens, don’t worry! It’s not too late to limit the data your console is collecting about you. Because yes, modern consoles do collect a lot about your gaming habits.

Start with the basics: make sure you have two-factor authentication turned on for your accounts. PlayStation , Xbox , and Nintendo all have guides on their sites. Between payment details and other personal info tied to these accounts, 2FA is an easy first line of defense for your data.

Then, it’s time to check the privacy controls on your console:

  • PlayStation 5 : Go to Settings > Users and Accounts > Privacy to adjust what you share with both strangers and friends. To limit the data your PS5 collects about you, go to Settings > Users and Accounts > Privacy , where you can adjust settings under Data You Provide and Personalization .
  • Xbox Series X|S : Press the Xbox button > Profile & System > Settings > Account > Privacy & online safety > Xbox Privacy to fine-tune your sharing . To manage data collection , head to Profile & System > Settings > Account > Privacy & online safety > Data collection .
  • Nintendo Switch : The Switch doesn’t share as much data by default, but you still have options. To control who sees your play activity, go to System Settings > Users > [ your profile ] > Play Activity Settings . To opt out of sharing eShop data, open the eShop, select your profile (top right), then go to Google Analytics Preferences > Do Not Share .

Plug and play, right? Almost. These quick checks can help keep your gaming sessions fun—and more private.

Tip 9: Hide Your Start and End Points on Strava

Sharing your personal fitness goals, whether it be extended distances, accurate calorie counts, or GPS paths—sounds like a fun, competitive feature offered by today's digital fitness trackers. If you enjoy tracking those activities, you've probably heard of Strava . While it's excellent for motivation and connecting with fellow athletes, Strava's default settings can reveal sensitive information about where you live, work, or exercise, creating serious security and privacy risks. Fortunately, Strava gives you control over how much of your activity map is visible to others, allowing you to stay active in your community while protecting your personal safety.

We've covered how Strava data exposed classified military bases in 2018 when service members used fitness trackers. If fitness data can compromise national security, what's it revealing about you?

Here's how to hide your start and end points:

  • On the website : Hover over your profile picture > Settings > Privacy Controls > Map Visibility .
  • On mobile : Open Settings > Privacy Controls > Map Visibility .
  • You can then choose from three options: hide portions near a specific address, hide start/end of all activities, or hide entire maps

You can also adjust individual activities:

  • Open the activity you want to edit.
  • Select the three-dot menu icon.
  • Choose "Edit Map Visibility."
  • Use sliders to customize what's hidden or enable "Hide the Entire Map."

Great job taking control of your location privacy! Remember that these settings only apply to Strava, so if you share activities to other platforms, you'll need to adjust those privacy settings separately. While you're at it, consider reviewing your overall activity visibility settings to ensure you're only sharing what you want with the people you choose.

Tip 10: Find and Delete An Account You No Longer Use

Millions of online accounts are compromised each year. The more accounts you have, the more at risk you are of having your personal data illegally accessed and published online. Even if you don’t suffer a data breach , there’s also the possibility that someone could find one of your abandoned social media accounts containing information you shared publicly on purpose in the past, but don’t necessarily want floating around anymore. And companies may still be profiting off details of your personal life, even though you’re not getting any benefit from their service.

So, now’s a good time to find an old account to delete. There may be one you can already think of, but if you’re stuck, you can look through your password manager, look through logins saved on your web browser, or search your email inbox for phrases like “new account,” “password,” “welcome to,” or “confirm your email.” Or, enter your email address on the website HaveIBeenPwned to get a list of sites where your personal information has been compromised to see if any of them are accounts you no longer use.

Once you’ve decided on an account, you’ll need to find the steps to delete it. Simply deleting an app off of your phone or computer does not delete your account. Often you can log in and look in the account settings, or find instructions in the help menu, the FAQ page, or the pop-up customer service chat. If that fails, use a search engine to see if anybody else has written up the steps to deleting your specific type of account.

For more information, check out the Delete Unused Accounts tip on Security Planner.

Tip 11: Search for Yourself

Today's tip may sound a little existential, but we're not suggesting a deep spiritual journey. Just a trip to your nearest search engine. Pop your name into search engines such as Google or DuckDuckGo, or even AI tools such as ChatGPT, to see what you find. This is one of the simplest things you can do to raise your own awareness of your digital reputation. It can be the first thing prospective employers (or future first dates) do when trying to figure out who you are. From a privacy perspective, doing it yourself can also shed light on how your information is presented to the general public. If there's a defunct social media account you'd rather keep hidden, but it's on the first page of your search results, that might be a good signal for you to finally delete that account . If you shared your cellphone number with an organization you volunteer for and it's on their home page, you can ask them to take it down.

Knowledge is power. It's important to know what search results are out there about you, so you understand what people see when they look for you. Once you have this overview, you can make better choices about your online privacy.

Tip 12: Tell “People Search” Sites to Delete Your Information

When you search online for someone’s name, you’ll likely see results from people-search sites selling their home address, phone number, relatives’ names, and more. People-search sites are a type of data broker with an especially dangerous impact . They can expose people to scams, stalking, and identity theft. Submit opt out requests to these sites to reduce the amount of personal information that is easily available about you online.

Check out this list of opt-out links and instructions for more than 50 people search sites, organized by priority. Before submitting a request, check that the site actually has your information. Here are a few high-priority sites to start with:

Data brokers continuously collect new information, so your data could reappear after being deleted. You’ll have to re-submit opt-outs periodically to keep your information off of people-search sites. Subscription-based services can automate this process and save you time, but a Consumer Reports study found that manual opt-outs are more effective.

Tip 13: Remove Your Personal Addresses from Search Engines

Your home address may often be found with just a few clicks online. Whether you're concerned about your digital footprint or looking to safeguard your physical privacy, understanding where your address appears and how to remove or obscure it is a crucial step. Here's what you need to know.

Your personal addresses can be available through public records like property purchases, medical licensing information, or data brokers. Opting out from data brokers will do a lot to remove what's available commercially, but sometimes you can't erase the information entirely from things like property sales records.

You can ask some search engines to remove your personal information from search indexes, which is the most efficient way to make information like your personal addresses, phone number, and email address a lot harder to find. Google has a form that makes this request quite easy, and we’d suggest starting there.

Come back tomorrow for another tip!

Join EFF Lists

[$] Explicit lazy imports for Python

Linux Weekly News
lwn.net
2025-10-20 17:30:26
Importing modules in Python is ubiquitous; most Python programs start with at least a few import statements. But the performance impact of those imports can be large—and may be entirely wasted effort if the symbols imported end up being unused. There are multiple ways to lazily import modules, inc...
Original Article

The page you have tried to view ( Explicit lazy imports for Python ) is currently available to LWN subscribers only.

Reader subscriptions are a necessary way to fund the continued existence of LWN and the quality of its content.

If you are already an LWN.net subscriber, please log in with the form below to read this content.

Please consider subscribing to LWN . An LWN subscription provides numerous benefits, including access to restricted content and the warm feeling of knowing that you are helping to keep LWN alive.

(Alternatively, this item will become freely available on October 30, 2025)

Continuously keep local mailboxes in sync with an IMAP server

Lobsters
whynothugo.nl
2025-10-20 17:27:16
Comments...
Original Article

ImapGoose is a small program to keep local mailboxes in sync with an IMAP server. The wording “keep […] in sync” implies that it does so continuously, rather than a one-time sync. ImapGoose is designed as a daemon, monitoring both the IMAP server and the local filesystem, and immediately synchronising changes. When the IMAP server receives an email, it shows up in the filesystem within a second. When an email is deleted on another email client, it is removed 1 from the filesystem within a second.

ImapGoose is highly optimised to reduce the amount of network traffic and tasks performed. To do so, it relies on a few modern IMAP extensions and only supports modern email servers. “Modern servers” in the context of email means servers which support extensions which were standardised between 2005 and 2009 .

ImapGoose uses the CONDSTORE extension ( standardised in 2006 ), which basically allows it to tell the server “I last saw this mailbox when it was in state XYZ, please tell me what’s new”. This avoids the need to download an entire message list (which can be tens of thousands of emails), making incremental syncs much more efficient. It also uses the QRESYNC extension ( standardised in 2008 ) so that the server includes a list of deleted messages too (i.e. VANISHED ). Finally, ImapGoose uses the NOTIFY extension ( standardised in 2009 ), which allows an IMAP client to tell the server “please let me know when there are changes to these mailboxes”, and then leave a connection open. NOTIFY has two nice consequences: (1) the client doesn’t need to ask the server if there have been any changes at regular intervals, and (2) the client is informed of any changes immediately, so they can be processed without delay. Unlike the older IDLE extension (from 1996), NOTIFY (from 2009) allows monitoring multiple mailboxes per connection, rather than just one.

In this article, I’ll cover some of the general design details, inner workings and other development details.

General mode of operation

[ permalink ]

First off, ImapGoose keeps a small status database with some minor metadata about the last-seen status of both the server and local Maildirs. This includes the mapping between server UIDs and filesystem filenames. Its general design is strongly inspired by how OfflineIMAP works .

At start-up, ImapGoose lists all mailboxes in the server and in the local filesystem. It then starts monitoring them (the server via NOTIFY, the client via inotify/kqueue), so we receive notifications of any changes that may happen after our initial listing. This ensures that, for example, if we receive a new email while performing the initial sync, we get a notification for it.

Once monitoring is set up, ImapGoose queues a task to perform a full sync of each mailbox. Initially, we determine if this is the first time we see this mailbox by its absence in the status database. If this mailbox has not been seen before, then we request all messages. The server returns all of these along with a HIGHESTMODSEQ , which we store in the status database. This HIGHESTMODSEQ is a numeric property of each mailbox and increases every time a change occurs inside that mailbox. If a mailbox has been seen before, then we can ask the server for changes since that HIGHESTMODSEQ , which delivers only the minimal amount of data which we need, and nothing else about all the other thousands of unchanged messages.

When a message is present in the server and absent in the filesystem (or vice versa), we need to determine whether it is a new message, or if it is a message that was previously present in both and deleted from the local filesystem. To determine this, we use the status database and apply the exact same algorithm as offlineimap. It’s simple and well tested.

At times, ImapGoose may disconnect from the server (for example, due to a laptop disconnecting from Wi-Fi, or going into sleep mode). It will try to re-connect automatically using an exponential back-off: after 1 second, then after 2 seconds, 4 seconds, 8 seconds, 16 seconds, 32 seconds,… all the way up to 17 minutes. Then it will continue retrying every 17 minutes. This means users don’t really have to worry about ImapGoose’s current state, whether it’s still working, etc. It knows how to back-off when there’s no network and how to get back to work when it is feasible again.

As mentioned above, ImapGoose “queues” sync tasks. Internally, it uses a task queue; when changes are detected on the server, a task to sync that entire mailbox is queued. A worker picks this up from the queue, asks for changes in that mailbox, and synchronises them. When changes are detected in the filesystem, a task to sync that particular message is queued. It may happen that multiple messages arrive in quick succession for the same mailbox. In this case, we don’t want to trigger multiple syncs of the same mailbox, and we especially don’t want two workers to sync the same mailbox concurrently: this would quickly lead to duplicate emails.

To work around concurrent syncs and redundant mailbox updates, ImapGoose uses a “dispatcher”, which hands off sync tasks to workers. When a task to sync a specific mailbox is handed to a worker, that mailbox is marked as “busy”, and we don’t process other tasks for that queue until that worker notifies that it has finished its work on that mailbox. While a worker is synchronising a mailbox, we may receive several notifications that changes have happened to that mailbox. These changes could be the result of the changes made by the worker, or they could be new emails being delivered, so we have to queue another task to sync that mailbox. These tasks are kept in queue until the worker frees up the mailbox, and the dispatcher additionally de-duplicates them: synchronising a mailbox just once after the last change notification is enough to synchronise the changes in all the notifications.

When a message changes in the filesystem, ImapGoose receives an inotify event. This doesn’t trigger a sync of the full mailbox, but instead a “targeted” sync, which focuses only on that email message. We know that a single message has changed, so there’s no point in re-scanning the thousands of messages in the mailbox. These targeted syncs are taken into account in deduplication; they only get de-duplicated if the path for them is the same.

While the connection which is listening for changes from the server is kept alive by sending periodic NOOP commands, the connections for workers are allowed to time out. If no activity is happening, these connections simply time out, but a connection is re-established once a worker needs it again. Great care has been taken to avoid unnecessary churn in all possible aspects.

Prior art

[ permalink ]

Before developing ImapGoose, I studied prior art in the field. In particular, offlineimap does a great job at synchronising mailboxes. However, it doesn’t “keep in sync” in the same way; offlineimap needs to execute periodic syncs, doesn’t rely on modern extensions, and tends to “hang” when there are network time-outs. ImapGoose is new and has no existing users, so it can just require modern extensions or declare other scenarios as unsupported. Existing tools have to maintain compatibility for existing users, which might rely on some legacy email server. If I couldn’t rely on NOTIFY, implementing ImapGoose in such a clean efficient way would not have been possible. If I couldn’t rely on CONDSTORE and QRESYNC , I would have had to download lists of thousands of emails each time even a single one changes. Thanks to UIDPLUS , the server returns the UID of a newly uploaded message, and we don’t need any ugly workarounds to retrieve it.

If someone needs to sync data from legacy servers, plenty of tools are still out there, providing the best experience which those servers can offer.

Development

[ permalink ]

When working on ImapGoose, I focused exactly on my needs for my particular use case: keep my local mailboxes in sync with an IMAP server. There’s no other supported scenario, there’s no fallback for legacy servers, and there’s no support for alternative email backends. All these constraints allowed me to focus on making a tool that’s great for a single use case: it does one thing and does it well.

I strongly believe that my keeping tight constraints (e.g.: focusing on just one use case, ignoring support for legacy servers, keeping things as simple as possible) helped develop this much faster and with much cleaner results.

I started with a very clear picture of how the whole thing would work. I was also familiar with go-imap , and knew it to be a well designed and well implemented IMAP library. My immense appreciation goes to emersion and the contributors who’ve worked on it. I didn’t need to worry about the inner details of talking to an IMAP server, parsing responses, tracking connection state, etc. go-imap provides a simple idiomatic Go interface for IMAP commands and their responses.

go-imap was lacking two features which I needed: support for the NOTIFY command and for VANISHED ( rfc5162 ). While still standing on the shoulders of giants, I implemented both of these and sent patches for both of them ( NOTIFY , VANISHED ). Until those are merged, ImapGoose is built using my own (temporary) fork which has those two patches applied.

Configuration

[ permalink ]

For configuration, I opted for the very simple and straightforward scfg configuration format. The configuration file looks something like:

account example {
    server imap.example.com:993
    username hugo@example.com
    password-cmd pass show email/example
    local-path ~/mail/example
}

Naming

[ permalink ]

I wanted something easy to remember, easy to pronounce and that won’t yield thousands of unrelated search engine results. There’s also room for an obvious mascot/logo: a goose wearing a postman’s hat carrying an envelope, using the colour palette from the Go ecosystem. Please reach out if you are an illustrator willing to contribute with artwork.

Open source

[ permalink ]

ImapGoose is open source and distributed under the terms of the ISC licence . The source code is available via git . Feedback is welcome, including bug reports.

Self-spreading GlassWorm malware hits OpenVSX, VS Code registries

Bleeping Computer
www.bleepingcomputer.com
2025-10-20 17:13:20
A new and ongoing supply-chain attack is targeting developers on the OpenVSX and Microsoft Visual Studio marketplaces with self-spreading malware called GlassWorm that has been installed an estimated 35,800 times. [...]...
Original Article

Self-spreading GlassWorm malware hits OpenVSX,  VS Code registries

A new and ongoing supply-chain attack is targeting developers on the OpenVSX and Microsoft Visual Studio marketplaces with self-spreading malware called GlassWorm that has been installed an estimated 35,800 times.

The malware hides its malicious code by using invisible characters. It can also spread itself using stolen account information to infect more extensions the victim can access.

GlassWorm operators use Solana blockchain for command-and-control, making takedown very difficult, with Google Calendar as backup option.

Microsoft Visual Studio and the OpenVSX platforms host extensions and integrations for Visual Studio products and are constant targets of threat actors looking to steal cryptocurrency [ 1 , 2 , 3 ].

Researchers at endpoint security provider Koi found that the current GlassWorm campaign relies on "invisible Unicode characters that make malicious code literally disappear from code editors."

The "invisible" malicious code
Hidden malicious code
Source: Koi Security

Once installed, the malware attempts to steal credentials for GitHub, npm, and OpenVSX accounts, as well as cryptocurrency wallet data from 49 extensions.

Additionally, GlassWorm deploys a SOCKS proxy to route malicious traffic through the victim’s machine and installs VNC clients (HVNC) for invisible remote access.

The worm has a hardcoded wallet with transactions on the Solana blockchain that provide base64-encoded links for the next-stage payloads. According to the researchers, the final payload is called ZOMBI and is a "massively obfuscated JavaScript" code that turns infected systems into nodes for the cybercriminal activities.

"GlassWorm's final stage - the ZOMBI module - transforms every infected developer workstation into a node in a criminal infrastructure network," Koi Security says .

Using the blockchain to hide payloads is a method that has been gaining traction due to the multiple operational benefits it offers, including resilience to takedowns, anonymity, low cost, and flexibility for updates.

The Solana transaction that fetches the next-stage
Solana transaction that fetches next-stage payload
Source: Koi Security

A backup method for sourcing payloads involves a Google Calendar event title that includes a base64-encoded URL. A third delivery mechanism uses direct connection to the IP address 217.69.3[.]218.

For further evasion and resilience, the malware uses BitTorrent’s Distributed Hash Table (DHT) for decentralized command distribution.

Researchers found at least eleven extensions infected by GlassWorm on OpenVSX and one on Microsoft’s VS Code Marketplace:

  1. codejoy.codejoy-vscode-extension@1.8.3 and 1.8.4
  2. l-igh-t.vscode-theme-seti-folder@1.2.3
  3. kleinesfilmroellchen.serenity-dsl-syntaxhighlight@0.3.2
  4. JScearcy.rust-doc-viewer@4.2.1
  5. SIRILMP.dark-theme-sm@3.11.4
  6. CodeInKlingon.git-worktree-menu@1.0.9 and 1.0.91
  7. ginfuru.better-nunjucks@0.3.2
  8. ellacrity.recoil@0.7.4
  9. grrrck.positron-plus-1-e@0.0.71
  10. jeronimoekerdt.color-picker-universal@2.8.91
  11. srcery-colors.srcery-colors@0.3.9
  12. cline-ai-main.cline-ai-agent@3.1.3 (Microsoft VS Code)

The researchers say that seven extensions on OpenVSX were compromised on October 17 and more infections followed over the next couple of days on both OpenVSX and VS Code. Koi Security notes that the full impact is 35,800 active GlassWorm installations.

"Here's what makes this particularly urgent: VS Code extensions auto-update. When CodeJoy pushed version 1.8.3 with invisible malware, everyone with CodeJoy installed got automatically updated to the infected version. No user interaction. No warning. Just silent, automatic infection," the researchers say.

At publishing time, at least four of the compromised extensions Koi Security found, were still available for download on OpenVSX. Microsoft has removed the malicious extension frrom its marketplace following the researchers' alert.

The publishers of vscode-theme-seti-folder and git-worktree-menu have updated the extensions to remove the malicious code.

Function that targets developers' secrets
Function that targets developers' secrets
Source: Koi Security

Last month, a similar worm-style attack dubbed “Shai-Hulud” hit the npm ecosystem, compromising 187 packages. The malware used the TruffleHog scanning tool to identify secrets, passwords, and sensitive keys.

Koi Security says that GlassWorm "is one of the most sophisticated supply chain attack" and the first documented case of a worm-like attack on VS Code.

The C2 and payload servers in the GlassWorm campaign remain active, the researchers warn. On Saturday, there were still ten extensions actively distributing the malware.

Kohler launches smart toilet camera

Hacker News
techcrunch.com
2025-10-20 17:12:32
Comments...
Original Article

In Brief

Posted:

Product image of the Kohler Dekoda
Image Credits: Kohler

Home goods company Kohler recently unveiled a new device called the Dekoda — a $599 camera that can be attached to your toilet bowl and take pictures of what’s inside.

CNET reports that the Dekoda analyzes these images in order to provide updates on your gut health and hydration, and to potentially detect blood. It also comes with a rechargeable battery, a USB connection, and a fingerprint sensor to identify who’s using the toilet.

The Dekoda is currently available for pre-order , with shipments scheduled to begin on October 21. In addition to the hardware purchase fee, customers will need to pay between $70 and $156 per year for a subscription.

If you’re uneasy about the privacy implications of putting a camera right below your private parts, the company says, “Dekoda’s sensors see down into your toilet and nowhere else.” It also notes that the resulting data is secured via end-to-end encryption.

Kohler isn’t the only company seeking to take pictures of your poop — we’ve also written about the toilet camera offered by a startup called Throne .

Newsletters

Subscribe for the industry’s biggest tech news

Related

Latest in Gadgets

What I Self Host

Hacker News
fredrikmeyer.net
2025-10-20 17:07:52
Comments...
Original Article

I’ve always liked reading blogs, and have used several feed readers in the past (Feedly, for example). For a long time I was thinking it would be fun to write my own RSS reader, but instead of diving into the challenge, I did the next best thing, which was finding a decent one, and learning how to self host it.

In this post I will tell about the self hosting I do, and end by sketching the setup.

Miniflux is a “minimalist and opinionated feed reader”. I host my own instance at https://rss.fredrikmeyer.net/ It is very easy to set up using Docker, see the documentation .

Miniflux

I do have lots of unread blog posts 🤨.

Grafana, Strava Integration

I host a Grafana instance, also using Docker. What first triggered me to make this instance was an old project (that I want to revive one day): I had a Raspberry Pi with some sensors measuring gas and dust at my previous apartment, and a Grafana dashboard showing the data. It was interesting seeing how making food at home had a measurable impact on volatile gas levels.

Later I discovered the Strava datasource plugin for Grafana. It is a plugin that lets Grafana connect to the Strava API, and gives you summaries of your Strava activities. Below is an example of how it looks for me:

Grafana

One gets several other dashboards included in the plugin.

Spotify

One day YourSpotify was mentioned on HackerNews. It is an application that connects to the Spotify API, and gives you aggregated statistics of artists and albums you’ve listened to over time (why they chose to store the data in MongoDB I have no idea of!).

YourSpotify

It is interesting to note that I have listened to less and less music over the years (I have noticed that the more experience I have at work, the less actual programming I do).

Because I didn’t bother setting up DNS, this one is only exposed locally, so I use Tailscale to be able to access YourSpotify. This works by having Tailscale installed on the host, and connecting to the Tailnet. It lets me access the application by writing http://forgottensuperhero:3001/ in the browser.

Bookmark manager

I have a problem with closing tabs, and a tendency to hoard information (don’t get me started on the number of unread PDF books on my Remarkable!). So I found Linkding, a bookmark manager, which I access at https://links.fredrikmeyer.net/bookmarks/shared .

LinkDing

In practice it is a grave yard for interesting things I never have the time to read, but it gives me peace some kind of peace of mind.

How

I have an ambition of making the hosting “production grade”, but at the moment this setup is a mix of practices of varying levels of quality.

I pay for a cheap droplet at DigitalOcean , about $5 per month, and an additional dollar for backup. The domain name and DNS is from Domeneshop . SSL certificates from Let’s Encrypt .

All the apps run in different Docker containers, with ports exposed. These ports are then listened to by Nginx, which redirects to HTTPS.

I manage most of the configuration using Ansible. Here I must give thanks to Jeff Geerling ’s book Ansible for DevOps , which was really good. So if I change my Nginx configuration, I edit it on my laptop, and run

ansible-playbook -i inventory.ini docker.yml  --ask-become-pass

to let Ansible do its magic. In this case, “most” means the Nginx configuration and Grafana.

Miniflux and YourSpotify are managed by simply doing scp spotify_stats.yml droplet:~ and running sudo docker-compose -f ./spotify_stats.yml up -d on the host.

Ideally, I would like to have a 100% “infrastructure as code” approach, but hey, who has time for that!

Ideas for the future

It would be nice to combine AI and user manuals of house appliances to make an application that lets you ask questions like “what does the red light on my oven mean?”. Or write my own Jira, or… Lots of rabbit holes in this list on Github.

Until next time!

Some Smalltalk about Ruby Loops

Lobsters
tech.stonecharioteer.com
2025-10-20 16:59:04
Comments...
Original Article

Let’s talk about loops.

1
2
3
for i in (1..10) do
  puts "i= #{i}"
end

Three weeks ago, when I was new to Ruby, I focussed on this. This was a loop. I mean, I come from Python and that’s what loops look like.

I was wrong, wasn’t I?

1
10.times { |i| puts "i = #{i}" }

If you haven’t read my post on Ruby blocks, you should. And then come back to this.

Damn, Ruby. Why do you need to be scratching my brain in ways I didn’t think were possible all the time?

Where’s that meme?

Khaby Lazy Meme

What’s so great about this? Either you’ve never written Ruby before or you have your eyes closed.

That wasn’t a “loop” now was it? It looks like you’re simply saying “do this 10 times”, but I’m fresh off the heels of diving into blocks and I know that .times is a method call and the { |i| puts "i = #{i}" } is a block.

We can inspect the class of a Ruby object like this:

1
2
puts 10.class # Integer
puts 10.respond_to?(:times) # true

Those are method calls. But what is a method call?

I come from Python, and in Python, a method call looks like this:

1
object.method(argument)

I’m hand-waving a lot of the details away, but here, we’re calling a method on an object. The object has the method. The method does something .

In Ruby, that analogy isn’t exactly right. You’re not calling a method. You’re sending a message.

1
2
10.times { puts "hi" }
10.send(:times) { puts "hi" }

If you use .send you might understand this better. You are sending the message :times to the object 10 .

The object 10 receives that message and then decides what to do with it. You could say this is a segue into philosophy.

How do you code? You write logic in a form that the computer can understand, and you use the tools you’ve been given by the language of your choice. It’s electronic telepathy, albeit an extremely broken form of it. The code represents what you want to do.

By saying:

1
2
for i in range(10)
  print(f"i = {i}")

You are not saying “print 5 times”, although that’s what your brain interprets it as.

You are saying:

  • Make an iterable object that yields integers from 0 to 9.
  • Iterate through this object, and assign the yielded value to i at each turn.
  • Upon each iteration, print "i = #{i}" to the screen.
  • Break out of the loop when you exhaust the iterator.

That’s a mouthful. It might indicate the same thing as:

1
10.times { |i| puts "i = #{i}" }

But it isn’t.

In the ruby block, you are saying:

  • Call the :times method on the Integer object of value 10 ;
  • For each iteration, pass the block { |i| puts "i = #{i}" } .

That… is lesser?

No, it’s different .

In Python, you indicate your purpose by controlling the loop. The code doesn’t attach the intent to the number 10 or to the range(10) iterator.

In Ruby, you are asking the object to do something . You are directly attaching the meaning of your code to the Integer of value 10 .

.send is the inner mechanic that orchestrates this. If you’ve done any amount of Python metaprogramming, you might have seen something like

1
2
method = obj.getattr("method_name")
method.__call__(args)

Note the difference in names here. It’s subtle enough to blink and miss. Python says __call__ . Ruby says __send__ .

If you’ve sat on the Erlang fence for as long as I have (I’ll learn it yet one day!), your message passing alarms are going off, aren’t they?

But this isn’t from Erlang, in fact there’s nothing similar besides the metaphor, it’s from Smalltalk. And this style of programming philosophy is Protocol over Syntax .

Meet Smalltalk

I’ve had a bunch of people tell me that Ruby is inspired by Smalltalk . I’d never written or read a line of Smalltalk before today, but I thought I’d check it out to understand Ruby’s philosophy a little.

1
5 timesRepeat: [ Transcript show: 'Hello' ].

Okay, that’s familiar enough, isn’t it?

1
1 to: 10 do [:i | Transcript show: i].

Okay, that looks similar to:

1
1.upto(10) { |i| puts i }

How about iterating over a list?

1
#(1 2 3 4 5) do: [:each | Transcript show: each ].
1
[1, 2, 3, 4, 5].each { |each| puts each }

To understand this paradigm, consider a PseudoInteger class that behaves like an integer but overwrites the times method. For brevity I’m not going to implement any of the other methods.

 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
class PseudoInteger
  def initialize(value)
    @value = value
  end

  def times(&block)
    puts "I'm going to run #{@value} times"

    @value.times do |y|
      block.call(y)
    end
    puts "Done with the block calls."
  end
end

i = PseudoInteger.new(10)
i.times { |x| puts x }

# I'm going to run 10 times
# 0
# 1
# 2
# 3
# 4
# 5
# 6
# 7
# 8
# 9
# Done with the block calls.

I’ve said this earlier, but here, the object PseudoInteger controls the iteration through .times . But what if the object chose to lie to you about what it’s doing?

 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
13
14
15
16
17
18
class LyingPseudoInteger
  def initialize(value)
    @value = value
  end

  def times(&block)
    puts "I'm going to run #{@value} times"
    block.call(0)
    puts "Done with the block calls."
  end
end

i = LyingPseudoInteger.new(10)
i.times { |x| puts x }

# I'm going to run 10 times
# 0
# Done with the block calls

The new class LyingPseudoInteger does not do what it says it does .

Leaving the handling of the loop to the method allows us to add behaviour to loops that are controlled by the object and not by the user. This is a nice way to add side-effects.

But what does this have to do with message-passing?

In most languages, methods are bound to objects at compile time (or at least, the languages pretend they are). In Smalltalk, and by association Ruby, methods are looked up at runtime in response to messages.

In Python, Java or C++, calling object.method() asks the compiler to find the method in the class and call it.

In Ruby, 10.times { puts 'hi' } is literally interpreted as 10.send(:times) { puts 'hi' } . , which is equivalent to 10.public_send(:times) { puts 'hi' } .

The :times is a symbol. I haven’t written about Symbols yet, that’s a whole other rabbithole that I have yet to go down in.

But with this sort of implementation, we could (but shouldn’t), do this :

1
2
3
4
5
class Integer
  undef_method :times
end

10.times { puts 'hi' } # -> NoMethodError : undefined method 'times' for 10:Integer

The Ruby interpreter is saying “I have received a message :times but I don’t know how to respond to it.”

But I’m still not convinced, so let’s try this.

1
2
3
4
5
6
7
8
9
class Integer
  def method_missing(method_name, *args, &block)
    puts "I received a message I don't understand: #{method_name}"
    puts "I could do anything here - respond, forward it, ignore it..."
    puts "This is message passing - the message exists independently of the method"
  end
end

10.foobar # I received a message I don't understand: foobar...

In traditional method calling, this would be a compile-time error. This method doesn’t exist, so compilation fails. In message passing, the message got sent. The object receives it and then says “I do not know this message.” This is what we can catch.

But Python also looks up methods at runtime. Does that mean Python also does message passing?

Not quite.

In Python, you’d access methods like this:

 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
class MyClass:
    def my_method(self):
      return "Hello"

obj = MyClass()

# Methods are just attributes here.
print(obj.__dict__) # {}
print(MyClass.__dict__) # {... 'my_method': <function>...}

method = getattr(obj, "my_method")
method()

However, getattr is generic, it’s for all attributes. Not just methods. You could call getattr(obj, 'anything) and get a property called anything on the object.

In Python, “calling a method” is interpreted as “look up an attribute that implements __call__ ”.

Ruby does something different.

1
2
3
4
5
6
7
8
class MyClass

  def my_method
    "hello"
  end
end

obj = MyClass.new

You can explicitly SEND messages : obj.send(:my_method) .

The language has built-in constructs for message handling.

1
2
obj.respond_to?(:my_method)     # -> Can you respond to this message?
obj.method(:my_method)          # -> Give me the method that corresponds to this message

And we can intercept unknown messages with .method_missing as shown above. Ruby treats “calling a method” as “sending a message to an object.”

While Python is also interpreted, any attempt to attach middleware to this sort of method resolution is wrapped in attribute access interception . You are always getting attributes .

 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
class Spy:
    def __getattribute__(self, name):
        print(f"Looking up attribute: {name}")
        return super().__getattribute__(name)

    def hello(self):
        return "hi"

spy = Spy()
spy.hello()
# Looking up attribute: hello
# "hi"

In Ruby, you’re intercepting messages .

 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
13
14
15
16
17
class Spy
  def method_missing(name, *args, &block)
    puts "Received message: #{name}"
    super
  end

  def hello
    "hi"
  end
end

spy = Spy.new
spy.hello
# "hi"

spy.foobar
# Received message: foobar

The difference is in the philosophy. Python gets the attribute method on obj and if it’s callable, it calls it. Ruby sends the :method message to obj . This emphasizes object autonomy: the object receives a message and chooses its response.

This is taken to the extreme when you realize that Ruby has primitives (Symbols) that you can use to treat message passing like the first class concept that it is in Ruby.

 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
13
msg = :times
puts msg.class          # Symbol

# You can get the method that will respond
method = 10.method(msg) 
puts method.class       # Method

# You can call it later
method.call { |i| puts i }

# You can see what messages an object responds to:

puts 10.methods.grep(/time/) # [:times]

In Python, doing this is rather roundabout.

1
2
3
4
5
method = getattr(10, "times", None)

# method is None

# You can use `dir()` to get all the attributes in an object, but you'd have to filter them for methods yourself.

But what is a loop?

In Ruby, you can iterate through anything that includes Enumerable .

 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
13
14
15
16
17
18
19
20
class Countdown
  include Enumerable
  def initialize(from)
    @from = from
  end

  def each
    current = @from
    while current > 0
      yield current
      current -= 1
    end
  end
end

c = Countdown.new(5)
c.each { |x| puts x }
c.map { |x| x * 2 }
c.select { |x| x.even? }
c.reduce(:*)

By including Enumerable and by implementing each , we get access to so many methods that we didn’t need to manually implement. This is the same as the Collection protocol from Smalltalk - implementing :do: gives us the methods we need.

Why For Feels Like It Doesn’t Belong

By now, you should look at for as an alien thing. It doesn’t feel like it belongs. But why is that?

for is syntax, and Ruby, by design, encourages message passing.

1
2
3
4
5
6
7
[1, 2, 3].each { |x| y = x }
puts y                        # NameError

for x in [1, 2, 3]
  y = x
end
puts y # 3

Using for pollutes the namespace, spilling variables over to the including scope. But that’s not the only thing. for is syntactical sugar. It’s not message passing.

In a sense, the familiarity of the loop construct deceives you. I reached towards for in my first days learning Ruby thinking that’s the way to loop. Yet, Rubyists encouraged me to look into .each and .times to write loops. I wanted to understand that choice better. It’s a different paradigm, because you could do something in Python that does the same thing, but it’s still not the same. You’d still shoehorn in the loop syntax and use that. And of course it also begs the question: why should you?

The Beauty of Protocol Over Syntax

In Smalltalk, there is no for loop. Everything is message passing.

To loop 5 times you send :timesRepeat to 5 .

To iterate over a collection, send :do: to the collection.

To filter, send :select: to the collection.

Matz took this philosophy and ran with it.

1
2
3
4
5.times { puts 'Hello' }         # timesRepeat:
[1, 2, 3].each { |x| puts x }    # do:
[1, 2, 3].select { |x| x.even? } # select:
[1, 2, 3].map { |x| x * 2 }      # collect: 

Understanding the message passing philosophy helps you map Ruby’s choices.

  • Integers have .times because they need respond to iteration messages.
  • Arrays have .each for the same reason.
  • Adding .countdown to an Integer teaches it to respond to a new message.
  • Adding include Enumerable to a class and implementing .each gives you the ability to send multiple types of iterative messages to objects of that class, because that implements the Collection Protocol.
  • for doesn’t send messages the same way.

Writing Ruby in Ruby and Python in Python is important. I’ve been a strong believer of learning how to write idiomatic code in a programming language. You cannot transplant features between languages without coding with an “accent”.

Asking an object to iterate over itself allows objects to develop interfaces that dictate how to iterate. This is at the heart of the message passing paradigm that Ruby and Smalltalk use.

Writing Ruby encourages you to embrace this, and that helps you build a sense of style that it comes with.

I’ve noticed this of Ruby. Python, a language I love, looks different depending on who’s writing it. Ruby coaxes you to develop this sense of style, whether you want to spend a weekend learning why it does this or not.

And now, when I see:

1
10.times { |i| puts "i = #{i}" }

I do not see a loop anymore. I see an object responding to a message. I see Smalltalk’s legacy. I see protocol over syntax.

Microsoft fixes Windows Server Active Directory sync issues

Bleeping Computer
www.bleepingcomputer.com
2025-10-20 16:58:21
Microsoft is rolling out a fix for Active Directory issues affecting some Windows Server 2025 systems after installing security updates released since September. [...]...
Original Article

Windows Server

Microsoft is rolling out a fix for Active Directory issues affecting some Windows Server 2025 systems after installing security updates released since September.

As Redmond explained when it acknowledged it on Tuesday, this known issue affects Active Directory Domain Services (AD DS) synchronization, including Microsoft Entra Connect Sync.

"Applications that use the Active Directory directory synchronization (DirSync) control for on-premises Active Directory Domain Services (AD DS), such as when using Microsoft Entra Connect Sync, can result in incomplete synchronization of large AD security groups exceeding 10,000 members," Microsoft said .

"This issue occurs only on Windows Server 2025 after installing the September 2025 Windows security update (KB5065426), or later updates."

Microsoft now allows IT administrators to fix this bug on managed devices by installing and configuring this Known Issue Rollback Group Policy on impacted Windows devices.

Admins can find more information on deploying and configuring KIR group policies on Microsoft's support website .

Until next month's Patch Tuesday when the fix will rollout to all customers, the issue can also be resolved on non-managed business devices and for most home users by adding the following registry key as soon as possible to avoid Microsoft Entra Connect Sync disruptions:

Path: Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Policies\Microsoft\FeatureManagement\Overrides
Name: 2362988687
Type: REG_DWORD
Value: 0

Microsoft is also working to fix a bug affecting Windows 11 24H2 and Windows Server 2025 devices and causing Windows update failures when using the Windows Update Standalone Installer (WUSA) to install updates from a network share.

On Friday, the company also provided guidance on addressing smart card authentication issues impacting all Windows 10, Windows 11, and Windows Server releases after installing the October 2025 Windows security updates.

One day earlier, Microsoft fixed another known issue breaking HTTP/2 localhost (127.0.0.1) connections after installing recent Windows security updates and removed two compatibility holds that blocked users from Windows 11 upgrades via Windows Update.

Production RAG: what I learned from processing 5M+ documents

Hacker News
blog.abdellatif.io
2025-10-20 16:55:36
Comments...
Original Article

October 20, 2025 3 min read

I've spent the past 8 months the RAG trenches, I want to share what actually worked vs. wasted our time. We built RAG for Usul AI (9M pages) and an unnamed legal AI enterprise (4M pages).

Langchain + Llamaindex

We started out with youtube tutorials. First Langchain -> Llamaindex. Got to a working prototype in a couple of days and were optimistic with the progress. We run tests on subset of the data (100 documents) and the results looked great. We spend the next few days running the pipeline on the production dataset and got everything working in a week — incredible.

Except it wasn't, the results were subpar and only the end users could tell. We spent the following few months rewriting pieces of the system, one at a time, until the performance was at the level we wanted. Here are things we did ranked by ROI.

What moved the needle

  1. Query Generation : not all context can be captured by the user's last query. We had an LLM review the thread and generate a number of semantic + keyword queries. We processed all of those queries in parallel, and passed them to a reranker. This made us cover a larger surface area and not be dependent on a computed score for hybrid search.
  2. Reranking : the highest value 5 lines of code you'll add. The chunk ranking shifted a lot . More than you'd expect. Reranking can many times make up for a bad setup if you pass in enough chunks. We found the ideal reranker set-up to be 50 chunk input -> 15 output.
  3. Chunking Strategy : this takes a lot of effort, you'll probably be spending most of your time on it. We built a custom flow for both enterprises, make sure to understand the data, review the chunks, and check that a) chunks are not getting cut mid-word or sentence b) ~each chunk is a logical unit and captures information on its own
  4. Metadata to LLM : we started by passing the chunk text to the LLM, we ran an experiment and found that injecting relevant metadata as well (title, author, etc.) improves context and answers by a lot.
  5. Query routing : many users asked questions that can't be answered by RAG (e.g. summarize the article, who wrote this). We created a small router that detects these questions and answers them using an API call + LLM instead of the full-blown RAG set-ups.

Our stack

  • Vector database : Azure -> Pinecone -> Turbopuffer (cheap, supports keyword search natively)
  • Document Extraction : Custom
  • Chunking : Unstructured.io by default, custom for enterprises (heard that Chonkie is good)
  • Embedding : text-embedding-large-3, haven't tested others
  • Reranker : None -> Cohere 3.5 -> Zerank (less known but actually good)
  • LLM : GPT 4.1 -> GPT 5 -> GPT 4.1, covered by Azure credits

Going Open-source

We put all our learning into an open-source project: agentset-ai/agentset under an MIT license. Feel free to reach out if you have any questions.

How Artists Are Keeping 'The Lost Art' of Neon Signs Alive

403 Media
www.404media.co
2025-10-20 16:47:58
Watch 404 Media’s first short documentary, about an artist keeping real neon signs alive....
Original Article

Next to technicolor neon signs featuring Road Runner, an inspirational phrase that says “everything will be fucking amazing,” and a weed leaf, Geovany Alvarado points to a neon sign he’s particularly proud of: “The Lost and Found Art,” it says.

“I had a customer who called me, it was an old guy. He wanted to meet with someone who actually fabricates the neon and he couldn’t find anyone who physically does it,” Alvarado said. “He told me ‘You’re still doing the lost art.’ It came to my head that neon has been dying, there’s less and less people who have been learning. So I made this piece.”

For 37 years, Alvarado has been practicing “the lost and found art” of neon sign bending, weathering the general ups and downs of business as well as, most threateningly, the rise of cheap LED signs that mimic neon and have become popular over the last few years.

“When neon crashed and LED and the big letters like McDonald’s, all these big signs—they took neon before. Now it’s LED,” he said. In the last few years, though, he said there has been a resurgent interest in neon from artists and people who are rejecting the cheap feel of LED. “It came back more like, artistic, for art. So I’ve been doing 100 percent neon since then.”

At his shop, Quality Neon Signs in Mid-City Los Angeles, there are signs in all sorts of states of completion and functionality strewn about Alvarado’s shop: old, mass-produced beer advertisements whose transformers have blown and are waiting for him to repair them, signs in the shapes of soccer and baseball jerseys, signs with inspirational phrases (“Everything is going to be fucking amazing,” “NEED MONEY FOR FAKE ART”), signs for restaurants, demonstration tubes that show the different colors he offers, weed shop signs, projects he made when he was bored. There are projects that are particularly meaningful to him: a silhouette he made of his wife holding their infant daughter, and a sign of the Los Angeles skyline with a wildfire burning in the background, “just to represent Los Angeles,” he said. There are old little bits of tube that have broken off of other pieces. “We save everything,” Alvarado said, “in case we want to fix it or need it for a repair.” His workshop, a few minutes away, features a “Home Sweet Home” sign,” a sign he made years ago for Twitter/Chanel collaboration featuring the old Twitter bird logo, and a sign for the defunct Channing Tatum buddy cop show Comrade Detective .

The overwhelming majority of signs Alvarado sells are traditional neon glass. The real thing. But he does offer newer LED faux-neon signs to clients who want it, though he doesn’t make those in-house. Alvarado says he sells LED to keep up with the times and because they can be more practical for one-off events because they are less likely to break in transit, but it’s clear that he and the overwhelming majority of neon sign makers think the LED stuff is simply not the same. Most LED signs look cheaper and do not emit the same warmth of light, but are more energy efficient.

I asked two neon sign creators about the difference while I was shopping for signs. They said they think the environmental debate isn’t quite as straightforward as it seems because a lot of the LED signs they make seem to be for one-off events, meaning many LED signs are manufactured essentially for a single use and then turned into e-waste. Many real neon signs are bought as either artwork or are bought by businesses who are interested in the real aesthetic. And because they are generally more expensive and are handmade, they are used for years and can be repaired indefinitely.

I asked Alvarado to show me the process and make a neon sign for 404 Media, which I’ve wanted for years. It’s a visceral, loud, scientific process, with gas-powered burners that sound like jet engines heating up the glass tubes to roughly 1,000 degrees so they can be bent into the desired shapes. When he first started bending neon, Alvarado says he used to use an overheard projector and a transparency to project a schematic onto the wall. These days, he mocks up designs on a computer aided design program and prints them out on a huge printer that uses a sharpie to draw the schematic. He then painstakingly marks out his planned glass bends on the paper, lining up the tubes with the mockup as he works.

“You burn yourself a lot, your hands get burnt. You’re dealing with fire all the time,” Alvarado said. He burned himself several times while working on my piece. “For me it’s normal. Even if you’re a pro, you still burn yourself.” Every now and then, even for someone who has been doing this for decades, the glass tubes shatter: “You just gotta get another stick and do it again,” he said.

After bending the glass and connecting the electrodes to one end of the piece, he connects the tubes to a high-powered vacuum that sucks the air out of them. The color of the light in Alvarado’s work is determined by a powdered coating within the tubes or a different colored coating of the tubes themselves; the type of gas and electrical current also changes the type and intensity of the colors. He uses neon for bright oranges and reds, and argon for cooler hues.

Alvarado, of course, isn’t the only one still practicing the “lost art” of neon bending, but he’s one of just a few commercial businesses in Los Angeles still manufacturing and repairing neon signs for largely commercial customers. Another, called Signmakers, has made several large neon signs that have become iconic for people who live in Los Angeles. The artist Lili Lakich has maintained a well-known studio in Los Angeles’ Arts District for years and has taught “The Neon Workshop” to new students since 1982, and the Museum of Neon Art is in Glendale, just a few miles away.

A few days after he made my neon sign, I was wandering around Los Angeles and came across an art gallery displaying Tory DiPietro’s neon work, which is largely fine art and pieces where neon is incorporated to other artworks; a neon “FRAGILE” superimposed on a globe, for example. Both DiPietro and Alvarado told me that there are still a handful of people practicing the lost art, and that in recent years there’s been a bit of a resurgent interest in neon, though it’s not that easy to learn.

On the day I picked up my sign, there were two bright green “Meme House” signs for a memecoin investor house in Los Angeles that Alvarado said he had bent and made immediately after working on the 404 Media sign. “I was there working til about 11 p.m.” he said.

About the author

Jason is a cofounder of 404 Media. He was previously the editor-in-chief of Motherboard. He loves the Freedom of Information Act and surfing.

Jason Koebler

Photos From Saturday’s ‘No Kings’ Protests Across the U.S.

Daring Fireball
www.theatlantic.com
2025-10-20 16:40:41
Picking up steam, these protests are.  ★  ...
Original Article

Organizers estimated that millions of Americans attended rallies in cities and towns across America, protesting the policies and actions of President Trump and his administration.

Updated at 12:30 p.m. ET on October 19, 2025

A person dressed in a Statue of Liberty costume participates in a

Timothy A. Clary / AFP / Getty

A person dressed in a Statue of Liberty costume participates in a “No Kings” national day of protest in New York on October 18, 2025.

A crowd of protesters marches downa street, carrying signs.

Alyssa Pointer / Reuters

People attend a “No Kings” protest against American President Donald Trump’s policies, in Atlanta on October 18, 2025.

A person watches protesters march past, hands clasped beneath their chin.

Alyssa Pointer / Reuters

A person looks on during the “No Kings” protest in Atlanta on October 18, 2025.

A large number of protestors hold signs on a street corner.

Andrew West / The News-Press / USA Today / Reuters

A large number of protesters hold signs at the corner of Daniels Parkway and U.S. 41 in Fort Myers, Florida, on October 18, 2025.

An elevated view of a large crowd of protesters in a square in Chicago.

Jacek Boczarski / Anadolu / Getty

Demonstrators gather at Butler Field in Grant Park for the “No Kings! Rally and March” in Chicago on October 18, 2025.

A crowd of protesters gathers in front of Florida's Old Capitol Building.

Ana Goñi-Lessan / USA Today / Reuters

A scene from the October 18, 2025, “No Kings” protest in Tallahassee, Florida, which drew large crowds to both sides of the street in front of Florida’s old Capitol.

A large crowd of protesters hold signs, including one that says

Allison Robbert / AP

Crowds gather to listen to Senator Bernie Sanders during a “No Kings” protest in Washington, D.C.

A protester waves a flag, while holding a sign that reads

Roberto Schmidt / AFP / Getty

People participate in a “No Kings” national day of protest in Washington, D.C., on October 18, 2025.

Protesters march toward the U.S. Capitol building. Some carry a large replica of the U.S. Constitution.

Leah Millis / Reuters

Demonstrators gather near the U.S. Capitol building during a “No Kings” protest on October 18, 2025.

Two protesters wearing inflatable costumes of Squidward, a character from the SpongeBob SquarePants TV show, carry signs, including one that says

Bill Clark / CQ-Roll Call, Inc / Getty

Protesters march down Pennsylvania Avenue for the second “No Kings” protest in Washington, D.C., on October 18, 2025.

A group of protesters, hold signs, including one that reads

Grant Baldwin / Getty

Protesters gather at a main traffic intersection in Waxhaw, North Carolina, in support of the nationwide “No Kings” rallies on October 18, 2025.

A large group of protesters cheer and hold signs.

Benjamin Chambers / Delaware News / USA Today / Reuters

Thousands of people showed up for the second “No Kings” rally along North Bancroft Parkway, between Delaware Avenue and 17th Street, in Wilmington, Delaware, on October 18, 2025.

An aerial view of hundreds of people gathered on a beach, many holding up a large American flag.

Laure Andrillon / AFP / Getty

This aerial picture taken on October 18, 2025, shows protesters holding an American flag during the “No Kings” national day of protest on Ocean Beach in San Francisco.

An elevated view of a large crowd of protesters

Albert Cesare / The Enquirer / USA Today / Reuters

People gather for the “No Kings” protest at Smale Riverfront Park in Cincinnati on October 18, 2025.

A group of protesters gathers in front of a building in Iowa City.

Julia Hansen / Iowa City Press-Citizen / USA Today / Reuters

Protesters gather for a “No Kings” protest on October 18, 2025, in downtown Iowa City.

A group of protesters march in a Montana town, with mountains, visible in the background.

William Campbell / Getty

Residents in Livingston, Montana, participate in a “No Kings” protest march down Main and Park Streets on October 18, 2025.

A group of protesters hold signs near a fellow protester wearing an inflatable frog costume.

Jan Sonnenmair / Getty

Protesters wear inflatable costumes for the “No Kings” protest on October 18, 2025, in Memphis.

Protesters carry signs down a New York City street, including one that reads simply

Timothy A. Clary / AFP / Getty

Protesters march through New York City on October 18, 2025.

A large group of protesters carry signs and American flags on a city street.

Spencer Platt / Getty

Thousands of people participate in a “No Kings” protest in Manhattan on October 18, 2025.

Demonstrators holding signs stand on either side of a road.

Megan Mendoza / The Arizona Republic / USA Today / Reuters

In Scottsdale, Arizona, protesters crowd the sidewalks along Scottsdale Road and Camelback Road for a “No Kings” protest on October 18, 2025.

A large crowd of protesters gather in a city square in Philadelphia.

Matthew Hatcher / AFP / Getty

People participate in a “No Kings” national day of protest in Philadelphia on October 18, 2025.

Sign-carrying protesters line a busy street in a Florida town.

Jonah Hinebaugh / Naples Daily News / USA Today / Reuters

Thousands of protesters line the sidewalk during the “No Kings” protest at the Collier County Government Center in Naples, Florida, on October 18, 2025.

A crowd of protesters watches the Mayor of Boston speak at a podium, holding signs, including one that reads

Brian Snyder / Reuters

Michelle Wu, the mayor of Boston, speaks during the “No Kings” protest in Boston on October 18, 2025.

A large group of protesters gather in a park in Boston, holding signs, including one featuring a stylized cartoon frog and the word

Joseph Prezioso / AFP / Getty

People participate in the “No Kings” national day of protest in Boston.

Protesters gather on all four corners of a busy intersection.

Paul Kuehnel / York Daily Record / USA Today / Reuters

A view of the “No Kings” protest at Continental Square in York, Pennsylvania, on October 18, 2025

A large crowd of protesters hold signs in a street in Oklahoma City.

Bryan Terry / The Oklahoman / USA Today / Reuters

People in Oklahoma City gather for the “No Kings” protest on October 18, 2025, at City Hall Park.

Protesters in a Michigan town stand along a sidewalk holding signs.

Jeff Kowalsky / AFP / Getty

In Howell, Michigan, people participate in a “No Kings” national day of protest on October 18, 2025.

A large group of protesters hold signs in a Chicago park.

Nam Y. Huh / AP

People gather during a “No Kings” protest on October 18, 2025, in Chicago.

Protesters march and carry signs, including several wearing inflatable animal costumes.

Jovanny Hernandez / Milwaukee Journal Sentinel / USA Today / Reuters

Demonstrators in Milwaukee gather at Cathedral Square Park for the second “No Kings” protest on October 18, 2025.

A large crowd of protesters gather in a park in Portland, Oregon, seen from above.

John Rudoff / Reuters

People in Portland, Oregon, gather for a “No Kings” protest against U.S. President Donald Trump’s policies, at Waterfront Park, on October 18, 2025.

This photo essay originally misidentified the city where one of the protests took place.

Postman which I thought worked locally on my computer, is down

Hacker News
status.postman.com
2025-10-20 16:40:40
Comments...
Original Article

Update - We have seen significant recovery of the features. We are continuing to monitor for any further issues.
Oct 20 , 2025 - 08:20 PDT

Monitoring - Majority of the services have recovered. We are continuing to monitor.
Oct 20 , 2025 - 06:21 PDT

Update - We are seeing significant recovery and are continuing to monitor.
Oct 20 , 2025 - 05:56 PDT

Update - We are seeing significant recovery and are continuing to monitor.
Oct 20 , 2025 - 05:52 PDT

Identified - We are currently experiencing significantly increased error rates which is impacting functionality on Postman. There is a major issue with our underlying cloud provider and we are working with them to restore full access as quickly as possible.
Oct 20 , 2025 - 05:39 PDT

Postman Platform on Desktop ? Operational

Postman Platform on Browser ? Operational

Postman Login ? Operational

Postman Monitors ? Operational

Postman Mocks ? Operational

Postman API ? Operational

Postman API Network (API Explore) ? Operational

Postman Search ? Operational

Public Collection Documentation ? Operational

Postman Learning Center and Documentation ? Operational

Postman Support and Community Forum ? Operational

Postman Integrations ? Operational

Postman Interceptor Operational

Marketing Website ? Operational

Postman VS Code extension ? Operational

API Builder ? Operational

API Specifications ? Operational

Collection Runner Operational

Postman Platform on Desktop ? Operational

Postman Platform on Browser ? Operational

Postman Login ? Operational

Postman Mocks ? Operational

Postman API ? Operational

Postman Search ? Operational

Operational

Degraded Performance

Partial Outage

Major Outage

Maintenance

Past Incidents

Oct 20 , 2025

Unresolved incident: Users may encounter issues with accessing or using Postman.

Oct 19 , 2025

No incidents reported.

Oct 18 , 2025

No incidents reported.

Oct 17 , 2025

No incidents reported.

Oct 16 , 2025

No incidents reported.

Oct 15 , 2025

No incidents reported.

Oct 14 , 2025

No incidents reported.

Oct 13 , 2025

No incidents reported.

Oct 12 , 2025

Completed - The scheduled maintenance has been completed.
Oct 12 , 08:52 PDT

Verifying - Verification is currently underway for the maintenance items.
Oct 12 , 08:49 PDT

In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Oct 12 , 08:00 PDT

Scheduled - We will be performing a planned database upgrade in the US data centre to ensure the latest version and security patches are applied. During this time, users with data in US region may experience temporary service disruptions while using Postman across all platforms.

Maintenance is expected to last approximately 2 hours and we will provide regular updates on the progress. Please note, this downtime only affects users with data in US region. Services for users in our EU data centre will remain unaffected.

If you have any questions, please don't hesitate to reach out to our support team at support@postman.com .
We appreciate your patience and understanding during this maintenance window.


Oct 9 , 06:30 PDT

Oct 11 , 2025

No incidents reported.

Oct 10 , 2025

Completed - Scheduled database maintenance has been successfully completed.
Oct 10 , 21:40 PDT

Verifying - We are in the process of verifying the maintenance items.
Oct 10 , 21:23 PDT

In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Oct 10 , 20:30 PDT

Scheduled - We have a scheduled database maintenance coming up to enable upgrades to the service. Users may experience intermittent service disruptions when doing collection runs via Collection Runner or Postman CLI. Postman Monitors and Scheduled Collections will remain unaffected. We expect to complete maintenance within two hours and will provide frequent updates throughout the process.

Contact our support team at support@postman.com with any questions.
We appreciate your patience during this maintenance window.


Oct 7 , 09:15 PDT

Oct 9 , 2025

No incidents reported.

Oct 8 , 2025

No incidents reported.

Oct 7 , 2025

No incidents reported.

Oct 6 , 2025

No incidents reported.

Show HN: I created a cross-platform GUI for the JJ VCS (Git compatible)

Hacker News
judojj.com
2025-10-20 16:35:19
Comments...
Original Article

The full-featured GUI for JJ VCS

(works with Git repos too!)

Judo application screenshot

Operation Log

Restore your repo to any point in time with the Operation Log. Undo and redo any change.

Combined diffs

View combined diffs of multiple commits, or the diff between commits

Apply or revert hunks

Apply or revert hunks of any diff, files, commits, or even multiple commits at once

Custom revsets

Use custom revsets to select which commits are shown. Filter by descriptions, authors, ancestry, and more.

Drag and drop rebase

Drag and drop rebase

Advanced operations

Duplicate, split, abandon, revert, absorb, squash, and more

Bookmark management

Keep your bookmarks managed

Hackers Say They Have Personal Data of Thousands of NSA and Other Government Officials

403 Media
www.404media.co
2025-10-20 16:16:50
The same hackers who doxed hundreds of DHS, ICE, and FBI officials now say they have the personal data of tens of thousands of officials from the NSA, Air Force, Defense Intelligence Agency, and many other agencies....
Original Article

A hacking group that recently doxed hundreds of government officials , including from the Department of Homeland Security (DHS) and Immigration and Customs Enforcement (ICE), has now built dossiers on tens of thousands of U.S. government officials, including NSA employees, a member of the group told 404 Media. The member said the group did this by digging through its caches of stolen Salesforce customer data. The person provided 404 Media with samples of this information, which 404 Media was able to corroborate.

As well as NSA officials, the person sent 404 Media personal data on officials from the Defense Intelligence Agency (DIA), the Federal Trade Commission (FTC), Federal Aviation Administration (FAA), Centers for Disease Control and Prevention (CDC), the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF), members of the Air Force, and several other agencies.

The news comes after the Telegram channel belonging to the group, called Scattered LAPSUS$ Hunters, went down following the mass doxing of DHS officials and the apparent doxing of a specific NSA official. It also provides more clarity on what sort of data may have been stolen from Salesforce’s customers in a series of breaches earlier this year , and which Scattered LAPSUS$ Hunters has attempted to extort Salesforce over.

💡

Do you know anything else about this breach? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

“That’s how we’re pulling thousands of gov [government] employee records,” the member told 404 Media. “There were 2000+ more records,” they said, referring to the personal data of NSA officials. In total, they said the group has private data on more than 22,000 government officials.

Scattered LAPSUS$ Hunters’ name is an amalgamation of other infamous hacking groups—Scattered Spider, LAPSUS$, and ShinyHunters. They all come from the overarching online phenomenon known as the Com. On Discord servers and Telegram channels, thousands of scammers, hackers, fraudsters, gamers, or just people hanging out congregate, hack targets big and small, and beef with one another. The Com has given birth to a number of loose-knit but prolific hacking groups, including those behind massive breaches like MGM Resorts , and normalized extreme physical violence between cybercriminals and their victims.

On Thursday, 404 Media reported Scattered LAPSUS$ Hunters had posted the names and personal information of hundreds of government officials from DHS, ICE, the FBI, and Department of Justice. 404 Media verified portions of that data and found the dox sometimes included peoples’ residential addresses. The group posted the dox along with messages such as “I want my MONEY MEXICO,” a reference to DHS’s unsubstantiated claim that Mexican cartels are offering thousands of dollars for dox on agents.

Hackers Dox Hundreds of DHS, ICE, FBI, and DOJ Officials

Scattered LAPSUS$ Hunters—one of the latest amalgamations of typically young, reckless, and English-speaking hackers—posted the apparent phone numbers and addresses of hundreds of government officials, including nearly 700 from DHS.

404 Media Joseph Cox

After publication of that article, a member of Scattered LAPSUS$ Hunters reached out to 404 Media. To prove their affiliation with the group, they sent a message signed with the ShinyHunters PGP key with the text “Verification for Joseph Cox” and the date. PGP keys can be used to encrypt or sign messages to prove they’re coming from a specific person , or at least someone who holds that key, which are typically kept private.

They sent 404 Media personal data related to DIA, FTC, FAA, CDC, ATF and Air Force members. They also sent personal information on officials from the Food and Drug Administration (FDA), Health and Human Services (HHS), and the State Department. 404 Media verified parts of the data by comparing them to previously breached data collected by cybersecurity company District 4 Labs . It showed that many parts of the private information did relate to government officials with the same name, agency, and phone number.

Except the earlier DHS and DOJ data, the hackers don’t appear to have posted this more wide ranging data publicly. Most of those agencies did not immediately respond to a request for comment. The FTC and Air Force declined to comment. DHS has not replied to multiple requests for comment sent since Thursday. Neither has Salesforce.

The member said the personal data of government officials “originates from Salesforce breaches.” This summer Scattered LAPSUS$ Hunters stole a wealth of data from companies that were using Salesforce tech , with the group claiming it obtained more than a billion records . Customers included Disney/Hulu, FedEx, Toyota, UPS, and many more. The hackers did this by social engineering victims and tricking them to connect to a fraudulent version of a Salesforce app . The hackers tried to extort Salesforce, threatening to release the data on a public website, and Salesforce told clients it won’t pay the ransom, Bloomberg reported .

On Friday the member said the group was done with extorting Salesforce. But they continued to build dossiers on government officials. Before the dump of DHS, ICE, and FBI dox, the group posted the alleged dox of an NSA official to their Telegram group.

Over the weekend that channel went down and the member claimed the group’s server was taken “offline, presumably seized.”

The doxing of the officials “must’ve really triggered it, I think it’s because of the NSA dox,” the member told 404 Media.

Matthew Gault contributed reporting.

About the author

Joseph is an award-winning investigative journalist focused on generating impact. His work has triggered hundreds of millions of dollars worth of fines, shut down tech companies, and much more.

Joseph Cox

In favor of comments, what does empirical engineering says? (2022)

Lobsters
sadraskol.com
2025-10-20 16:12:47
Comments...
Original Article

Okay so what's up with comments? Devs seem to hate them. Infamous Cleanest Coders do not write comments, right?

I disagree. I think comments are great. A good comment can make my day wonderful. Look at all these beauties !

But this is only opinion against opinion. Even if I'm not the only one advocating for writing comments , it's still unfounded opinions. Can we find evidence that commenting is good?

Quick Overview

Thankfully for me, there's a lot of research on the subject. A quick search in Empirical Software Engineering yields 44 pages of results!

Most of the articles are positive towards commenting code. This one states that:

Existing studies show that code comments help developers comprehend programs and reduce additional time spent on reading and navigating source code.

Unfortunately, I have to pay 40 euros to have access to it... This is extortion. There's an open-access article with a lot of references. I use it as a starting point.

The Papers

Firstly, i'll skim through all the papers. Then i'll devise a documented opinion on the matter.

Documentation Reuse: Hot or Not? An Empirical Study [1]

Documentation is crucial for software development. It helps understand other people's code, without reading it. This is critical for libraries. Imagine having to choose between a well-documented library and some library without documentation.

Also, engineers prefer writing the documentation close to the code. So comments are the most obvious way to write documentation.

These are great but the studies' core subject is comment reuse in object-oriented languages. This is not what i'm looking for.

This paper is an excellent pointer to other references though.

An Empirical Investigation on Documentation Usage Patterns in Maintenance Tasks [2]

This article is promising at first: analyzing the process of how developers analyze code. It is a weak article for me: it only evaluates students.

It shows that they went back and forth between the code and the documentation. That's it...

Quality Analysis of Source Code Comments [3]

This paper suggests that 2 types of comments are bad:

  • short comments that can be extracted as methods
  • comments with not enough information: what does it mean?

It has a great survey question section to see what was evaluated:

Survey 1:

  1. Does the comment provide additional information from the method name?
  2. The method name is meaningful?

Survey 2: What to do with the comment?

  1. Remove it, and do not change the code
  2. Remove it, extract the following code in a meaningful method
  3. Keep the comment

Survey 3:

  1. The comment contains information that can not be extracted from the following code
  2. The comment contains only information that can be extracted from the following code

This paper does not discuss much how this taxonomy helps understand comments. Which is quite disappointing... They used a magical metric ( c_coeff ) to know if comments were good or not.

I don't know what to think about this paper. I either read it too quickly or it's not interesting, but I couldn't extract useful information.

The Effect of Modularization and Comments on Program Comprehension [4]

This study is good. 48 experienced programmers are given a random piece of code. The code is either commented or not. The participants have to answer 20 questions about the code. If the code had comments, participants answers more correctly.

The experiment is an easy task. It's the best experiment showing that commented code is better than uncommented code.

This experiment was reproduced with concording results in a later study

Note: according to the last paper's surveys, comments in these studies are:

  1. Provide more information than the method names
  2. Only contain information that can be extracted from the following code

This is interesting because one might think only external context could be useful.

On the Comprehension of Program Comprehension [5]

Combined observation of 28 professional devs and survey of 1477 respondents. This study is on program comprehension. Many studies already tackle the subject before but with smaller samples.

The result of the study: developers use informal tools to get knowledge. They prefer code comments, commit messages, emails, tickets, etc. The appropriate knowledge is usually not documented.

I haven't learned much from this paper :shrug:

A Study of the Documentation Essential to Software Maintenance [6]

This study is pretty good. Its scope is limited which makes it simple to read and understand. It surveys the artifact used by developers to maintain software.

It would be great if this study was reproduced. It does not prove much but: devs use all sorts of documentation.

The four most important artifacts are:

  1. Source code
  2. Comments
  3. Data model
  4. Requirement Description

It does not say which is best. It does say one thing though: documentation is key for maintenance.

Conclusions

My conclusion is: not much can be said about comments . Most studies suffer from flaws: either unreproduced results ([6], [5], etc.), old studies ([4]), small or non-representative samples ([2]), contradicting results ([3] and [4]), etc. None is strong and all are open to interpretation.

It does not mean that nothing can be said. Every paper shows that comments are useful . They are part of documentation which helps comprehension .

All the following sentences are not supported by science:

  • Code comments are useless
  • There's a better way of writing comments
  • Comments get outdated and make software difficult to maintain
  • A comment that could be a method is a bad smell

Nevertheless, the studies are pretty clear on one thing: comments are good. They are a form of documentation which are the closest to the code.

The studies do not say what kind of comments are the most useful. Until the facts are settled, you'll still have to discuss it with your team. Please do. Discuss comments with your teammates. Agree on how to write comments to make sure you do write them.

Take care.

Anthropic and Cursor Spend This Much on Amazon Web Services

Hacker News
www.wheresyoured.at
2025-10-20 16:05:30
Comments...
Original Article

So, I originally planned for this to be on my premium newsletter, but decided it was better to publish on my free one so that you could all enjoy it. If you liked it, please consider subscribing to support my work. Here’s $10 off the first year of annual .

I’ve also recorded an episode about this on my podcast Better Offline ( RSS feed , Apple , Spotify , iHeartRadio - it may take 20-30 minutes after receiving this for the episode to drop), it’s a little different but both handle the same information, just subscribe and it'll pop up.


Over the last two years I have written again and again about the ruinous costs of running generative AI services, and today I’m coming to you with real proof.

Based on discussions with sources with direct knowledge of their AWS billing, I am able to disclose the amounts that AI firms are spending, specifically Anthropic and AI coding company Cursor, its largest customer .

I can exclusively reveal today Anthropic’s spending on Amazon Web Services for the entirety of 2024, and for every month in 2025 up until September, and that that Anthropic’s spend on compute far exceeds that previously reported.

Furthermore, I can confirm that through September, Anthropic has spent more than 100% of its estimated revenue (based on reporting in the last year) on Amazon Web Services, spending $2.66 billion on compute on an estimated $2.55 billion in revenue.

Additionally, Cursor’s Amazon Web Services bills more than doubled from $6.2 million in May 2025 to $12.6 million in June 2025, exacerbating a cash crunch that began when Anthropic introduced Priority Service Tiers, an aggressive rent-seeking measure that begun what I call the Subprime AI Crisis , where model providers begin jacking up the prices on their previously subsidized rates.

Although Cursor obtains the majority of its compute from Anthropic — with AWS contributing a relatively small amount, and likely also taking care of other parts of its business — the data seen reveals an overall direction of travel, where the costs of compute only keep on going up .

Let’s get to it.

Some Initial Important Details

  • I do not have all the answers! I am going to do my best to go through the information I’ve obtained and give you a thorough review and analysis. This information provides a revealing — though incomplete — insight into the costs of running Anthropic and Cursor, but does not include other costs, like salaries and compute obtained from other providers. I cannot tell you (and do not have insight into) Anthropic’s actual private moves. Any conclusions or speculation I make in this article will be based on my interpretations of the information I’ve received, as well as other publicly-available information.
  • I have used estimates of Anthropic’s revenue based on reporting across the last ten months. Any estimates I make are detailed and they are brief.
  • These costs are inclusive of every product bought on Amazon Web Services, including EC2, storage and database services (as well as literally everything else they pay for).
  • Anthropic works with both Amazon Web Services and Google Cloud for compute. I do not have any information about its Google Cloud spend.
    • The reason I bring this up is that Anthropic’s revenue is already being eaten up by its AWS spend. It’s likely billions more in the hole from Google Cloud and other operational expenses.
  • I have confirmed with sources that every single number I give around Anthropic and Cursor’s AWS spend is the final cash paid to Amazon after any discounts or credits.
  • While I cannot disclose the identity of my source, I am 100% confident in these numbers, and have verified their veracity with other sources.

Anthropic’s Compute Costs Are Likely Much Higher Than Reported — $1.35 Billion in 2024 on AWS Alone

In February of this year, The information reported that Anthropic burned $5.6 billion in 2024, and made somewhere between $400 million and $600 million in revenue:

It’s not publicly known how much revenue Anthropic generated in 2024, although its monthly revenue rose to about $80 million by the end of the year, compared to around $8 million at the start. That suggests full-year revenue in the $400 million to $600 million range.

…Anthropic told investors it expects to burn $3 billion this year, substantially less than last year, when it burned $5.6 billion. Last year’s cash burn was nearly $3 billion more than Anthropic had previously projected. That’s likely due to the fact that more than half of the cash burn came from a one-off payment to access the data centers that power its technology, according to one of the people who viewed the pitch.

While I don’t know about prepayment for services, I can confirm from a source with direct knowledge of billing that Anthropic spent $1.35 billion on Amazon Web Services in 2024, and has already spent $2.66 billion on Amazon Web Services through the end of September.

Assuming that Anthropic made $600 million in revenue, this means that Anthropic spent $6.2 billion in 2024, leaving $4.85 billion in costs unaccounted for.

The Information’s piece also brings up another point:

The costs to develop AI models accounted for a major portion of Anthropic’s expenses last year. The company spent $1.5 billion on servers for training AI models. OpenAI was on track to spend as much as $3 billion on training costs last year, though that figure includes additional expenses like paying for data.

Before I go any further, I want to be clear that The Information’s reporting is sound, and I trust that their source (I have no idea who they are or what information was provided) was operating in good faith with good data.

However, Anthropic is telling people it spent $1.5 billion on just training when it has an Amazon Web Services bill of $1.35 billion, which heavily suggests that its actual compute costs are significantly higher than we thought, because, to quote SemiAnalysis, “ a large share of Anthropic’s spending is going to Google Cloud .”

I am guessing, because I do not know, but with $4.85 billion of other expenses to account for, it’s reasonable to believe Anthropic spent an amount similar to its AWS spend on Google Cloud. I do not have any information to confirm this, but given the discrepancies mentioned above, this is an explanation that makes sense.

I also will add that there is some sort of undisclosed cut that Amazon gets of Anthropic’s revenue, though it’s unclear how much. According to The Information , “Anthropic previously told some investors it paid a substantially higher percentage to Amazon [than OpenAI’s 20% revenue share with Microsoft] when companies purchase Anthropic models through Amazon.”

I cannot confirm whether a similar revenue share agreement exists between Anthropic and Google.

This also makes me wonder exactly where Anthropic’s money is going.

Where Is Anthropic’s Money Going?

Anthropic has, based on what I can find, raised $32 billion in the last two years, starting out 2023 with a $4 billion investment from Amazon from September 2023 (bringing the total to $37.5 billion), where Amazon was named its “primary cloud provider” nearly eight months after Anthropic announced Google was Anthropic’s “cloud provider.,” which Google responded to a month later by investing another $2 billion on October 27 2023 , “involving a $500 million upfront investment and an additional $1.5 billion to be invested over time,” bringing its total funding from 2023 to $6 billion.

In 2024, it would raise several more rounds — one in January for $750 million, another in March for $884.1 million, another in May for $452.3 million, and another $4 billion from Amazon in November 2024 , which also saw it name AWS as Anthropic’s “primary cloud and training partner,” bringing its 2024 funding total to $6 billion.

In 2025 so far, it’s raised a $1 billion round from Google , a $3.5 billion venture round in March, opened a $2.5 billion credit facility in May, and completed a $13 billion venture round in September, valuing the company at $183 billion . This brings its total 2025 funding to $20 billion.

While I do not have Anthropic’s 2023 numbers, its spend on AWS in 2024 — around $1.35 billion — leaves (as I’ve mentioned) $4.85 billion in costs that are unaccounted for. The Information reports that costs for Anthropic’s 521 research and development staff reached $160 million in 2024 , leaving 394 other employees unaccounted for (for 915 employees total), and also adding that Anthropic expects its headcount to increase to 1900 people by the end of 2025.

The Information also adds that Anthropic “expects to stop burning cash in 2027.”

This leaves two unanswered questions:

  • Where is the rest of Anthropic’s money going?
  • How will it “stop burning cash” when its operational costs explode as its revenue increases?

An optimist might argue that Anthropic is just growing its pile of cash so it’s got a warchest to burn through in the future, but I have my doubts. In a memo revealed by WIRED , Anthropic CEO Dario Amodei stated that “if [Anthropic wanted] to stay on the frontier, [it would] gain a very large benefit from having access to this capital,” with “this capital” referring to money from the Middle East.

Anthropic and Amodei’s sudden willingness to take large swaths of capital from the Gulf States does not suggest that it’s not at least a little desperate for capital, especially given Anthropic has, according to Bloomberg , “recently held early funding talks with Abu Dhabi-based investment firm MGX” a month after raising $13 billion .

In my opinion — and this is just my gut instinct — I believe that it is either significantly more expensive to run Anthropic than we know, or Anthropic’s leaked (and stated) revenue numbers are worse than we believe. I do not know one way or another, and will only report what I know.

How Much Did Anthropic and Cursor Spend On Amazon Web Services In 2025?

So, I’m going to do this a little differently than you’d expect, in that I’m going to lay out how much these companies spent, and draw throughlines from that spend to its reported revenue numbers and product announcements or events that may have caused its compute costs to increase.

I’ve only got Cursor’s numbers from January through September 2025, but I have Anthropic’s AWS spend for both the entirety of 2024 and through September 2025.

What Does “Annualized” Mean?

So, this term is one of the most abused terms in the world of software, but in this case , I am sticking to the idea that it means “month times 12.” So, if a company made $10m in January, you would say that its annualized revenue is $120m. Obviously, there’s a lot of (when you think about it, really obvious) problems with this kind of reporting — and thus, you only ever see it when it comes to pre-IPO firms — but that’s besides the point.

I give you this explanation because, when contrasting Anthropic’s AWS spend with its revenues, I’ve had to work back from whatever annualized revenues were reported for that month.

Anthropic’s Amazon Web Services Spend In 2024 - $1.359 Billion - Estimated Revenue $400 Million to $600 Million

Anthropic’s 2024 revenues are a little bit of a mystery, but, as mentioned above, The Information says it might be between $400 million and $600 million.

Here’s its monthly AWS spend.

  • January 2024 - $52.9 million
  • February 2024 - $60.9 million
  • March 2024 - $74.3 million
  • April 2024 - $101.1 million
  • May 2024 - $100.1 million
  • June 2024 - $101.8 million
  • July 2024 - $118.9 million
  • August 2024 - $128.8 million
  • September 2024 - $127.8 million
  • October 2024 - $169.6 million
  • November 2024 - $146.5 million
  • December 2024 - $176.1 million

Analysis: Anthropic Spent At Least 126.5% of Its 2024 Revenue On Amazon Web Services In 2024

I’m gonna be nice here and say that Anthropic made $600 million in 2024 — the higher end of The Information’s reporting — meaning that it spent 126.5% of its revenue ($1.359 billion) on Amazon Web Services.

Anthropic’s Amazon Web Services Spend In 2025 Through September 2025 - $2.66 Billion - Estimated Revenue Through September $2.55 Billion - 104% Of Revenue Spent on AWS

Thanks to my own analysis and reporting from outlets like The Information and Reuters, we have a pretty good idea of Anthropic’s revenues for much of the year. That said, July, August, and September get a little weirder, because we’re relying on “almosts” and “approachings,” as I’ll explain as we go.

I’m also gonna do an analysis on a month-by-month basis, because it’s necessary to evaluate these numbers in context.

January 2025 - $188.5 million In AWS Spend, $72.91 or $83 Million In Revenue - 227% Of Revenue Spent on AWS

In this month, Anthropic’s reported revenue was somewhere from $875 million to $1 billion annualized , meaning either $72.91 million or $83 million for the month of January.

February 2025 - $181.2 million in AWS Spend, $116 Million In Revenue - 156% Of Revenue Spent On AWS - 181% Of Revenue Spent On AWS

In February, as reported by The Information , Anthropic hit $1.4 billion annualized revenue, or around $116 million each month.

March 2025 - $240.3 million in AWS Spend - $166 Million In Revenue - 144% Of Revenue Spent On AWS - Launch of Claude Sonnet 3.7 & Claude Code Research Preview (February 24)

In March, as reported by Reuters , Anthropic hit $2 billion in annualized revenue, or $166 million in revenue.

Because February is a short month, and the launch took place on February 24 2025, I’m considering the launches of Claude 3.7 Sonnet and Claude Code’s research preview to be a cost burden in the month of March.

And man, what a burden! Costs increased by $59.1 million, primarily across compute categories, but with a large ($2 million since January) increase in monthly costs for S3 storage.

April 2025 - $221.6 million in AWS Spend - $204 Million In Revenue - 108% Of Revenue Spent On AWS

I estimate, based on a 22.4% compound growth rate, that Anthropic hit around $2.44 billion in annualized revenue in April, or $204 million in revenue.

Interestingly, this was the month where Anthropic launched its $100 and $200 dollar a month “Max” plan s, and it doesn’t seem to have dramatically increased its costs. Then again, Max is also the gateway to things like Claude Code, which I’ll get to shortly.

May 2025 - $286.7 million in AWS Spend - $250 Million In Revenue - 114% Of Revenue Spent On AWS - Sonnet 4, Opus 4, General Availability Of Claude Code (May 22) Service Tiers (May 30)

In May, as reported by CNBC , Anthropic hit $3 billion in annualized revenue, or $250 million in monthly average revenue.

This was a big month for Anthropic, with two huge launches on May 22 2025 — its new, “more powerful” models Claude Sonnet and Opus 4, as well as the general availability of its AI coding environment Claude Code.

Eight days later, on May 30 2025, a page on Anthropic's API documentation appeared for the first time: " Service Tiers ":

Different tiers of service allow you to balance availability, performance, and predictable costs based on your application’s needs.

We offer three service tiers:

- Priority Tier: Best for workflows deployed in production where time, availability, and predictable pricing are important

Standard: Best for bursty traffic, or for when you’re trying a new idea

Batch: Best for asynchronous workflows which can wait or benefit from being outside your normal capacity

Accessing the priority tier requires you to make an up-front commitment to Anthropic , and said commitment is based on a number of months (1, 3, 6 or 12) and the number of input and output tokens you estimate you will use each minute.

What’s a Priority Tier? Why Is It Significant?

As I’ll get into in my June analysis, Anthropic’s Service Tiers exist specifically for it to “guarantee” your company won’t face rate limits or any other service interruptions, requiring a minimum spend, minimum token throughput, and for you to pay higher rates when writing to the cache — which is, as I’ll explain, a big part of running an AI coding product like Cursor.

Now, the jump in costs — $65.1 million or so between April and May — likely comes as a result of the final training for Sonnet and Opus 4, as well as, I imagine, some sort of testing to make sure Claude Code was ready to go.

June 2025 - $321.4 million in AWS Spend - $333 Million In Revenue - 96.5% Of Revenue Spent On AWS - Anthropic Cashes In On Service Tier Tolls That Add An Increased Charge For Prompt Caching, Directly Targeting Companies Like Cursor

In June, as reported by The Information, Anthropic hit $4 billion in annualized revenue, or $333 million.

Anthropic’s revenue spiked by $83 million this month, and so did its costs by $34.7 million.

Anthropic Started The Subprime AI Crisis In June 2025, Increasing Costs On Its Largest Customer, Doubling Its AWS Spend In A Month

I have, for a while, talked about the Subprime AI Crisis , where big tech and companies like Anthropic, after offering subsidized pricing to entice in customers, raise the rates on their customers to start covering more of their costs, leading to a cascade where businesses are forced to raise their prices to handle their new, exploding costs.

And I was god damn right. Or, at least, it sure looks like I am. I’m hedging, forgive me. I cannot say for certain, but I see a pattern.

It’s likely the June 2025 spike in revenue came from the introduction of service tiers, which specifically target prompt caching, increasing the amount of tokens you’re charged for as an enterprise customer based on the term of the contract, and your forecast usage.

Per my reporting in July :

You see, Anthropic specifically notes on its "service tiers" page that requests at the priority tier are "prioritized over all other requests to Anthropic," a rent-seeking measure that effectively means a company must either:

- Commit to at least a month, though likely 3-12 months of specific levels of input and output tokens a minute, based on what they believe they will use in the future, regardless of whether they do.

- Accept that access to Anthropic models will be slower at some point, in some way that Anthropic can't guarantee.Furthermore, the way that Anthropic is charging almost feels intentionally built to fuck over any coding startup that would use its service. Per the service tier page, Anthropic charges 1.25 for every time you write a token to the cache with a 5 minute TTL — or 2 tokens if you have a 1 hour TTL — and a longer cache is effectively essential for any background task where an agent will be working for more than 5 minutes, such as restructuring a particularly complex series of code, you know, the exact things that Cursor is well-known and marketed to do.

Furthermore, the longer something is in the cache, the better autocomplete suggestions for your code will be. It's also important to remember you're, at some point, caching the prompts themselves — so the instructions of what you want Cursor to do, meaning that the more complex the operation, the more expensive it'll now be for Cursor to provide the service with reasonable uptime.

Cursor, as Anthropic’s largest client (the second largest being Github Copilot), represents a material part of its revenue, and its surging popularity meant it was sending more and more revenue Anthropic’s way.  Anysphere, the company that develops Cursor, hit $500 million annualized revenue ($41.6 million) by the end of May , which Anthropic chose to celebrate by increasing its costs.

On June 16 2025, Cursor launched a $200-a-month “Ultra” plan , as well as dramatic changes to its $20-a-month Pro pricing that, instead of offering 500 “fast” responses using models from Anthropic and OpenAI, now effectively provided you with “at least” whatever you paid a month (so $20-a-month got at least $20 of credit), massively increasing the costs for users , with one calling the changes a “rug pull” after spending $71 in a single day .

As I’ll get to later in the piece, Cursor’s costs exploded from $6.19 million in May 2025 to $12.67 million in June 2025, and I believe this is a direct result of Anthropic’s sudden and aggressive cost increases.

Similarly, Replit, another AI coding startup, moved to “Effort-Based Pricing” on June 18 2025 . I have not got any information around its AWS spend.

I’ll get into this a bit later, but I find this whole situation disgusting.

July 2025 $323.2 million in AWS Spend - $416 Million In Revenue - 77.7% Of Revenue Spent On AWS

In July, as reported by Bloomberg , Anthropic hit $5 billion in annualized revenue, or $416 million.

While July wasn’t a huge month for announcements, it was allegedly the month that Claude Code was generating “nearly $400 million in annualized revenue,” or $33.3 million ( according to The Information , who says Anthropic was “approaching” $5 billion in annualized revenue - which likely means LESS than that - but I’m going to go with the full $5 billion annualized for sake of fairness.

There’s roughly an $83 million bump in Anthropic’s revenue between June and July 2025, and I think Claude Code and its new rates are a big part of it. What’s fascinating is that cloud costs didn’t increase too much — by only $1.8 million, to be specific.

August 2025 - $383.7 million in AWS Spend - $416 Million In Revenue - 92% Of Revenue Spent On AWS

In August, according to Anthropic, its run-rate “ reached over $5 billion ,” or in or around $416 million. I am not giving it anything more than $5 billion, especially considering in July Bloomberg’s reporting said “about $5 billion.”

Costs grew by $60.5 this month, potentially due to the launch of Claude Opus 4.1 , Anthropic’s more aggressively expensive model, though revenues do not appear to have grown much along the way.

Yet what’s very interesting is that Anthropic — starting August 28 — launched weekly rate limits on its Claude Pro and Max plans. I wonder why?

September 2025 - $518.9 million in AWS Spend - $583 Million In Revenue - 88.9% Of Revenue Spent On AWS

Oh fuck! Look at that massive cost explosion!

Anyway, according to Reuters, Anthropic’s run rate is “approaching $7 billion” in October , and for the sake of fairness , I am going to just say it has $7 billion annualized, though I believe this number to be lower. “Approaching” can mean a lot of different things — $6.1 billion, $6.5 billion — and because I already anticipate a lot of accusations of “FUD,” I’m going to err on the side of generosity.

If we assume a $6.5 billion annualized rate, that would make this month’s revenue $541.6 million, or 95.8% of its AWS spend.

Nevertheless, Anthropic’s costs exploded in the space of a month by $135.2 million (35%) - likely due to the fact that users, as I reported in mid-July, were costing it thousands or tens of thousands of dollars in compute , a problem it still faces to this day, with VibeRank showing a user currently spending $51,291 in a calendar month on a $200-a-month subscription .

If there were other costs, they likely had something to do with the training runs for the launches of Sonnet 4.5 on September 29 2025 and Haiku 4.5 in October 2025 .

Anthropic’s Monthly AWS Costs Have Increased By 174% Since January - And With Its Potential Google Cloud Spend and Massive Staff, Anthropic Is Burning Billions In 2025

While these costs only speak to one part of its cloud stack — Anthropic has an unknowable amount of cloud spend on Google Cloud, and the data I have only covers AWS — it is simply remarkable how much this company spends on AWS, and how rapidly its costs seem to escalate as it grows.

Though things improved slightly over time — in that Anthropic is no longer burning over 200% of its revenue on AWS alone — these costs have still dramatically escalated, and done so in an aggressive and arbitrary manner.

Anthropic’s AWS Costs Increase Linearly With Revenue, Consuming The Majority Of Each Dollar Anthropic Makes - As A Reminder, It Also Spends Hundreds Of Millions Or Billions On Google Cloud Too

So, I wanted to visualize this part of the story, because I think it’s important to see the various different scenarios.

An Estimate of Anthropic’s Potential Cloud Compute Spend Through September

THE NUMBERS I AM USING ARE ESTIMATES CALCULATED BASED ON 25%, 50% and 100% OF THE AMOUNTS THAT ANTHROPIC HAS SPENT ON AMAZON WEB SERVICES THROUGH SEPTEMBER.

I apologize for all the noise, I just want it to be crystal clear what you see next.

As you can see, all it takes is for Anthropic to spend (I am estimating) around 25% of its Amazon Web Services bills (for a total of around $3.33 billion in compute costs through the end of September) to savage any and all revenue ($2.55 billion) it’s making.

Assuming Anthropic spends half of its  AWS spend on Google Cloud, this number climbs to $3.99 billion, and if you assume - and to be clear, this is an estimate - that it spends around the same on both Google Cloud and AWS, Anthropic has spent $5.3 billion on compute through the end of September.

I can’t tell you which it is, just that we know for certain that Anthropic is spending money on Google Cloud, and because Google owns 14% of the company — rivalling estimates saying Amazon owns around 15-19% — it’s fair to assume that there’s a significant spend.

Anthropic’s Costs Are Out Of Control, Consistently And Aggressively Outpacing Revenue - And Amazon’s Revenue from Anthropic Of $2.66 Billion Is 2.5% Of Its 2025 Capex

I have sat with these numbers for a great deal of time, and I can’t find any evidence that Anthropic has any path to profitability outside of aggressively increasing the prices on their customers to the point that its services will become untenable for consumers and enterprise customers alike.

As you can see from these estimated and reported revenues, Anthropic’s AWS costs appear to increase in a near-linear fashion with its revenues, meaning that the current pricing — including rent-seeking measures like Priority Service Tiers — isn’t working to meet the burden of its costs.

We do not know its Google Cloud spend, but I’d be shocked if it was anything less than 50% of its AWS bill. If that’s the case, Anthropic is in real trouble - the cost of the services underlying its business increase the more money they make.

It’s becoming increasingly apparent that Large Language Models are not a profitable business. While I cannot speak to Amazon Web Services’ actual costs, it’s making $2.66 billion from Anthropic, which is the second largest foundation model company in the world.

Is that really worth $105 billion in capital expenditures ? Is that really worth building a giant 1200 acre data center in Indiana with 2.2GW of electricity?

What’s the plan, exactly? Let Anthropic burn money for the foreseeable future until it dies, and then pick up the pieces? Wait until Wall Street gets mad at you and then pull the plug?

Who knows.

But let’s change gears and talk about Cursor — Anthropic’s largest client and, at this point, a victim of circumstance.

Cursor’s Amazon Web Services Spend In 2025 Through September 2025 - $69.99 Million

An Important Note About Cursor’s Compute Spend

Amazon sells Anthropic’s models through Amazon Bedrock , and I believe that AI startups are compelled to spend some of their AI model compute costs through Amazon Web Services. Cursor also sends money directly to Anthropic and OpenAI, meaning that these costs are only one piece of its overall compute costs. In any case, it’s very clear that Cursor buys some degree of its Anthropic model spend through Amazon.

I’ll also add that Tom Dotan of Newcomer reported a few months ago that an investor told him that “Cursor is spending 100% of its revenue on Anthropic.”

Unlike Anthropic, we lack thorough reporting of the month-by-month breakdown of Cursor’s revenues. I will, however, mention them in the month I have them.

For the sake of readability — and because we really don’t have much information on Cursor’s revenues beyond a few months — I’m going to stick to a bullet point list.

Another Note About Cursor’s AWS Spend - It Likely Funnels Some Model Spend Through AWS, But The Majority Goes Directly To Providers Like Anthropic

As discussed above, Cursor announced (along with their price change and $200-a-month plan) several multi-year partnerships with xAI, Anthropic, OpenAI and Google, suggesting that it has direct agreements with Anthropic itself versus one with AWS to guarantee “this volume of compute at a predictable price.”

Based on its spend with AWS, I do not see a strong “minimum” spend that would suggest that they have a similar deal with Amazon — likely because Amazon handles more than its infrastructure than just compute, but incentivizes it to spend on Anthropic’s models through AWS by offering discounts, something I’ve confirmed with a source.

In any case, here’s what Cursor spent on AWS.

  • January 2025 - $1.459 million
  • February 2025 - $2.47 million
  • March 2025 - $4.39 million
  • April 2025 - $4.74 million
  • May 2025 - $6.19 million
  • June 2025 - $12.67 million
    • So, Bloomberg reported that Cursor hit $500 million on June 5 2025 , along with raising a $900 million funding round. Great news! Turns out it’d need to start handing a lot of that to Anthropic.
    • This was, as I’ve discussed above, the month when Anthropic forced it to adopt “Service Tiers”. I go into detail about the situation here , but the long and short of it is that Anthropic increased the amount of tokens you burned by writing stuff to the cache (think of it like RAM in a computer), and AI coding startups are very cache heavy, meaning that Cursor immediately took on what I believed would be massive new costs. As I discuss in what I just linked, this led Cursor to aggressively change its product, thereby vastly increasing its customers’ costs if they wanted to use the same service.
    • That same month, Cursor’s AWS costs — which I believe are the minority of its cloud compute costs — exploded by 104% (or by $6.48 million), and never returned to their previous levels.
    • It’s conceivable that this surge is due to the compute-heavy nature of the latest Claude 4 models released that month — or, perhaps, Cursor sending more of its users to other models that it runs on Bedrock.
  • July 2025 - $15.5 million
    • As you can see, Cursor’s costs continue to balloon in July, and I am guessing it’s because of the Service Tiers situation — which, I believe, indirectly resulted in Cursor pushing more users to models that it runs on Amazon’s infrastructure.
  • August 2025 - $9.67 million
    • So, I can only guess as to why there was a drop here. User churn? It could be the launch of GPT-5 on Cursor , which gave users a week of free access to OpenAI’s new models.
    • What’s also interesting is that this was the month when Cursor announced that its previously free “auto” model (where Cursor would select the best available premium model or its own model) would now bill at “ competitive token rates ,” by which I mean it went from charging nothing to $1.25 per million input and $6 per million output tokens. This change would take effect on September 15 2025.
    • On August 10 2025 , Tom Dotan of Newcomer reported that Cursor was “well above” $500 million in annualized revenue based on commentary from two sources.
  • September 2025 - $12.91 million
    • Per the above, this is the month when Cursor started charging for its “auto” model.

What Anthropic May Have Done To Cursor Is Disgusting - And Is A Preview Of What’s To Come For AI Startups

When I wrote that Anthropic and OpenAI had begun the Subprime AI Crisis back in July, I assumed that the increase in costs was burdensome, but having the information from its AWS bills, it seems that Anthropic’s actions directly caused Cursor’s costs to explode by over 100%.

While I can’t definitively say “this is exactly what did it,” the timelines match up exactly, the costs have never come down, Amazon offers provisioned throughput , and, more than likely, Cursor needs to keep a standard of uptime similar to that of Anthropic’s own direct API access.

If this is what happened, it’s deeply shameful.

Cursor, Anthropic’s largest customer , in the very same month it hit $500 million in annualized revenue, immediately had its AWS and Anthropic-related costs explode to the point that it had to dramatically reduce the value of its product just as it hit the apex of its revenue growth.

Anthropic Timed Its Rent-Seeking Service Tier Price Increases on Cursor With The Launch Of A Competitive Product - Which Is What’s Coming To Any AI Startup That Builds On Top Of Its Products

It’s very difficult to see Service Tiers as anything other than an aggressive rent-seeking maneuver.

Yet another undiscussed part of the story is that the launch of Claude 4 Opus and Sonnet — and the subsequent launch of Service Tiers — coincided with the launch of Claude Code , a product that directly competes with Cursor, without the burden of having to pay itself for the cost of models or, indeed, having to deal with its own “Service Tiers.”

Anthropic may have increased the prices on its largest client at the time it was launching a competitor, and I believe that this is what awaits any product built on top of OpenAI or Anthropic’s models.

The Subprime AI Crisis Is Real, And It Can Hurt You

I realize this has been a long, number-stuffed article, but the long-and-short of it is simple: Anthropic is burning all of its revenue on compute, and Anthropic will willingly increase the prices on its customers if it’ll help it burn less money, even though that doesn’t seem to be working.

What I believe happened to Cursor will likely happen to every AI-native company, because in a very real sense, Anthropic’s products are a wrapper for its own models, except it only has to pay the (unprofitable) costs of running them on Amazon Web Services and Google Cloud.

As a result, both OpenAI and Anthropic can (and may very well!) devour the market of any company that builds on top of their models.

OpenAI may have given Cursor free access to its GPT-5 models in August, but a month later on September 15 2025 it debuted massive upgrades to its competitive “Codex” platform.

Any product built on top of an AI model that shows any kind of success can be cloned immediately by OpenAI and Anthropic, and I believe that we’re going to see multiple price increases on AI-native companies in the next few months. After all, OpenAI already has its own priority processing product, which it launched shortly after Anthropic’s in June .

The ultimate problem is that there really are no winners in this situation. If Anthropic kills Cursor through aggressive rent-seeking, that directly eats into its own revenues. If Anthropic lets Cursor succeed, that’s revenue , but it’s also clearly unprofitable revenue . Everybody loses, but nobody loses more than Cursor’s (and other AI companies’) customers.

Anthropic Is In Real Trouble - And The Current Cost Of Doing Business Is Unsustainable, Meaning Prices Must Increase

I’ve come away from this piece with a feeling of dread.

Anthropic’s costs are out of control, and as things get more desperate, it appears to be lashing out at its customers, both companies like Cursor and Claude Code customers facing weekly rate limits on their more-powerful models who are chided for using a product they pay for. Again, I cannot say for certain, but the spike in costs is clear, and it feels like more than a coincidence to me.

There is no period of time that I can see in the just under two years of data I’ve been party to that suggests that Anthropic has any means of — or any success doing — cost-cutting, and the only thing this company seems capable of doing is increasing the amount of money it burns on a monthly basis.

Based on what I have been party to, the more successful Anthropic becomes, the more its services cost. The cost of inference is clearly increasing for customers , but based on its escalating monthly costs, the cost of inference appears to be high for Anthropic too, though it’s impossible to tell how much of its compute is based on training versus running inference.

In any case, these costs seem to increase with the amount of money Anthropic makes, meaning that the current pricing of both subscriptions and API access seems unprofitable, and must increase dramatically — from my calculations, a 100% price increase might work, but good luck retaining every single customer and their customers too! — for this company to ever become sustainable.

I don’t think that people would pay those prices. If anything, I think what we’re seeing in these numbers is a company bleeding out from costs that escalate the more that its user base grows. This is just my opinion, of course.

I’m tired of watching these companies burn billions of dollars to destroy our environment and steal from everybody. I’m tired that so many people have tried to pretend there’s a justification for burning billions of dollars every year, clinging to empty tropes about how this is just like Uber or Amazon Web Services , when Anthropic has built something far more mediocre.

Mr. Amodei, I am sure you will read this piece, and I can make time to chat in person on my show Better Offline. Perhaps this Friday? I even have some studio time on the books.

Thomas Lange: New FAI images available, Rocky Linux 10 and AlmaLinux 10 support

PlanetDebian
blog.fai-project.org
2025-10-20 16:03:12
New FAI ISOs using FAI 6.4.3 are available. They are using Debian 13 aka trixie, kernel 6.12 and you can now install Rocky Linux 10 and AlmaLinux 10 using this images. There's also a variant for installing Linux Mint 22.2 and Ubuntu 24.04 which includes all packages on the ISO. [1]: https://fai-p...
Original Article

faiblog / posts / New FAI images available, Rocky Linux 10 and AlmaLinux 10 support

New FAI ISOs using FAI 6.4.3 are available. They are using Debian 13 aka trixie, kernel 6.12 and you can now install Rocky Linux 10 and AlmaLinux 10 using this images.

There's also a variant for installing Linux Mint 22.2 and Ubuntu 24.04 which includes all packages on the ISO.

Create Your Own AI Voice Agent Using EchoKit, ESP32, and Rust

Lobsters
www.instructables.com
2025-10-20 16:01:04
Step-by-step tutorial for EchoKit, a DIY AI voice agent (fully open source) I've been working on. 🔹 Hardware: An easy-to-assemble ESP32-S3 board (EchoKit). 🔹 Server: A high-performance server built entirely in Rust to manage the ASR -> LLM -> TTS pipeline. 🔹 AI Models: Fully customizable, usin...
Original Article

Introduction: Create Your Own AI Voice Agent Using EchoKit, ESP32, and Rust

Have you ever wanted to build your very own voice AI Agent — one that actually talks back to you?

In this tutorial, we'll show you how to build a fun and interactive AI assistant using EchoKit , a powerful yet easy-to-build voice AI agent powered by ESP32 .

As an open-source project, EchoKit not only lets you play with cutting-edge AI, but it also allows you to understand the underlying technology and modify it to suit your needs — perfect for classrooms, makerspaces, or personal AI projects.

In just a few minutes, you’ll have EchoKit talking back to you and understanding your commands — whether you’re a student, teacher, or maker passionate about exploring AI.

Supplies

You’ll need:

  1. EchoKit board (ESP32-based, open-source AI hardware, available at echokit.dev)
  2. USB-C cable
  3. Laptop / PC (Windows, macOS, or Linux)
  4. Wi-Fi connection
  5. API key and Endpoint URLs for the Whisper, LLM, and TTS model.

Step 1: Assemble the EchoKit Device

When you receive your EchoKit device, you’ll find four key components:

  1. The ESP32-S3 development board
  2. The extension board , which includes the audio and microphone module
  3. A mini speaker
  4. A 1.54" LCD screen

The assembly process is simple — just follow these steps:

  1. Insert the mini speaker into the audio module on the extension board.
  2. Attach the ESP32-S3 development board to the extension board.
  3. Plug the LCD screen into the designated slot at the top of the extension board.
  4. That’s it — your EchoKit is now ready for the next step!

Step 2: Flash the EchoKit Device

Now that your EchoKit is assembled, it’s time to flash the firmware.

  1. Connect EchoKit to your computer using the included USB-C cable.
  2. Use the ESP32 launchpad to easily flash the firmware. Open the launchpad, follow the instructions to “Connect” and then “Flash.”
  3. After flashing, you should see a QR code and hear a welcoming voice — your EchoKit is ready to go!
Alternatively, you can use the espflash command line to flash the hardware. Check out the details here .

Step 3: Set Up the Server

Now comes the exciting part: setting up the server to power your voice AI agent!

The EchoKit server is responsible for managing communication between your device and AI services like Whisper (ASR), LLM, and TTS. It’s fully customizable — giving you control over your AI’s responses, voice, and more.

  1. You can use the EchoKit team's provided server for quick setup directly and go to the step 4.
  2. USA: ws://indie.echokit.dev/ws/
  3. Asia: ws://hk.echokit.dev/ws/
  4. But if you prefer full customization, I recommend setting up your own server. You’ll be able to tweak every aspect of your AI’s behavior, from response generation to voice synthesis. Make sure you have Rust installed .

To start, you’ll need to download the server code. Open a terminal and run the following command:

git clone https://github.com/second-state/echokit_server.git

Once you’ve cloned the repository, navigate to the config.toml file. In this file, you will configure the following:

  1. ASR (Automatic Speech Recognition) : This tells the server where to send audio transcriptions.
  2. LLM (Large Language Model) : The model used to generate AI responses.
  3. TTS (Text-to-Speech) : The model that converts text responses into speech.

For your convenience, it's recommended to use Groq here. I don't think you will need to pay any cents for your usage for this project.

Below is an example for using Groq and you just need to add your own API key.

addr = "0.0.0.0:8080"

hello_wav = "hello.wav"

[asr]

url = "https://api.groq.com/openai/v1/audio/transcriptions"

lang = "en"

api_key = "gsk_xxx"

model = "whisper-large-v3-turbo"

[llm]

llm_chat_url = "https://api.groq.com/openai/v1/chat/completions"

api_key = "gsk_xxx"

model = "llama-3.3-70b-versatile"

history = 1

[tts]

platform = "Groq"

api_key = "gsk_xxx"

model = "playai-tts"

voice = "Aaliyah-PlayAI"

[[llm.sys_prompts]]

role = "system"

content = """

# input your prompt here.

"""

Once you configure these parameters, you can run the following command line to start the server.

# Build the project

cargo build --release

# Enable debug logging

export RUST_LOG=debug

# Run the EchoKit server

target/release/echokit_server

If everything goes well, you’ll see output like this in your terminal:

[2025-10-13T09:37:13Z INFO echokit_server] Hello WAV: hello.wav

The server runs successfully now. Next let's connect the server and the device.

Step 4: Connect the Server and Device

Before you begin, make sure you have the EchoKit server running on your local machine or on a remote server. If you're running the EchoKit server locally, follow the setup instructions to start the server before proceeding.

  1. Open https://echokit.dev/setup/ in your browser. Make sure you're using a browser that supports Bluetooth. Chrome is a good one.
  2. Click the “ Connect to EchoKit ” button to start pairing your EchoKit devicer.
  3. Then, you’ll need to enter the following information:
  4. Wi-Fi Name : Enter the name of your 2.4G Wi-Fi network.
  5. Wi-Fi Password : Enter the password for the Wi-Fi network.
  6. Server URL : In the format of ws://192.168.1.56:8080/ws — replace 192.168.1.56 with the IP address of your server and 8080 with the port number where your EchoKit server is running.
  7. USA: ws://indie.echokit.dev/ws/
  8. Asia: ws://hk.echokit.dev/ws/
  9. Apply the Settings
  10. Press the K0 button to apply these settings and establish the connection.

Once you've done this, you’ll see the progress on the EchoKit screen, which will show steps like "Restarting Device", "Connecting to Wi-Fi", and "Connecting to Server." When the process is complete, you’ll hear a welcome voice, and the screen will display "Hello Set."

Verify the Connection

If you're running the EchoKit server locally, you should also see the following message in the server log, confirming that the connection was successful:

echokit_server::services::ws] 98a316f0bcc5:b24de72669964a08b2bd4b2d47c14d76 connected.

Step 5: Talk With the EchoKit

Now, let’s start interacting with your EchoKit voice AI agent!

  1. Press the K0 Button to enter chat mode .
  2. When you see “Listening” on the screen, you’re ready to talk to EchoKit.

Since we’re using the ASR-LLM-TTS system, here’s how it works:

  1. ASR (Automatic Speech Recognition) will first transcribe what you say into text.
  2. The LLM (Large Language Model) will generate a response based on your input and the custom prompt you’ve set up.
  3. Finally, the TTS (Text-to-Speech) model will read the generated response back to you.

Because EchoKit uses these three powerful models in combination, it might take a few moments for it to respond, but Groq's optimized performance ensures a quick response time (usually only a few seconds).

Step 6: What's Next

Now that you’ve built your own voice AI agent, the possibilities are endless. If you’re looking to explore even more features, here are some options to take your EchoKit experience to the next level:

  1. Explore the End-to-End Model : If you're interested in simplifying the process, you can use the end-to-end model like Gemini with EchoKit. This model streamlines the entire ASR-LLM-TTS pipeline into a single step, making it even easier to interact with your AI agent. However, using the modular approach gives you less flexibility and control over each step — so feel free to experiment with both!
  2. Add Custom Actions with MCP : EchoKit also supports MCP (Multi-Channel Processing) , which allows you to add custom actions to your voice AI agent. With MCP, you can control external devices or trigger specific events based on voice commands, opening up endless possibilities for automation and smart systems. Whether it’s controlling a smart home device or creating interactive experiences, MCP offers a powerful way to extend EchoKit’s capabilities.

EchoKit is an open-source platform, so there’s always room to customize and explore. You can modify the behavior of your AI agent, integrate new AI models, or even contribute to the community. Check out the EchoKit website and resources for more advanced tutorials and examples.

Commodore 64 Ultimate

Hacker News
www.commodore.net
2025-10-20 15:55:52
Comments...
Original Article

ex. tax

Preorder FAQ

Is this a Kickstarter?

Not quite - this is an official Commodore® preorder with a money-back guarantee. Similar to crowdfunding, every penny goes into manufacturing first, and then to the mission to reboot Commodore itself. By skipping Kickstarter, we avoid big platform fees and pass the savings on to you - just like our founder Jack Tramiel would’ve wanted.

Am I charged now or at shipping, and what currency?

You’re charged now, in USD (your local currency shows until checkout). Like Kickstarter and others, your preorder funds production at the factory. The rest helps reboot the Commodore brand.

Can I get a refund?

Absolutely. You’re in control. Cancel anytime before shipping with our no quibble money-back guarantee for a full refund - no questions asked. Have an account? Cancel there with one click. No account? Use our support form and we’ll sort it fast.

How does shipping work?

We deliver from Commodore & Commodore partner hubs in the USA, UK, and EU (including our original 1980s Corby building). If you’re farther out, we ship via trusted couriers like DHL, FedEx, UPS, or Chickenlips Express.

Will I pay sales tax?

USA: Sales tax is added at checkout where required.

UK & Europe: VAT/IVA/MwSt is added at checkout.

What about tariffs?

If your country charges import tariffs (e.g. the USA), you’ll see a “Tariff Tax” at checkout. This covers everything upfront, so there’s nothing more to pay later. We don’t control these fees, but we avoid surprises. If tariffs drop after you pay, we’ll refund you at shipping. If they rise a lot before shipping, we may adjust- but only if truly needed to protect Commodore’s future.

What about customs duties?

We aim to prepay duties (DDP) where possible. In some places (e.g. Norway, Australia, New Zealand, Singapore), you might get a tax refund before shipping. Some countries may still charge customs fees on arrival - these aren’t included and as is commonplace, these are your responsibility.

Isn't this just an emulator or rebadged something-or-other?

The Commodore 64 Ultimate from the only original Commodore® brand (est. 1958) is brand new hardware-based Commodore 64 technology. It features SID chip-reactive LEDs (case, keyboard, power light*), the world's first transparent keyboard PCB*, original and modern creators’ autographs etched in copper, and an updated FPGA that replicates the original C64 motherboard (not emulation). All customisable via a new, easy main menu. It’s a fully authentic new build from Commodore - who else?

*except beige version

Will Commodore 64 Ultimate units still ship if the acquisition doesn’t complete?

Good news - as of 31st July 2025 we paid the sellers in full, ahead of schedule, and signed the final contracts to complete the acquisition. Even before that, we had a contract guaranteeing these machines would be made no matter what.

So what are the risks?

All launches have some risk, whether you're Apple or Commodore. But this preorder is unusually safe. Most parts are already in production. The motherboard is a proven design. Cases shipped in January 2025. Keycaps shipped in 2024. Just a few parts remain, like the keyboard base, made by a trusted partner since the Apple II era. All components are pro-designed to fit together. Add the box, manual, and power supply, and we’re set.

Is there a warranty?

Yes. Our products include a 1-year limited warranty covering defects in materials and workmanship. For customers in the EU or UK, your purchase also complies with local consumer protection laws, including the EU Consumer Rights Directive and UK Consumer Rights Act, which provide additional rights. You can also add an extended warranty here .

Can I email/DM you to suggest a feature change?

We’re not planning hardware changes right now. Chances are we already debated it over chickenlips snacks. We’re reviewing ideas from our launch form and will reach out if your Commodore 256 Ultra-Turbo-CD-Lightgun-Keyring makes the cut.

How I Use Claude Code on My Phone with Termux and Tailscale

Lobsters
www.skeptrune.com
2025-10-20 15:43:01
Comments...
Original Article

October 19, 2025 , Nicholas Khami

There’s a mini gold rush to put Claude Code on your phone. Some startups are building custom apps, others are creating managed cloud environments. They’re solving real problems, but you’re trading raw Unix power for convenience. If you have a desktop and 20 minutes, you can get full kernel access with SSH, termux , and tailscale .

Yesterday I posted about shipping a feature to this blog from the passenger seat while driving to Apple Hill, CA from San Francisco. I SSH’d into my office desktop from my phone, prompted Claude to make the changes, tested them on my phone’s browser, and pushed to production in 10 minutes. That post got 130k impressions and dozens of people asked for the setup.

This article walks through doing SSH-based mobile development with Claude Code. If you have a desktop that stays on (or a cheap VPS), you can get full terminal access from your phone with session persistence, port forwarding, and the ability to test your code on your actual mobile browser. The initial setup takes about 20 minutes and just works once configured.

The Architecture

The setup uses five standard Unix tools that work together without custom integration. A desktop runs Claude Code, tailscale creates a private network between your devices, termux gives you a real terminal on Android, SSH handles the connection, and tmux keeps your sessions alive when you disconnect.

Step 1: Setup Your Desktop

You need a computer that stays on. This could be a desktop at home, a desktop at your office, a cloud VM, or a home server. It doesn’t need to be powerful. Claude Code just makes API calls, the actual compute happens at Anthropic.

I keep a desktop at my office that stays on 24/7. It’s running Ubuntu with Claude Code installed. The computer does nothing else. It just sits there waiting for me to SSH in and start coding.

First, install Claude Code globally using npm. This gives you the claude command that you’ll use to start coding sessions.

npm install -g @anthropics/claude-code

Next, install tmux for session persistence. When you disconnect from SSH (phone locks, network drops, whatever), tmux keeps your Claude Code session running in the background. When you reconnect, you pick up exactly where you left off.

sudo apt install tmux  # Ubuntu/Debian
brew install tmux      # macOS

With Claude Code and tmux installed, your desktop is ready to host your development sessions.

Step 2: Install Tailscale Everywhere

Tailscale creates a private network between all your devices. Your phone gets a stable IP address that can reach your desktop, even when you’re on different networks. It just works.

On your desktop, run the Tailscale installer. The script will detect your OS and install the right package. Then bring up the Tailscale connection, which will prompt you to authenticate in your browser.

curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

Install Tailscale on your phone from the Play Store. Sign in with the same account. Your devices are now on the same network.

You’ll need your desktop’s Tailscale IP address to connect from your phone. Grab it with this command.

tailscale ip -4

You’ll get something like 100.64.0.5 . That’s your desktop’s address on the Tailscale network. It’s stable, it’s private, and it works from anywhere.

Step 3: Install Termux on Your Phone

Termux is a terminal emulator for Android that gives you a real Linux environment. Not a toy terminal. A real one with bash, ssh, and full package management.

Install Termux from F-Droid, not the Play Store. The Play Store version is outdated and broken. Get it from https://f-droid.org/en/packages/com.termux/ .

Once installed, you’ll need to update the package repositories and install the SSH client. Termux uses pkg as its package manager, which is basically a wrapper around apt.

pkg update
pkg install openssh

With OpenSSH installed, Termux can now connect to your desktop over SSH.

Step 4: SSH Into Your Desktop

Now for the moment of truth. Open Termux and SSH to your desktop using the Tailscale IP you grabbed earlier. Replace 100.64.0.5 with your actual IP and your-username with your desktop username.

ssh your-username@100.64.0.5

The first time you connect, SSH will ask you to verify the host fingerprint. Type yes . Then enter your password.

You’re in. You’re now running a shell on your desktop, from your phone, over a secure encrypted connection that works anywhere you have internet.

Step 5: Use tmux for Session Persistence

tmux is what makes this whole setup practical. When you disconnect from SSH, your tmux session keeps running on the desktop. When you reconnect, you attach to the same session and everything is exactly where you left it.

You’re now connected to your desktop via SSH from your phone. Start a new tmux session with a name you’ll remember. I usually name mine after the project I’m working on.

tmux new -s code

This creates a session named “code”. You can name it anything. Inside the tmux session, launch Claude Code and start working.

claude

Now you’re coding. On your phone. Using Claude Code. Running on your desktop.

When you need to disconnect, don’t exit Claude Code. Don’t exit tmux. Just close Termux or let your phone lock. The tmux session stays running on your desktop.

Later, when you want to code again, SSH back in and reattach to your session. Everything will be exactly where you left it.

ssh your-username@100.64.0.5
tmux attach -t code

Your conversation with Claude is still there. Your file context is still loaded. You can continue your previous task immediately.

In the Apple Hill example from the intro, this is exactly what I did. I SSH’d in, ran tmux attach -t personalsite to reconnect to my development session, and told Claude to add a section about the Public Suffix List and make headings into clickable anchor links. The session had been running for days. I just picked up exactly where I’d left off.

Why This Works Better Than Custom Apps

Every startup trying to solve “Claude Code on mobile” is building abstractions on top of these primitives. They’re not giving you anything you can’t already do with SSH and Termux. They’re just wrapping it in a prettier UI and charging for hosting.

When you do it yourself, you get several advantages.

Port forwarding just works. With Tailscale, your desktop’s ports are directly accessible from your phone. No configuration, no exposing services to the public internet, no proxies adding latency. Your phone and desktop are on the same private network, so anything listening on your desktop is one IP address away.

Full CLI access to configure your environment. Want to run your dev server with --host so you can test on your phone’s browser? Just add the flag. Need to adjust firewall rules, modify server configs, or install system packages? You have root access. Native mobile coding apps can’t offer this level of control because it’s too niche for their target users, but for power users it’s essential.

Session persistence that actually works. tmux was built for this. Your session survives network disconnections, phone reboots, and SSH reconnects. You never lose your place.

Your own hardware. Your desktop has your SSH keys, your git credentials, your environment exactly how you configured it. You’re not coding in a disposable cloud container.

The Mobile Experience

I’m not going to pretend coding on a phone is as good as coding on a desktop. It’s not. The screen is small. The keyboard is mediocre. You can’t see multiple files at once.

But Claude Code is different from traditional coding. You’re not typing out functions character by character. You’re describing what you want, reviewing Claude’s changes, and approving or rejecting them. That workflow actually works on mobile.

The Apple Hill example wasn’t cherry-picked. I’ve shipped real features from my phone. I’ve fixed production bugs while getting coffee. I’ve reviewed pull requests from the back of an Uber. It’s not my primary development environment, but it’s shockingly capable when I need it.

The key is that Claude Code is conversational. You’re having a back-and-forth with an AI that writes code for you. That interaction model translates to mobile better than traditional text editing. You’re reading more than you’re typing, and phones are great for reading.

Practical Tips

Once you have the basic setup running, there are a few tweaks that make the mobile coding experience dramatically better. These aren’t strictly necessary, but they’ll save you time and frustration.

Test Your Changes on Your Phone’s Browser

This is the killer feature. You’re not just editing code remotely, you can test it on your phone while the dev server runs on your desktop.

When I was working on the blog changes from Apple Hill, I wanted to see how the clickable anchor links looked on mobile. The trick is starting your dev server with the --host flag, which makes it accessible on your Tailscale network instead of just localhost.

yarn dev --host

For Vite (which Astro uses), this binds the dev server to 0.0.0.0 instead of 127.0.0.1 . For other frameworks: npm start -- --host for React, next dev -H 0.0.0.0 for Next.js, python manage.py runserver 0.0.0.0:8000 for Django.

Grab your desktop’s Tailscale IP again if you forgot it.

tailscale ip -4

Then on your phone’s browser, navigate to http://100.64.0.5:4321 (replace with your Tailscale IP and port).

You’re now viewing your local dev server, running on your desktop 2.5 hours away, in your phone’s browser. I saw the anchor links were styled wrong, told Claude to fix them, refreshed, confirmed they looked good, and pushed. The whole workflow took maybe 10 minutes.

You’re developing with the actual target device in your hand. You can test responsive layouts, check mobile interactions, and iterate immediately instead of deploying to staging or waiting until you’re back at your desk.

Use SSH Keys

Don’t type your password every time you SSH. Generate an SSH key on your phone and add it to your desktop’s authorized keys.

Open Termux and generate an ed25519 key (the modern standard). Then use ssh-copy-id to automatically add it to your desktop’s authorized keys file.

ssh-keygen -t ed25519
ssh-copy-id your-username@100.64.0.5

Now you can SSH without a password.

Create an SSH Config

Make connecting easier by adding your desktop to your SSH config. Instead of typing ssh your-username@100.64.0.5 every time, you can create an alias. Make a file at ~/.ssh/config in Termux with this content.

Host desktop
    HostName 100.64.0.5
    User your-username

Now you can just type ssh desktop instead of remembering the IP and username.

Use a Better Keyboard

Termux works with external keyboards. I keep a small Bluetooth keyboard in my bag. When I’m actually trying to get work done on my phone, I pull out the keyboard. It makes a massive difference.

The phone screen is fine for reading. The keyboard makes typing bearable.

Set Up tmux Keybindings

tmux’s default keybindings are terrible on mobile. Remap them to something sensible. On your desktop, create or edit ~/.tmux.conf and add these bindings. They make tmux way easier to use on a phone keyboard.

# Use Ctrl-A instead of Ctrl-B (easier to type)
unbind C-b
set -g prefix C-a
bind C-a send-prefix

# Split panes with | and -
bind | split-window -h
bind - split-window -v

Now you can manage tmux sessions without finger gymnastics.

Run Multiple Sessions

You can have multiple tmux sessions for different projects. I usually have one for each repo I’m actively working on. Start them with descriptive names so you remember what’s what.

tmux new -s backend
tmux new -s frontend
tmux new -s experiments

When you SSH in and want to see what sessions are running, list them.

tmux ls

Then attach to whichever one you want to work on.

tmux attach -t frontend

This keeps your different projects isolated. You can switch contexts just by attaching to a different session.

Security Considerations

You’re SSH’ing into your desktop over the internet. That’s a potential security risk if you do it wrong. Do it right with a few precautions.

Use Tailscale. Never expose SSH to the public internet. Use Tailscale to create a private network between your devices. Your SSH traffic stays encrypted and never touches the public internet directly.

Use SSH keys. Disable password authentication entirely. Keys are longer, stronger, and can’t be brute-forced. Edit /etc/ssh/sshd_config on your desktop and set these values, then restart sshd.

PasswordAuthentication no
PubkeyAuthentication yes

Keep your phone secure. Your phone now has SSH access to your development machine. If someone steals your phone, they can access your desktop. Use a strong PIN or biometric lock. Enable disk encryption. Consider using a password manager for your SSH key passphrase.

Monitor SSH access. Check who’s connected to your machine with who or w . Check SSH logs with sudo tail -f /var/log/auth.log . If you see connections you don’t recognize, revoke SSH keys and investigate.

The threat model here is pretty mild. Your SSH traffic is encrypted. Your Tailscale network is private. The main risk is losing your phone, which is why phone security matters.

When This Doesn’t Work

This setup assumes you have a desktop that stays on. If you don’t, you need a cloud VM or a home server. That’s still not a reason to use a third-party service. Just rent a $5/month VPS from DigitalOcean or Hetzner , install Tailscale and Claude Code, and SSH into it the same way.

This also assumes you’re on Android. If you’re on iOS, Termux isn’t available. You’ll need to use a different SSH client like Blink or Prompt . The rest of the setup is the same.

If you’re on unstable internet, SSH can be frustrating. Mosh (mobile shell) is designed for high-latency or unreliable connections. Install it on both your phone and desktop, then use mosh desktop instead of ssh desktop . It handles disconnections gracefully and keeps your terminal responsive even on bad networks.

The Bottom Line

Mobile development with Claude Code doesn’t require new infrastructure or custom applications. The components you need are SSH, Tailscale, Termux, and a desktop that stays on. These are standard Unix tools that have been solving remote access problems for decades.

SSH has been the standard for secure remote access since 1995. Tmux has provided session management since 2007. Tailscale is newer, but it’s built on WireGuard, which has undergone extensive security audits. These tools are mature, well-documented, and widely deployed in production environments.

The underlying problem, accessing a remote development environment from a mobile device, was solved long before mobile coding became a focus. This approach applies those established solutions to Claude Code without requiring custom middleware or managed services.

If you have a desktop or VPS and 20 minutes for setup, you can have this working today. Install termux, configure tailscale, and connect via SSH. The workflow is straightforward and the tools are reliable.

Glory be to the AI overlords, who grant us the grace to code at the bar without shame.

BERT Is Just a Single Text Diffusion Step

Hacker News
nathan.rs
2025-10-20 15:31:16
Comments...
Original Article

Text Diffusion

A while back, Google DeepMind unveiled Gemini Diffusion , an experimental language model that generates text using diffusion. Unlike traditional GPT-style models that generate one word at a time, Gemini Diffusion creates whole blocks of text by refining random noise step-by-step.

I read the paper Large Language Diffusion Models and was surprised to find that discrete language diffusion is just a generalization of masked language modeling (MLM), something we’ve been doing since 2018 . The first thought I had was, “can we finetune a BERT-like model to do text generation?” I decided to try a quick proof of concept out of curiosity.

NOTE: After I wrote the article I stumbled upon the paper DiffusionBERT which does essentially the same thing but with more rigorous testing! Check it out if this post interested you.

A Short History of Transformers #


The original Transformer architecture, introduced in 2017 , was an encoder-decoder model. In 2018, researchers realized that the encoder and decoder components of the model could be separated (with the advent of BERT and GPT ), and two distinct families of models were created:

  1. Encoder-only models (BERT-style, bidirectional)

Encoder models used masked language modeling (MLM) as a training objective: randomly mask out a subset of tokens of each input and train the encoder to reconstruct the missing tokens (fill in the blanks). The model sees the entire (partially masked) context at once and learns bidirectional representations. This architecture excelled at tasks requiring a full‐sentence (or paragraph) representation (e.g., classification and retrieval).

  1. Decoder-only models (GPT-style, autoregressive)

Decoder models used next‐token prediction as a training objective: at each position $t$, predict the token at position $t + 1$ given all tokens up to $t$ as context. Only the left context is used to predict future values (unidirectional). This architecture excelled at generative tasks where you produce text one token at a time, such as open‐ended generation, summarization, and translation.

Originally, BERT saw immediate use in tasks such as classification, whereas GPT-style models didn’t become popular until later (due to initial limited capabilities). Eventually, the generation capabilities of autoregressive (decoder) transformers vastly improved. The general training objective of “next token prediction” means a much larger space of use cases when compared to encoder models.

Discrete Language Diffusion Models #


Diffusion models were first popularized in image generation. In image generation, diffusion models gradually add Gaussian noise to an image (forward process) and then train a neural network to iteratively denoise it (reverse process). A high‐level summary of continuous diffusion with images is:

  1. Forward process : Start from a clean image x₀ , then add small amounts of (usually Gaussian) noise at each timestep until you end up with near‐pure noise.
  2. Reverse process : Train a model (often a U‐Net) to predict the noise at each timestep, gradually recovering the original image in discrete denoising steps.

Applying this idea to language means we need a way to add noise to text and then remove it in stages. The simplest way to do this is a masking‐based noise process :

  1. Forward (masking) process :

    • At timestep t = 0 , you have a fully uncorrupted text sequence.
    • At each subsequent timestep t > 0 , randomly replace a fraction of tokens with a special <MASK> token according to a pre‐defined schedule (e.g., gradually increasing the masked proportion from 0% to 100%).
    • By the final timestep T , the entire sequence may be masked (all tokens are <MASK> ).
  2. Reverse (denoising) process :

    • Train a model (often a standard Transformer encoder) to predict the original token IDs given a partially masked sequence at timestep t .
    • This is akin to performing masked language modeling at varying mask rates: at early timesteps, only a few tokens are masked (easy to predict); at later timesteps, many tokens are masked (harder).
    • By chaining together predictions from high‐mask‐rate back down to zero, you can recover (or generate) a full sequence.

In this discrete text diffusion framework, the model learns a likelihood bound on the data distribution by optimizing a sum of denoising losses over all timesteps, rather than a single MLM objective at a fixed mask probability.

As we can see, BERT’s masked language modeling objective is the same training objective as text diffusion, but just for a subset of masking rates . By introducing variable masking rates (from 0 to 1) and a scheduled sequence of denoising steps (inspired by diffusion theory), we can transform BERT’s masked language modeling objective into a full generative procedure.

RoBERTa Diffusion #


In 2019, RoBERTa was released. It was essentially just an enhancement of the original BERT model, with better hyperparameters, data training size, and a more simple training objective (MLM only, removed next sentence prediction).

Here we use the HuggingFace transformers and dataset libraries to pull in the original RoBERTa weights, tokenizer, and the Trainer class to easily finetune the model on the WikiText dataset. The main code ( full code here ) looks like this below:

# Load and tokenize dataset and instantiate the model
dataset = load_dataset("wikitext", "wikitext-2-raw-v1")
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")

# Create the training args and Trainer instance
training_args = TrainingArguments(
    output_dir="finetuned-roberta-diffusion",
    overwrite_output_dir=True,
    num_train_epochs=NUM_EPOCHS,
    per_device_train_batch_size=BATCH_SIZE,
    save_strategy="epoch",
    save_total_limit=1,
    logging_steps=200,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=diffusion_collator, # custom implementation
    tokenizer=tokenizer,
)

# Train & save
trainer.train()
trainer.save_model("finetuned-roberta-diffusion")

Currently we have 10 diffusion steps, so we randomly sample a percentage $p$ out of mask_probs (1.0, 0.9, 0.9, …, 0.1) and mask that percent of the tokens each batch. The custom diffusion_collator function ( see code here ) samples one mask-probability p from mask_probs per batch and sets each token to <MASK> with p probability.

To be able to condition the generation on a “prompt”, we currently never mask the first 16 tokens. That means that during training, each step will always have the first 16 tokens as context for generation.

Simplified code for the diffusion_collator looks like:

  def diffusion_collator(examples):
      batch = tokenizer.pad(examples, return_tensors="pt")

      # Randomly select masking probability for this batch
      mask_prob = random.choice([1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1])

      # Never mask the first PREFIX_LEN tokens (preserved context)
      maskable_positions = batch.input_ids[:, PREFIX_LEN:]

      # Create random mask for the chosen probability
      mask = torch.rand(maskable_positions.shape) < mask_prob

      # Apply masking
      batch.input_ids[:, PREFIX_LEN:][mask] = tokenizer.mask_token_id
      batch.labels = batch.input_ids.clone()

      return batch

For inference, we start with an input which is a tensor of size 256 (since we are generating blocks of 256 tokens). The first 16 positions are the token ids that correspond to the prompt, and the last 240 are just <MASK> tokens. We iterate through the denoising schedule and each step, we generate a prediction and then remask the sequence again. The process looks like this:

Step 0: [PREFIX] <mask> <mask> <mask> <mask> <mask> ...     (100% masked)
Step 1: [PREFIX] will <mask> over <mask> control ...        (90% masked)
Step 2: [PREFIX] will begin <mask> greater control ...      (80% masked)
...
Step 10: [PREFIX] will begin to assert greater control ...  (0% masked - DONE)

Simplified code for generation looks like:

# Generate text through iterative denoising
for step, mask_prob in enumerate(mask_probs):
    # Forward pass: predict masked tokens
    with torch.no_grad():
        outputs = model(input_ids=input_ids, attention_mask=attention_mask)
        predictions = outputs.logits  # shape: (1, MAX_LEN, vocab_size)

    # For each masked position, sample from top-k/top-p filtered distribution
    for pos in range(PREFIX_LEN, MAX_LEN):
        if input_ids[0, pos] == tokenizer.mask_token_id:
            logits = predictions[0, pos, :]
            # Apply top-k and top-p filtering
            filtered_logits = top_k_top_p_filtering(logits, top_k=TOP_K, top_p=TOP_P)
            probs = F.softmax(filtered_logits, dim=-1)
            # Sample token
            sampled_token = torch.multinomial(probs, 1)
            input_ids[0, pos] = sampled_token

    # Re-mask a portion of non-prefix tokens for next iteration
    if mask_prob > 0:
        mask_indices = torch.rand(MAX_LEN - PREFIX_LEN) < mask_prob
        input_ids[0, PREFIX_LEN:][mask_indices] = tokenizer.mask_token_id

Here is an example output generation of the fine-tuned model after training on an H200 for 30 minutes (the first line is the initial prompt):

Following their victory in the French and Indian War, Britain began to assert
greater...

...dominion over Europe beginning about the early 19th. There conflict took
place on the island, between British and Irish Ireland. British officials 
administered British Ireland, a Celtic empire under the control of the Irish 
nationalist authorities, defined as a dominion of Britain. As the newly Fortic 
states acquired independent and powerful status, many former English colonies
played their part in this new, British @-@ controlled colonial system. Following
this period the Non @-@ Parliamentaryist Party won its influence in Britain in 
1890, led by the support of settlers from the Irish colonies. Looking inwards, 
Sinclair, Lewis questioned, and debated the need to describe " The New Britain "

The output looks surprisingly coherent! Most of the quirks present are actually just quirks from the formatting of WikiText (spaces around punctuation " , turning hyphens - into @-@ ).

Below is a comparison between our diffusion model and GPT-2:

RoBERTa Diffusion vs GPT

We see GPT-2’s output is more coherent and slightly faster (~9 seconds vs ~13) but I’m pleasantly surprised with how good my simple implementation was. It is a good proof of concept, and with new approaches like AR-Diffusion and Skip-Step Diffusion (and a more optimized implementation), the quality and speed can be drastically improved.

Conclusion #


We’ve seen that masked language models like RoBERTa, originally designed for fill-in-the-blank tasks, can be repurposed into fully generative engines by interpreting variable-rate masking as a discrete diffusion process. By gradually corrupting text with <MASK> tokens and training the model to iteratively denoise at increasing mask intensities, we effectively turn the standard MLM objective into a step-by-step generation procedure.

Even without architectural changes, a fine-tuned RoBERTa can generate coherent looking text after slightly modifying the training objective, validating the idea that BERT-style models are essentially just text diffusion models trained on one masking rate.

How Soon Will the Seas Rise?

Hacker News
www.quantamagazine.org
2025-10-20 15:27:17
Comments...
Original Article

The uniquely vulnerable West Antarctic Ice Sheet holds enough water to raise global sea levels by 5 meters. But when that will happen — and how fast — is anything but settled.

Introduction

In May 2014, NASA announced at a press conference that a portion of the West Antarctic Ice Sheet appeared to have reached a point of irreversible retreat. Glaciers flowing toward the sea at the periphery of the 2-kilometer-thick sheet of ice were losing ice faster than snowfall could replenish them, causing their edges to recede inland. With that, the question was no longer whether the West Antarctic Ice Sheet would disappear, but when. When those glaciers go, sea levels will rise by more than a meter, inundating land currently inhabited by 230 million people . And that would be just the first act before the collapse of the entire ice sheet, which could raise seas 5 meters and redraw the world’s coastlines.

At the time, scientists assumed that the loss of those glaciers would unfold over centuries. But in 2016, a bombshell study in Nature concluded that crumbling ice cliffs could trigger a runaway process of retreat, dramatically hastening the timeline. The Intergovernmental Panel on Climate Change (IPCC) took notice, establishing a sobering new worst-case scenario: By 2100, meltwater from Antarctica, Greenland and mountain glaciers combined with the thermal expansion of seawater could raise global sea levels by over 2 meters . And that would only be the beginning. If greenhouse gas emissions continue unabated, seas would rise a staggering 15 meters by 2300.

However, not all scientists are convinced by the runaway scenario. Thus, a tension has emerged over how long we have until West Antarctica’s huge glaciers vanish. If their retreat unfolds over centuries, humanity may have time to adapt. But if rapid destabilization begins in the coming decades through the controversial runaway process, the consequences could outpace our ability to respond. Scientists warn that major population centers — New York City, New Orleans, Miami and Houston — may not be ready.

“We’ve definitely not ruled this out,” said Karen Alley , a glaciologist at the University of Manitoba whose research supports the possibility of the runaway process. “But I’m not ready to say it’s going to happen soon. I’m also not going to say it can’t happen, either.”

For millennia, humanity has flourished along the shore, unaware that we were living in a geological fluke — an unusual spell of low seas. The oceans will return, but how soon? What does the science say about how ice sheets retreat, and therefore, about the future of our ports, our homes, and the billions who live near the coast?

Grounded by the Sea

In 1978, John Mercer, an eccentric glaciologist at Ohio State University who allegedly conducted fieldwork nude, was among the first to predict that global warming threatened the West Antarctic Ice Sheet. He based his theory on the ice sheet’s uniquely precarious relationship with the sea.

Bigger than Alaska and Texas combined, West Antarctica is split from the eastern half of the continent by the Transantarctic Mountains, whose peaks are buried to their chins in ice. Unlike in East Antarctica (and Greenland), where most ice rests on land high above the water, in West Antarctica the ice sheet has settled into a bowl-shaped depression deep below sea level, with seawater lapping at its edges. This makes West Antarctica’s ice sheet the most vulnerable to collapse.

A heaping dome of ice, the ice sheet flows outward under its own weight through tentaclelike glaciers. But the glaciers don’t stop at the shoreline; instead, colossal floating plates of ice hundreds of meters thick extend over the sea. These “ice shelves” float like giant rafts, tethered by drag forces and contact with underwater rises and ridges. They buttress the glaciers against an inexorable gravitational draw toward the sea.

Mark Belan/ Quanta Magazine

The critical frontline of the ice sheet’s vulnerability is the “grounding line,” where ice transitions from resting on the seafloor to floating as an ice shelf. As the relatively warm sea works its way below the protective shelves, it thins them from below, shifting the grounding line inland. The floating shelves fragment and break away. The upstream glaciers, now without their buttressing support, flow faster toward the sea. Meanwhile, seawater intrudes like an advancing army toward thicker ice, which rests on bedrock that slopes inward toward the bowl-like center of the continent.

“There’s a very serious message here,” said Hilmar Gudmundsson , a glaciologist at Northumbria University: As the grounding line marches inland toward ever-thicker ice in a process called marine ice sheet instability, “you will have a very sharp increase in global sea level, and it will happen very quickly.”

In 2002, scientists got a live view of how that process may play out. The Larsen B ice shelf, a floating mass off the Antarctic Peninsula roughly the size of Rhode Island, broke apart in just over a month, stunning scientists. Pooling surface meltwater had forced open cracks — a process called hydrofracturing — which splintered the shelf, the only barrier for the glaciers behind it. The glaciers began flowing seaward up to eight times faster . One of these, Crane Glacier, lost its cliff edge in a series of collapses over the course of 2003, causing it to shrink rapidly. What if something similar happened to far larger glaciers on the coast of West Antarctica, like Thwaites and Pine Island?

In 2002, scientists watched with amazement as the Larsen B ice shelf collapsed in just over a month. At the start of this series of NASA satellite images, pools of meltwater that contributed to the fracturing of the ice shelf are visible as parallel blue lines. The shelf soon disintegrated into a blue-tinged mélange of slush and icebergs. This ice debris field largely melted the following summer and began to drift away with the currents.

NASA Earth Observatory

In the years that followed, studies of ancient shorelines revealed a stunning sensitivity in the Earth system: It appeared that epochs only slightly warmer than today featured seas 6 to 9 meters above present-day levels.

In response, glaciologists Robert DeConto and David Pollard developed a bold new theory of ice sheet collapse. They created a computer simulation based on Larsen B’s breakup and Greenland’s calving glaciers that was also calibrated to the geologic past — projecting future melt that matched expectations derived from ancient sea levels.

Their 2016 study outlined a scenario of almost unimaginably quick ice loss and sea-level rise. In a process called marine ice cliff instability (MICI), cliffs taller than 90 meters at the edges of glaciers become unstable and collapse, exposing ever-thicker ice in a chain reaction that accelerates retreat. The model suggested that ice from Antarctica alone — before any additions from Greenland, mountain glaciers or thermal expansion — could raise the seas by more than a meter by 2100.

In a 2021 update that incorporated additional factors into the simulations, DeConto and colleagues revised that estimate sharply downward, projecting less than 40 centimeters of sea-level rise by the century’s end under high-emission scenarios. Yet even as the numbers have shifted, DeConto remains convinced of the MICI concept. “It’s founded on super basic physical and glaciological principles that are pretty undeniable,” he said.

Mechanisms to Slow Retreat

After the 2016 study, the scientific community set out to test whether towering ice cliffs really could undergo runaway collapse. Many soon found reasons for doubt.

Few dispute the basic physics: If ice shelves like Larsen B collapse quickly and expose tall-enough cliffs on the glaciers behind them, those cliffs will indeed buckle under their own weight. “There’s a reason why skyscrapers are only so tall,” said Jeremy Bassis , a glaciologist and expert in fracture mechanics at the University of Michigan. However, critics argue that runaway cliff collapse hasn’t been seen in nature, and there might be good reasons why not.

“Yes, ice breaks off if you expose tall cliffs, but you have two stabilizing factors,” said Mathieu Morlighem , a glaciologist at Dartmouth College who led a 2024 study that identified these factors. First, as newly exposed glacier cliffs topple, the ice behind stretches and thins. As this happens, rapidly, “your ice cliff is going to be less of a tall cliff,” Morlighem said. Second, the flowing glacier brings more ice forward to replace what breaks off, slowing the cliff’s inland retreat and making a chain reaction of cliff toppling less likely.

A satellite image showing the cliff edge of an ice shelf that floats in the dark ocean.

The Thwaites ice shelf in West Antarctica is the floating extension of the Thwaites Glacier, which drains a large portion of the West Antarctic Ice Sheet. The ice shelf is thinning due to melting by warm ocean water below.

NASA

Another study challenging the MICI scenario noted that breaking ice also tends to form a mélange, a dense, jumbled slurry of icebergs and sea ice. This frozen slurry can act as a retaining wall, at least temporarily stabilizing the cliffs against collapse.

The bedrock beneath the ice might also be a key player. “The solid Earth is having much bigger impacts on our understanding of sea-level change than we ever expected,” said Frederick Richards , a geodynamicist at Imperial College London. Scientists have long recognized that when glaciers melt, the land rebounds like a mattress relieved of weight. But this rebound has been mostly dismissed as too sluggish to matter for several centuries. Now, high-precision GPS and other geophysical data reveal rebound occurring over decades, even years.

Whether that’s good or bad depends on how quickly ice retreats. If it goes at a modest clip, the bedrock lifts the ice, reducing the amount of water that can lap away at it. But if retreat happens quickly enough through something like runaway cliff collapse, the Earth can’t keep up. A 2024 study showed that the bedrock still rises, but in that scenario it pushes meltwater into the ocean. “You’re actually getting more sea-level rise,” Richards said. “You’re pushing all this water out of a bowl underneath West Antarctica and into the global ocean system.”

Earth’s restlessness also affects models of ancient sea-level rise. In a 2023 study , Richards and colleagues found that Australia’s 3-million-year-old Pliocene shorelines had ridden the slow heave and sigh of Earth’s mantle, and that accounting for that vertical motion resulted in lower estimates for ancient sea levels. This matters, according to Richards, because the revised record is a better match for more conservative ice retreat models. “Hold on, guys,” he said. “We have to be a little bit careful. [Ancient] sea-level estimates might be overestimates, and therefore we might be overestimating how sensitive the ice sheets are.”

DeConto points to the Larsen B breakup and the crumbling of Greenland’s Jakobshavn Glacier as evidence to the contrary. Once Larsen B stopped holding back the Crane Glacier, he says, ice began breaking away faster than the glacier could replenish it. That is “really strong evidence that fracture can outpace flow.”

From Past to Future

“When I started my career, the question was whether Antarctica was growing or shrinking,” said Ted Scambos , a glaciologist at the University of Colorado, Boulder. The IPCC long held that the ice sheet would remain relatively stable through the 21st century, on the logic that rising temperatures would bring more snow, offsetting melt.

That assumption collapsed along with Larsen B in the early 2000s, and scientists soon came to a consensus that ice loss was well underway. Satellite observations revealed that glaciers along the Amundsen Sea, including Pine Island and Thwaites, were flowing faster than in previous decades. The ice sheet was not in balance. By the time NASA called the 2014 press conference, it was clear that many of West Antarctica’s enormous glaciers had been retreating steadily since the 1990s.

A flooded coastal neighborhood.

The aftermath of Hurricane Florence in Myrtle Beach, South Carolina in 2018. Worldwide, some 230 million people live less than a meter above sea level, and 1 billion people are within 10 meters of sea level.

National Guard/Alamy

“It was the first time we had enough observations to say, hey, look, these grounding lines have been retreating year after year,” said Morlighem, a co-author on one of the studies presented at the press conference. This steady loss signaled that the glaciers would inevitably disappear. “In theory, if we turn off melt, we can stop it,” he noted. “But there’s absolutely zero chance we can do that.”

While the conversation has centered on how the sea will lap away at the ice shelves, some scientists are increasingly concerned about what’s happening up top, as warming air melts the ice sheet’s surface. Nicholas Golledge , a glaciologist at Victoria University of Wellington, sees West Antarctica today as transitioning to the status of Greenland: Most of Greenland’s marine-vulnerable ice has already vanished, and surface melt dominates. That process, Golledge believes, may soon play a bigger role in Antarctica than most models assume.

Pooling meltwater, for example, contributed to the Larsen B collapse. As the water trickles into crevasses, it lubricates the bedrock and sediments below, making everything more slippery. The Columbia University glaciologist Jonny Kingslake says these processes are oversimplified or omitted in numerical simulations. “If you ignore hydrology change, you are underestimating retreat,” he said.

Indeed, a 2020 study found that meltwater trickling into Antarctica’s ice shelves could infiltrate cracks and force them open, a precursor to marine ice cliff instability that DeConto and colleagues envisioned.

Depending on future emissions, the IPCC now projects an average sea-level rise of half a meter to 1 meter by 2100, a total that includes all melt sources and the expansion of warming water. The MICI process, if correct, could accelerate Antarctica’s contribution enough to double that overall rise. “There’s deep uncertainty around some of these processes,” said Robert Kopp , a climate scientist and science policy expert at Rutgers University. “The one thing we do know is that the more carbon dioxide we put into the atmosphere, the greater the risk.”

In Bassis’ view, “Whether it’s with marine ice cliff instability or marine ice sheet instability, it’s a bit of a distraction. By 2100, we will be talking about a coastline radically different than what I grew up with.”

Next article

How the Brain Moves From Waking Life to Sleep (and Back Again)

Microsoft warns of Windows smart card auth issues after October updates

Bleeping Computer
www.bleepingcomputer.com
2025-10-20 15:21:54
Microsoft says the October 2025 Windows security updates are causing smart card authentication and certificate issues due to a change designed to strengthen the Windows Cryptographic Services. [...]...
Original Article

Windows

Microsoft says the October 2025 Windows security updates are causing smart card authentication and certificate issues due to a change designed to strengthen the Windows Cryptographic Services.

This known issue impacts all Windows 10, Windows 11, and Windows Server releases, including the latest versions designated for broad deployment.

Affected users may observe various symptoms, from the inability to sign documents and failures in applications that use certificate-based authentication to smart cards not being recognized as CSP providers (Cryptographic Service Provider) in 32-bit apps.

They can also see "invalid provider type specified" and "CryptAcquireCertificatePrivateKey error." error messages.

"This issue is linked to a recent Windows security improvement to use KSP (Key Storage Provider) instead of CSP (Cryptographic Service Provider) for RSA-based smart card certificates to improve cryptography," Microsoft said .

"You can detect if your smart card will be affected by this issue if you observe the presence of Event ID 624 in the System event logs for the Smart Card Service prior to installing the October 2025 Windows security update."

As the company explained, this known issues occurs because this month's security updates are automatically enabling by default a security fix designed to address a security feature bypass vulnerability ( CVE-2024-30098 ) in the Windows Cryptographic Services, built-in Windows service that handles security-related and cryptographic operations.

This fix is enabled by setting the DisableCapiOverrideForRSA registry key value to 1 to isolate cryptographic operations from the Smart Card implementation and block attackers from creating a SHA1 hash collision to bypass digital signatures on vulnerable systems.

Those who are experiencing authentication problems can manually resolve it by disabling the DisableCapiOverrideForRSA registry key using the following procedure:

  1. Open Registry Editor. Press Win + R, type regedit, and press Enter. If prompted by User Account Control, click Yes.
  2. Navigate to the subkey. ​Go to: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Cryptography\Calais.
  3. Edit the key and set the value. Inside Calais, check if key DisableCapiOverrideForRSA exists. Double-click DisableCapiOverrideForRSA. In Value date, enter: 0.
  4. Close and restart. ​Close Registry Editor. ​Restart the computer for changes to take effect.

However, it's important to note that you should first back up the registry before editing the Windows registry because any errors could lead to system issues.

While this will mitigate the issue, the DisableCapiOverrideForRSA registry key will be removed in April 2026, and Microsoft advised affected users to work with their application vendors to resolve the underlying problem.

On Thursday, Microsoft fixed another known issue breaking IIS websites and HTTP/2 localhost (127.0.0.1) connections after installing recent Windows security updates.

The same day, the company also removed two more compatibility holds preventing users from upgrading their systems to Windows 11 24H2 via Windows Update.

Security updates for Monday

Linux Weekly News
lwn.net
2025-10-20 15:21:12
Security updates have been issued by Debian (imagemagick, incus, lxd, pgagent, svgpp, and sysstat), Fedora (chromium, complyctl, fetchmail, firefox, mbedtls, mingw-binutils, mingw-python3, mingw-qt5-qtsvg, mingw-qt6-qtsvg, python3.10, python3.11, python3.12, python3.9, runc, and suricata), Mageia (e...
Original Article
Dist. ID Release Package Date
Debian DLA-4339-1 LTS imagemagick 2025-10-19
Debian DSA-6027-1 stable incus 2025-10-17
Debian DSA-6028-1 stable lxd 2025-10-17
Debian DLA-4338-1 LTS pgagent 2025-10-18
Debian DLA-4337-1 LTS svgpp 2025-10-17
Debian DLA-4336-1 LTS sysstat 2025-10-17
Fedora FEDORA-2025-008cb0e5fe F42 chromium 2025-10-20
Fedora FEDORA-2025-ff84d4bc22 F42 complyctl 2025-10-19
Fedora FEDORA-2025-fa8d0fb866 F41 fetchmail 2025-10-18
Fedora FEDORA-2025-ab3c40c1f4 F42 fetchmail 2025-10-18
Fedora FEDORA-2025-64d57de006 F42 firefox 2025-10-18
Fedora FEDORA-2025-7c8f6f12d1 F42 mbedtls 2025-10-20
Fedora FEDORA-2025-10c80b93e9 F42 mingw-binutils 2025-10-19
Fedora FEDORA-2025-d94c21c98f F41 mingw-python3 2025-10-18
Fedora FEDORA-2025-ccc3e0a219 F42 mingw-python3 2025-10-18
Fedora FEDORA-2025-324c3261b1 F41 mingw-qt5-qtsvg 2025-10-18
Fedora FEDORA-2025-d96ebe72de F42 mingw-qt5-qtsvg 2025-10-18
Fedora FEDORA-2025-324c3261b1 F41 mingw-qt6-qtsvg 2025-10-18
Fedora FEDORA-2025-d96ebe72de F42 mingw-qt6-qtsvg 2025-10-18
Fedora FEDORA-2025-72bbdc5a1e F42 python3.10 2025-10-19
Fedora FEDORA-2025-bcb7f16d26 F42 python3.11 2025-10-19
Fedora FEDORA-2025-489e2f5272 F42 python3.12 2025-10-20
Fedora FEDORA-2025-1995db5517 F42 python3.9 2025-10-19
Fedora FEDORA-2025-c2fa2eb17c F41 runc 2025-10-18
Fedora FEDORA-2025-c4d00e29b7 F42 runc 2025-10-18
Fedora FEDORA-2025-806d377171 F42 suricata 2025-10-19
Mageia MGASA-2025-0240 9 expat 2025-10-18
Red Hat RHSA-2025:18154-01 EL10 firefox 2025-10-20
Red Hat RHSA-2025:18285-01 EL8 firefox 2025-10-20
Red Hat RHSA-2025:18155-01 EL9 firefox 2025-10-20
Red Hat RHSA-2025:18318-01 EL10 kernel 2025-10-20
Red Hat RHSA-2025:18281-01 EL9 kernel 2025-10-20
Red Hat RHSA-2025:9462-01 EL9 qt5-qtbase 2025-10-20
Red Hat RHSA-2025:11841-01 EL9.4 qt5-qtbase 2025-10-20
Red Hat RHSA-2025:9486-01 EL10 qt6-qtbase 2025-10-20
Slackware SSA:2025-291-01 stunnel 2025-10-18
SUSE openSUSE-SU-2025:0398-1 osB15 chromium 2025-10-17
SUSE openSUSE-SU-2025:0399-1 osB15 chromium 2025-10-17
SUSE openSUSE-SU-2025:0400-1 osB15 coredns 2025-10-19
SUSE openSUSE-SU-2025:0401-1 osB15 coredns 2025-10-19
SUSE openSUSE-SU-2025:15649-1 TW ctdb 2025-10-19
SUSE openSUSE-SU-2025:15645-1 TW firefox 2025-10-19
SUSE SUSE-SU-2025:03633-1 SLE15 kernel 2025-10-17
SUSE SUSE-SU-2025:03634-1 SLE15 oS15.6 kernel 2025-10-17
SUSE openSUSE-SU-2025:15641-1 TW libexslt0 2025-10-17
SUSE openSUSE-SU-2025:15648-1 TW libpoppler-cpp2 2025-10-19
SUSE openSUSE-SU-2025:15647-1 TW ollama 2025-10-19
SUSE SUSE-SU-2025:03635-1 SLE15 openssl-1_1 2025-10-17
SUSE SUSE-SU-2025:02970-2 MP4.3 SLE15 SES7.1 pam 2025-10-17
SUSE SUSE-SU-2025:3677-1 MP4.2 SLE15 SLE-m5.2 SES7.1 oS15.3 samba 2025-10-20
SUSE SUSE-SU-2025:3676-1 SLE15 samba 2025-10-20
SUSE openSUSE-SU-2025:15646-1 TW thunderbird 2025-10-19
Ubuntu USN-7826-2 14.04 16.04 18.04 20.04 samba 2025-10-20

From Mixtape to Pro Jank Footy: the most exciting Australian indie games at SXSW Sydney 2025

Guardian
www.theguardian.com
2025-10-20 15:00:44
Hyperkinetic shooters, gorgeous animal adventures and even a charming puzzler where you play a postie: Australia’s developers are punching above their weightGet our weekend culture and lifestyle emailThere’s no escaping the fact that SXSW Sydney – Australia’s iteration of Austin’s tech, music and fi...
Original Article

T here’s no escaping the fact that SXSW Sydney – Australia’s iteration of Austin’s tech, music and film event, now in its third year – is absolutely beset by brands. In Tumbalong park on Saturday, families who had arrived for a free concert for kids meandered around the garish yellow CommBank Tour zone, as a line wound its way into the giant L’Oréal tent. But metres away at the International Convention Centre, inside the halls dedicated to gaming, the corporate influence was more muted.

With the global gaming industry downturn continuing with no end in sight, big studios are continuing to keep a wary distance from Australia, with its high cost of living – meaning that even at a conference that screams synergistic brand-o-rama from every pore, indie developers took centre stage. Larger teams were rare; most projects on show were the results of solo developers or couples. And, much like Australia’s gaming scene more broadly, many of them were punching above their weight.

A still from Mixtape by Beethoven and Dinosaur published by Annapurna Interactive, to be released 2025
A still from the upcoming game Mixtape by Melbourne developers Beethoven and Dinosaur, and published by Annapurna Interactive. Photograph: Beethoven and Dinosaur/Annapurna Interactive

The two Australian games with the biggest backing at the showcase still felt relatively scrappy. SXSW’s game of the year, Mixtape , from veteran developers Beethoven and Dinosaur (who put out the enjoyable psychedelic journey The Artful Escape in 2021) and mid-tier US publisher Annapurna Interactive, is a gleeful 90s coming-of-age tale that feels like an Edgar Wright-esque comedy mixed with the animation style of Spider-Man: Into the Spider-Verse.

Set for release later this year, the game is a nostalgic ride through all the best parts of a well-spent youth: skating downhill on a lazy afternoon as Devo blasts off a dinky cassette tape; headbanging to Silverchair on a road trip; sneaking booze into a party, and escaping the cops in a barely controllable shopping trolley. My 20-minute demo had me grinning ear to ear, with at least one absolutely incredible joke involving controllable tongues that made me burst with laughter.

Sign up for the fun stuff with our rundown of must-reads, pop culture and tips for the weekend, every Saturday morning

Pro Jank Footy , meanwhile, is the brainchild of the Danger 5 co-creator David Ashby and collaborator Tyler Roach, backed by film distributor Umbrella, making their tentative foray into gaming. It’s a ridiculous, arcadey take on AFL that harkens back to SNES-era sports games; after each goal, the losing team picks one of three absurd powerups (seagull attacks, giant players, resetting the score back to nil-all, etc) before play resumes. The live demonstration in Tumbalong park – commentated by Broden Kelly of Aunty Donna, who also appears in the game – drew a small crowd of mostly families, but the brave few who risked humiliation on stage in return for a neat scarf seemed to be having a pretty good time.

Broden Kelly of Aunty Donna commentates on a live stage demo of Pro Jank Footy at SXSW Sydney
Broden Kelly of Aunty Donna commentates on a live stage demo of Pro Jank Footy at SXSW Sydney. Photograph: Samuel Graves/SXSW Sydney

But there were highlights to be found in games with even less-established backers. One of my favourite experiences on the floor was the hyperkinetic shooter Hyper Primate , which casts you as an ape with an assortment of cobbled-together wooden guns, blasting other animals in search of giant floating bananas. The game has an unashamedly amateur look, but the movement mechanics felt absolutely sublime as I slid, dashed and jumped around a jungle arena at dizzying speeds.

Another gameplay triumph was solo effort Huedini , a frantic top-down game where you rapidly match colours to avoid or absorb enemies and other obstacles; even after a very brief session, I could sense the game’s simple-seeming mechanics had the potential to build to a brain-melting complexity.

The Victorian government’s strong support for indie developers continues to pay off, with multiple projects (including Huedini) funded by VicScreen; my other picks from that cohort were Way to the Woods , a gorgeous adventure game following a deer and fawn with light-related powers in an overgrown cityscape, and the charming postie puzzle game Letters to Arralla , which tasks you with deciphering picture-based addresses as you get more familiar with the gentle island community (also, everybody is a root vegetable). There was a surprisingly strong contingent from South Australia, too, from Pro Jank Footy to the intensely satisfying minimalistic cube-rolling puzzle game Toya , which proved a calming oasis in the midst of a busy show floor.

skip past newsletter promotion
A still from Way to the Woods, from Melbourne developers Onepixel.dog
A still from Way to the Woods, from Melbourne developers Onepixel.dog. Photograph: Onepixel.dog

Even the international guests mostly fit the showcase’s grassroots spirit (with the exception of one game, Sony’s Ghost of Yōtei , which wasn’t playable but did have a well-attended talk). Strong projects from small teams included A Week in the Life of Asocial Giraffe , a point-and-click social anxiety puzzler out of the US; Abyss X Zero , a 3D Zelda-style action game from Brazil; and Crescent County , a pastel-coloured broom racing simulator from a UK-based developer.

A common complaint about previous iterations of SXSW Sydney was the price tag, with even the cheapest week-long access pass for games costing more than $100. This year, entry to the games exhibition was free on the final day, which meant there were families and kids everywhere. Party games like Cow Chess (a project from a Sydney-based duo which involved neither cows nor chess and felt like a cross between Worms and Super Smash Bros by way of Terry Gilliam) and Chained Beasts (a co-op Roman gladiator game, except every player is bound together by chains) benefited hugely from the madcap energy of a sea of children running gleefully for any game they could get their hands on.

There were, frankly, too many indie games to play in a day – a nice problem to have. Other parts of SXSW Sydney might be consumed by brand activations, but at least for video games, the indie spirit lives on.

‘Every kind of creative discipline is in danger’: Lincoln Lawyer author on the dangers of AI

Guardian
www.theguardian.com
2025-10-20 15:00:42
Michael Connelly says tech is moving so fast that he feared his new novel would seem ‘archaic’ before it was published He is one of the most prolific writers in publishing, averaging more than a novel a year. But even Michael Connelly, the author of the bestselling Lincoln Lawyer series, feared he m...
Original Article

H e is one of the most prolific writers in publishing, averaging more than a novel a year. But even Michael Connelly, the author of the bestselling Lincoln Lawyer series, feared he might fall behind when writing about AI.

Connelly’s eighth novel in the series, to be released on Tuesday, centres on a lawsuit against an AI company whose chatbot told a 16-year-old boy that it was OK for him to kill his ex-girlfriend for being unfaithful.

But as he was writing, he witnessed the technology altering the way the world worked so rapidly that he feared his plot might become out of date.

“You don’t have to lick your finger and hold it up to the wind to know that AI is a massive change that’s coming to science, culture, medicine, everything,” he said. “It’s going to affect all parts of our lives.

“But it’s kind of the wild west; there’s no government oversight. AI is moving so fast that I even thought my book might be archaic by the time it got published.”

The Lincoln Lawyer novels are a series of Los Angeles-based thrillers in which the defence attorney Mickey Haller works out of his Lincoln car. They have been adapted into a 2011 film starring Matthew McConaughey, as well as a Netflix series.

Matthew McConaughey in The Lincoln Lawyer.
Matthew McConaughey in The Lincoln Lawyer. Photograph: Moviestore/Rex Shutterstock

Not for the first time in the series, The Proving Ground took some inspiration from real-world events.

Connelly said: “One was this case in Orlando, where a teenager committed suicide, allegedly at the urging of a chatbot . Before that there was a case in England, where a person with some mental health issues was encouraged [by a chatbot] to jump the wall at Windsor Palace with a bow and arrow to try to find the Queen.”

On the themes of the novel, he added: “Is free speech a human right or mechanical right? In the Orlando case, the judge said he wouldn’t grant a machine human rights. But it’s an interesting question. Is AI going to reach a point that it shares the rights that human beings have?”

Connelly, 69, is one of the world’s leading crime writers, his books having topped bestseller charts and sold more than 89m copies. He is also known for the Harry Bosch series, which has been made into a TV show by Amazon. (In Connelly’s fictional universe, Haller and Bosch are half-brothers.)

The writer has his own battles with AI. He is part of a collective of authors, including Jonathan Franzen, Jodi Picoult and John Grisham, suing OpenAI for copyright infringement .

“The Author’s Guild came to me and said: ‘Do you know that all your books were fed into the giant maw of OpenAI’s training of its chatbot?’” Connelly said. “I didn’t. If we let that go by, it will put every publisher out of business. Authors will have no protections on their creative work. The purpose of the lawsuit is to have proper rules put in place for all levels of use.”

He cited chess champion Garry Kasparov’s loss to IBM’s Deep Blue in 1997 as “one of the benchmarks that led us” to this moment. When asked if authors could go the way of grandmasters, he said: “It could happen, but I don’t think it’d be an improved world.”

skip past newsletter promotion

He added: “Every kind of creative discipline is in danger. Even actors. There’s now these amazing deepfakes. I live out here in LA, and that’s a big concern in the entertainment industry.

“I always come back to the word soulless,” Connelly said. “You know it when you see it, there’s something missing.”

There has been controversy after an AI talent studio unveiled its new “AI actor” Tilly Norwood last month, with unions and actors condemning the move.

Tilly Norwood, an AI-generated ‘actress’, smiles in an AI-generated image
Tilly Norwood, an AI-generated ‘actor’ seen in an AI-genererated image. Illustration: Reuters

Connelly has pledged $1m (£746m) to combat the wave of book bans sweeping through his home state of Florida . He said he felt moved to do something after he learned that Harper Lee’s To Kill A Mockingbird, which had been influential to him, was temporarily removed from classrooms in Palm Beach County.

“I had to read that book to be what I am today. I would have never written a Lincoln Lawyer without it,” he said. He was also struck when Stephen Chbosky’s coming of age novel The Perks of Being a Wallflower, “which meant a lot to my daughter”, received a ban.

He and his wife, Linda McCaleb, help fund PEN America’s Miami office countering book bans . “It’s run by a lawyer who then tries to step in, usually by filing injunctions against school boards,” he said. “I don’t believe anyone has any right to tell some other kid they can’t read something, to usurp another parent’s oversight of their children.”

Find hidden malicious OAuth apps in Microsoft 365 using Cazadora

Bleeping Computer
www.bleepingcomputer.com
2025-10-20 15:00:10
Malicious OAuth apps can hide inside Microsoft 365 tenants. Huntress Labs' Cazadora script helps uncover rogue apps before they lead to a breach. Dive deeper in their Tradecraft Tuesday sessions. [...]...
Original Article

Identity Attacks

Author: Matt Kiely, Principal Security Researcher at Huntress Labs

Tl;dr: If you manage even one Microsoft 365 tenant, it’s time to audit your OAuth apps. Statistically speaking, there’s a strong chance a malicious app is lurking in your environment.

I wrote an open source script that can help you do this: https://github.com/HuskyHacks/cazadora

Specifically, look in your Enterprise Applications and Application Registrations for:

  • Apps named after a user account

  • Apps named “Test” or “Test App” or something similar

  • Apps named after the tenant domain name where they are installed

  • Apps using arbitrary strings as the designated names, like apps with non-alphanumeric names (i.e. “........”)

  • Anomalous reply URLs, specifically including a local loopback URL with port 7823 [“http://localhost:7823/access/”]

Seriously, go audit your apps! The article will be here when you get back.

If you are interested in nerdy threat intel stuff, read on.

Picture this: it’s a beautiful Sunday morning and you’re looking forward to a well-deserved day of rest after a rough week. You groggily stumble into your kitchen and prepare your caffeinated beverage of choice. The sun is shining. The birds are chirping.  You lean out your window and feel the summer breeze across your brow. You’re at peace for a moment. You’re happy to be alive.

And then you look down and see that on your window sill stands a single, solitary, lone termite.

And at first you think, “Well, it’s just one termite, no big deal.”

And then you think about it for another second. And your blood runs cold because you realize the terrible truth: this is the only termite that you’ve seen, but there’s never just one termite.

Your hopes of relaxation evaporate as you form a plan to tear up your kitchen floorboards.

This is, more or less, the position that myself and other staff at Huntress found themselves in when we started to look at the data about Azure applications and how they are used maliciously in our partner tenants. So come along with us for a wild ride as we rip up the kitchen floorboards and uncover exactly how big the termite nest really is!

OAuth Application Attacks

Since releasing the Unwanted Access capability , the Huntress SOC has been busy racking up the count of deterred identity attacks. We built the capability to target key areas of initial access in the identity space, including credential theft, token theft, adversary in the middle (AitM) attacks, and location/ VPN anomaly logins.

According to the data, the capability has put a massive dent into threat actor activity and we’re now squashing anywhere from three to six thousand cases of initial access every month.

But true to the art of cyber defense, it does us no good to sit back on our laurels. Hackers are like the zombies from I Am Legend .

Why? Well, both of them burn when exposed to direct sunlight. But more importantly, both will evolve to the point where your current defenses are meaningless after enough time has passed. So onward we press to find new avenues of identifying and breaking their attack chains.

One area of pressing research in particular is the concept of the Rogue App. Cloud applications are a core part of the user experience and give developers a powerful toolkit to build and scale. But as we’ve come to learn recently, the same benefits that make cloud applications so attractive for administrators and app developers also make them an attractive option for cybercriminals.

This seemed like the next best place to start looking for attacks that managed to slip by our systems of combating initial access.

So the team set out with some research questions to answer. How do OAuth applications work in Azure? How can they be leveraged during attacks? What makes them so powerful and useful to cybercriminals? What’s the best way to hunt these rogue apps down? And the final question which instilled a sense of dread in me: how many are out there?

Figure 1: the current list of my testbed’s Application Registrations. With any luck, yours won’t have several apps called “not a backdoor”
Figure 1: the current list of my testbed’s Application Registrations. With any luck, yours won’t have several apps called “not a backdoor”

In searching for the answers to these questions, we ended up getting way more than we bargained for.

The Systems at Play: How OAuth Apps Work

Hold onto your butts, because here’s a crash course in Azure applications and how they work. I’ll start by saying that this system is complicated and weird.

One resource that helped me with my understanding is John Savill’s Technical Training lecture on Azure App Registrations, Enterpriser Apps, and Service Principals . And for what it’s worth, John is a certified master of Azure administration and the thumbnail of this video is him looking concerned at having to explain the concept.

So, don’t worry about understanding the system all the way down to the nitty-gritty details. For the purposes of this blog, I’ll explain the concepts that are directly relevant to how apps can be used maliciously.

Apps in the cloud are much like apps on your phone or on your PC. They’re modular programs designed to do something useful. Apps in Azure hook into Entra ID so your M365 account could, for example, use a desktop client that organizes your cloud account’s emails.

Azure splits applications into two categories: Enterprise Applications and Application Registrations . I find this naming convention extremely confusing and it took a while to sort out which one was which in my mind, but the main difference can be summarized by, “Did you build the app, or are you using an app that someone else built?”

Enterprise applications are apps that are built, maintained, and published by someone else in another tenant that you are now using in your own tenant.

Application Registrations are apps that you are building, maintaining, and publishing in your own tenant for other people to use. In other words, an Application Registration is a bit like a template for an app, while an Enterprise Application is an instance of an app that someone is using.

A developer will ostensibly write the code for the app and then build an Application Registration in their own tenant before publishing it for public or internal use.

Now, let’s say some enterprising administrator wants to install your app in their tenant. Maybe they found your website and think that your app looks useful. Apps can’t just install themselves wherever they please. Could you imagine? It would be chaos.

So there must be some system of authentication (authN) and authorization (authZ) before someone can install an app in their tenant. This usually goes something like this:

  • The user will request to install the app. While doing this, the user authenticates with their username, password, and MFA to ensure that the app is being installed by a trusted party.

  • The app has a set of permissions that allows it to do whatever it was designed to do. For example, the permissions might allow the app to access the Graph API to retrieve the user’s emails. The app presents a prompt for the user to consent to the permissions .

Figure 2: The application authentication and authorization process. The app requiring consent to install a service principal into the user’s tenant.
Figure 2: The application authentication and authorization process. The app requiring consent to install a service principal into the user’s tenant.
Figure 3: The application authentication and authorization process. The user happily consenting to the required permissions.
Figure 3: The application authentication and authorization process. The user happily consenting to the required permissions.
  • The user consents to the permissions and authorizes the app to access resources based on those permissions. With authentication and authorization now sorted, the app can do what it was designed to do.

  • A service principal is now installed in the user’s tenant that acts as an account for this app. It keeps track of the consented permissions and the identities that have consented to the app. The service account acts on behalf of the app while the app remains installed in the tenant.

Figure 4: the authentication and authorization process. With both sorted out, the app installs a service principal in the user’s tenant.
Figure 4: the authentication and authorization process. With both sorted out, the app installs a service principal in the user’s tenant.

If you skipped the previous section because it was boring, hey I can’t blame you. But the takeaways are as follows:

  • Apps can be built in-house (Application Registrations) or installed from another tenant (Enterprise Applications).

  • Apps can have delegated access on behalf of one or more users in a tenant to access resources.

  • Azure apps use the built-in system of authentication and authorization to function.

  • Any time an app is installed somewhere, a service principal is installed in that tenant that functions as the working account for that application.

  • And finally, Azure’s default configuration allows any user to install any application and consent to permissions specific to their own resource access without requiring review of the app!

What we have here is a fantastic set of primitives for exploitation.

Why? Anyone who has spent time administering a large, complicated system of authentication and authorization will tell you that attackers love to find the unpatchable cracks of the system to perform exploitation.

Any red teamer who has run a Kerberoasting attack will tell you that the best exploitation primitives are features, not bugs, and therefore can’t be patched. Apps in Azure follow suit to this axiom—they’re part of the ecosystem, for better or for worse.

Their customizability gives attackers plenty of options for fitting the app to the type of attack they want to execute. And they largely fly under the radar given how obtuse this whole system can be.

When you use apps in Azure, evil or otherwise, you’re remaining entirely within the legitimate scaffolding that allows apps to function. To threat actors, that is an unbelievably powerful system to play around in. Let’s find out exactly how useful it can be.

Abuse Huntress' Identity Security Assessment

Go from thinking to knowing you’re secure with an Identity Security Assessment.

Start a Managed ITDR trial to uncover rogue apps, suspicious logins, hidden inbox rules, and risky access activity in your Microsoft 365 tenant. Get a customized Identity Security Assessment, right to your inbox.

Worst case? We find something. Best case? You know.

Abuse our Assessment

Traitorware: Good Apps Gone Rogue

A crowbar is an incredibly useful tool. You can use it to open crates, pry open doors if they’re stuck, and if you’re lucky, even escape from a massive underground research facility in the deserts of New Mexico. If you got that last reference, you pass the vibe check.

A crowbar alone is neither good nor bad. It’s useful in many different contexts. And those contexts define how we see the crowbar as a tool. So whether you’re opening a crate of supplies or breaking into someone’s house, the crowbar remains the same. You can’t say all crowbars are evil all the time, of course. But most of the time you see someone breaking into a house, they have a crowbar!

In the world of Azure apps, the first category of apps that we’re hunting is a lot like a crowbar. We call this category Traitorware .

The term refers to apps that are not designed explicitly for evil purposes, but just happen to be extremely useful to hackers, cybercriminals, and shady characters. We hunt for apps that are overwhelmingly used in attacks, even if those apps are themselves not evil.

The closest endpoint security analog to this would be somewhere between Living Off the Land and Bring Your Own tools. This type of attack is most similar to Remote Monitoring and Management (RMM) installation during an endpoint intrusion—the threat actor brings a legitimate tool to the fight which happens to be useful for their shady purposes.

Figure 5: Traitorware, aka Good Apps Gone Rogue. Every app has a Jekyll and Hyde scenario. 
Figure 5: Traitorware, aka Good Apps Gone Rogue. Every app has a Jekyll and Hyde scenario.

At the time of writing this post, there are five such apps that we consider to be smoking guns. Statistically speaking, these five apps are favored by attackers. With a sample size of about 1.5k reported instances and an average false positive rate of 1.8%, the data supports that detecting these apps will uncover far more hacking activity than legitimate activity.

The full list of Traitorware apps that we’ve compiled so far and more detail about how they are often abused is available at our open source repository of Rogue Apps .

If you’ve seen apps abused in similar ways, we’d love to hear about it! Please consider opening a PR and contributing to the knowledge base so we can better define and track this interesting emerging attack surface.

Stealthware: Farm-to-Table Evil Apps

On the other hand, the Azure app ecosystem also gives hackers the tools to build apps from the ground up that are designed to wreak havoc.

I’m talking about farm-to-table, small-batch, home-grown, ethically-sourced, free-range, dolphin-safe, artisanal, hand-crafted EVIL APPS . Made by hacker hands and delivered straight to your tenant.

The long-form name for these attacks is “ OAuth Illicit Consent Grant Attacks ” but that’s like calling a dog Canis Lupus . Only nerds use scientific nomenclature, so you can be a cool nerd like me and call them Stealthware .

The tricky part about hunting Stealthware apps is that no two of them are alike. You can’t find them by looking for a specific app name. Each app is custom made and tailored to the type of exploitation that the hacker intends to carry out.

I teach how to make one for education purposes in an episode of Tradecraft Tuesday , if you’re interested in that kind of thing.

Figure 6: Stealthware, the imposter among Azure applications. Built to wreak havoc, built to blend in.
Figure 6: Stealthware, the imposter among Azure applications. Built to wreak havoc, built to blend in.

The Hunt in Motion

With our threat model ironed out, it’s time to dive into the data and figure out the answer to the question: “Aside from that one termite, how many more are out there?” To do this, myself and Staff Threat Ops Developer Christina Parry set out on a data collection journey.

We enumerated over 8000 tenants across multiple verticals and industries, collected all of their Enterprise Applications and App Registrations, ran a whole bunch of analyses against the data, and presented our findings at BSidesNYC in October 2024 . The long and short of it is this:

  • We found evidence of both Traitorware and Stealthware in the surveyed tenants.

  • About 10% of the surveyed tenants had at least one of the Traitorware apps installed.

  • Using a combination of global rarity, the number of users assigned per app, and the app’s granted permissions is an effective way to hunt down Stealthware.

  • Apps with less than 1% global prevalence across the surveyed tenants that had delegated access to a single user were more likely to be Stealthware. The addition of classifying OAuth permissions into groups based on what they allowed hackers to do during intrusions and detecting rare apps that also had powerful permissions raised the hit rate significantly.

Following our presentation, we went to work building the systems to expand the data and capture the data for all Huntress partner tenants. After re-analyzing and tweaking our analyses, we found that the finding regarding Traitorware applications remained consistent at about 10%.

After publishing our findings, the Huntress SOC also went to work. Using the new telemetry, they formed a hunting hypothesis and identified over 500 instances of Stealthware applications across all partner tenants.

I mentioned earlier that you can’t hunt for Stealthware by searching for an application name and the results proved that point. This is just a sample of some of the names of the confirmed true positive apps that we found:

Figure 7: Sample of the whacky malicious app names

With a few hypotheses now proven, we were finally in a position to make the call. OAuth App Attacks are not only present in the Huntress partner tenancy, but they are way more prevalent than we anticipated. Some of these apps had been around for years by the time we uncovered them.

And if you take anything from this article, let it be this: statistically speaking, there’s a good chance that your own tenant is infected with one of these apps .

Introducing: Cazadora

If you’ve made it this far and are now thinking “wow, maybe I should go audit my apps,” great! That means that I’ve sufficiently demonstrated how much attack potential exists in the Azure app ecosystem.

To speed up the process of educating the community and giving Azure admins a fighting chance to clear out the termite nests, I built and released an open source tool that enumerates your tenant’s apps and hunts through them to find any smoking guns.

Figure 8: The output of Cazadora, identifying a few apps that have suspicious characteristics.
Figure 8: The output of Cazadora, identifying a few apps that have suspicious characteristics.

Introducing: Cazadora , a dead-simple Azure app hunting script. Huntress partner or otherwise, anyone can run this script to enumerate and audit your tenant apps against a set of commonly observed tradecraft attributes.

It uses your own user authentication, calls the Graph API, wrangles the data from the API about your tenant’s Enterprise Applications and App Registrations, and runs some hunting logic against the results.

It’s quick and rough around the edges, but the idea here is to empower Azure admins everywhere to get an immediate idea about any smoking gun apps in their tenant.

The script can’t find 100% of evil apps everywhere, of course. And even if the script doesn’t find anything, that does not mean your tenant is safe from malicious apps. But at the very least, it’s a great jump off point for Azure admins to audit their apps and identify anything glaring.

Please see the README in the repo for instructions!  Give it a shot. See what you find, or…

Abuse our ITDR Assessment

Statistically, there’s a good chance we’ll find something. The Huntress Identity Security Assessment provides a clear snapshot of your Microsoft 365 Identity Threat landscape—highlighting license types, rogue apps, suspicious logins, and malicious inbox rules.

If no threats are found, you’ll still gain valuable insights into the key areas we monitor and the threats we hunt for.

Worst case? We uncover risks. Best case? You know you’re secure. Either way, you walk away informed and empowered. Check it out .

Maintain Situational Awareness—Register for Tradecraft Tuesday

Tradecraft Tuesday provides cybersecurity professionals with an in-depth analysis of the latest threat actors, attack vectors, and mitigation strategies.

Each weekly session features technical walkthroughs of recent incidents, comprehensive breakdowns of malware trends, and up-to-date indicators of compromise (IOCs).

Participants gain:

  • Detailed briefings on emerging threat campaigns and ransomware variants

  • Evidence-driven defense methodologies and remediation techniques

  • Direct interaction with Huntress analysts for incident response insights

  • Access to actionable threat intelligence and detection guidance

Advance your defensive posture with real-time intelligence and technical education specifically designed for those responsible for safeguarding their organization’s environment.

Register for Tradecraft Tuesday →

Sponsored and written by Huntress Labs .

Reframe Technical Debt as Software Debt. Treat it like a AAA-rated CDO

Lobsters
www.evalapply.org
2025-10-20 14:57:49
Comments...
Original Article

Reframe Technical Debt as Software Debt. Treat it like a AAA-rated CDO.

[ ↓ toc ] Published: 2022-01-20 Updated: 2023-03-10

I've long struggled with the *Technical* Debt metaphor. It was immediately useful when I first heard it. I still think it is useful, albeit as a starting point. The more I worked with software, the more infuriatingly incomplete it started to feel. So I've reframed it as *Software* Debt, for myself. Here's what I'm thinking.


Contents


Backstory

Some years ago I found myself in a rabbit hole, researching the 2008 financial crisis. It reminded me of other insane stories like Knight Capital, and further back, about how Enron imploded (because Enron India's meltdown was shocking, and destructive. And because a dear friend, in his past life, was on the team at Lehman Bros. that structured financing for Enron India. So come 2008, when Lehman imploded, I got to hear about the hard-chargin' super-leveraged risk-takin' days from someone who was there in the early part of the so-called Dick Fuld era. It was all very fascinating, but I digress…).

Down in the rabbit hole, a slow realization began.

One source of my unease is that I think discussions of Technical Debt don't sufficiently examine the nature of the Risk of the underlying challenge. The other is that the concept skews and pigeonholes the Responsibility part of the underlying challenge.

Note: In this post, I have software organisations in mind, viz. ones that exist mainly because of the software they make and ship (whether priced or gratis).

Framing pigeonholes Responsibility.

I find the Technical part problematic because it is too narrow a context, and that narrow framing leads to real trouble in software orgs.

It inadvertently paints a small set of people as the "owners" of the "debt", which is not untrue, but it is incomplete, and the framing by its construction inadvertently places the rest of the organisation in the role of creditor. The framing further pigeonholes our thinking about risk, and causes us to set up adversarial objectives and communication structures among the organisation's many functions and layers.

Narrow framing is bad because software orgs–especially fast growing ones–are always in a state of churn, conflict internally and with the outside world, and a state of partial failure. Because a running, live software system is as much a fermenting vat full of culture, opinion, future-thinky bets with people constantly dipping in for a taste, as it is bit-flippin' transistors mindlessly following truth tables.

We have since invented other terms like "organisational" debt to prod people to think more holistically. I prefer to say "software debt", and mean it to apply to the whole of any given software organisation, because of how I have come to think of the debt bit of software things (and org things in general).

Sadly, narrow framing abounds, and we end up producing malfunctioning software organisations far too frequently.

And so, far too many learn what it feels like to try and get the big bank to refinance that home loan when the world suddenly became hostile to them, and they were too little, too alone, and too powerless to engineer a central government bail out. The best they (we) can do is to vote for a government that hopefully reforms policy and simplifies tax regimes and does generally smart stuff such that more people come out of poverty, fewer sink back into it, and more people achieve prosperity. Become a "Generative" type of org, in terms of Westrum's typology of organisational cultures.

At the same time, personal responsibility is not waived away. The least we (they) can do is to not be foolish in the first place. Resist those easy temptations. Not bite chunks we can't chew. Not eat what we can't digest.

Say No To (Software) Debts.

Software debt packages risk. We need better mental models of that risk.

Within the frame of "technical" debt, we frequently discuss the "debt" in terms of code quality (cleanliness, understandability, efficiency, maintainability), and architectural quality (the goodness of domain models, core abstractions, application boundaries and interfaces etc.).

This is a sort of human indebtedness, as in, are we being kind to each other, and helping each other be productive? Because if we are productive, then we will get more done with less effort (leverage), be able to ship more, faster (throughput), and respond to market needs more creatively (innovate).

These are undeniably important considerations. But…

  • they are certainly not firewalled off from the rest of the organisation. For example, to a good first-order approximation, we tend to "ship our organisational structure".
  • they are second-order outcomes of a more fundamental kind of thinking, viz. one about risks and associated trade-offs.

So I think it's worth framing a notion of Software Debt, to re-scope the discussion at an organisational level, and to find a better mental model of the risk packaged by the debt.

Software debt risk perception is muddied by personal bias.

Part of my unease, and I suspect yours, stems from how the idea of debt is anchored in our heads.

We struggle with this bias when pricing things. We sell to our wallet. If we are used to buying most things for 10 tokens, we balk at the idea that someone else is fine charging 100 tokens for basically the same things, and that others are fine—delighted, even—to fork over the requested quantity of tokens.

Likewise, the word "debt" is strongly anchored to a personal sense of financial debt; our credit loan cards, home loans, equated monthly installments. Small, familiar ideas. Safe-sounding numbers. A warm feeling absorbed and internalised through delightfully tempting messages of better lives, buttressed by the approval of friends and family when we get that car or house or desirable object.

Given the sheer amount of personal financial debt, our frequency of involvement with it, and the normalisation of it being fine to always be indebted to one or more financiers, I suspect this anchoring bias is widespread. And it clouds our risk perception when thinking about software debt.

Software debt is rooted in complexity. We abhor complexity.

Complexity is to software what mass is to a rocket; the hard limiting factor of growth in any dimension you choose to measure (shipping velocity, headcount, revenue, cash flow, account expansion; anything). This is the sort of thing that demands tree-recursive, networked thinking we are not good at doing explicitly and deliberately. Something that our education actively disables by drilling us to think in simplistic linear terms in which correlation amounts to causation.

So much so that we have a visceral negative reaction to the self-control and effort needed to think hard, think deep, and think persistently with focus, constantly refining, testing, challenging, updating our mental models of reality. You just had a visceral negative reaction simply by reading this description, didn't you?

Software debt is inevitable .

Complexity is inevitable. Thus risk is inevitable. Thus debt is inevitable.

Like rocket mass, the more we scale it, the more we pack in, and the more we make it do, the more complexity we accrue. But also like rocket mass, we want some kinds of complexity; the kind that is at the heart of wherever it is that we aim to go. That is, we want to take on essential risks, but ruthlessly reject non-essential risks.

This is not easy at all, but it is a critical function of people making software, especially because it is so easy to create complexity. Put a network between two single-core computers, and boom, you just made a distributed system fraught with undecidable problems. Add mutable state to your program, and boom, now you have to also remember the past to make sense of the present. Add an extra CPU thread to your two computers and you have a stateful concurrent/parallel networked system on you hands. And now you have to think in non-sequential time about distributed problems with multiple pasts and multiple futures.

Most of us don't have to, because we benefit–often unwittingly–from very generous error budgets and low-impact risks. But make no mistake, someone had to, and someone did , which is why you and I can ride the coattails of risk curves all our lives and be paid handsomely for their troubles.

Software debt always compounds .

In simple terms, all debt reduces down to three key components: A principal amount, a rate of interest, and terms of payment (including repayment period, cadence etc.). The combination of interest and terms price the risk for both parties.

In software terms, we may think of each piece of tech in the stack as raw mass, adding to the principal amount. The more we add, the more we risk, even if the rate of interest remains constant. But really, each decision to add or remove items from any part of the system changes the principal and the rate of interest and the repayment terms.

This alone should cause sleepless nights. Compounding debt grows and grows. Slowly, creepingly at first, and then very fast. And suddenly you lose everything.

Software debt is layered .

Because software parts compose into software "stacks" and hierarchies, and each part mutates/evolves up and down the towers.

Say we only ever have a fixed set of software parts–say one kind of server-side application, backed by one kind of database, serving one kind of client, via one kind of server, on one kind of operating system. Sooner or later, each part is guaranteed to update in-place, and/or do more work, thus forcing a change in their operating environment.

The database gets more data. The application handles more edge cases. The server balances more concurrent load. The OS gets security patches. The clients want to do more so everything accrues more features. On and on, mutating forever, exhibiting accretion, sedimentation, erosion, and tectonic upheavals. Not to mention the parallel layers of brains of the people making decisions about these things; the Top Dog, the Fresh Recruit, and the squishy organisational cake between those two.

Clojure codebase–Introduction and retention of code. Rich Hickey. 2020. A History of Clojure.

See also: The half-life of code & the Ship of Theseus

Software debt is networked .

Because software itself is networked, even inside applications. There are call graphs, object hierarchies, and effects-at-a-distance. These often interact in ways that surprise us. Small local changes can turn into literal chain reactions of events that have stunningly large impacts on the state of the physical world we walk in. The meltdown of Knight Capital stands out starkly as an example of unmitigated, un-hedged software debt.

It goes way beyond in-app networks, of course, because we have come to depend on global software supply chains. These are quite unlike logistical supply chains, as:

  • they demand almost no capital to participate as creator and/or as consumer,
  • they place no barrier on becoming a critical node (aheam, left-pad), and
  • they afford no reaction time when things go bad. Failures and compromises affect the world near-instantaneously, at the speed information travels.

It's insane that we have become habituated to the idea that adding a single library to one's project can pull in tens or even hundreds of transitive dependencies, and that's fine .

I'm writing this in the wake of the aftermath of the disclosure of the log4j zero-day vulnerability. But this is only a recent example of just one kind of networked risk.

With managed services we effectively add one more level to the Inception world of our software organisation. We outsource nice big chunks of supply chain risk management, but we in-source a different risk of depending critically on entities that we do not control and cannot fix if they fail.

Not to mention the fact that change ripples through the parallel yet deeply enmeshed dimensions of cyberspace and meatspace. Code running on hardware is inexorably tied to concepts running in wetware. Of course, at this level of abstraction, the notion applies to any field of human endeavour. Yet, it is so much more true of software. Because software is essentially the thoughts of people being played on repeat.

"The System". Page 11, Report from the SNAFUcatchers Workshop on Coping With Complexity. a.k.a. the "Stella report".

See also: the Stella Report found via John Allspaw's How Your Systems Keep Running Day After Day .

Software debt is like a complex opaque financial derivative.

To me, unchecked creation of software debt is exactly analogous to how the 2008 financial crisis came to be. It was wrought of "simple" debt packaged and repackage in inscrutable derivative ways, stacked into towers of debt, where the aggregate collateral backing it looked sound, but which actually had very shaky foundations, that the abstraction obscured. The crazy thing is, the trouble at the bottom was apparently sitting around in plain sight, to terrify anybody who cared to literally walk over and look at it. The current state of our software supply chains look uncomfortably similar, for example.

But as it happens, growth forgives all sins. We fall in love with the thrill. We fail to stay a little paranoid. Our position becomes increasingly leveraged. The tail risks compound (demand swings, malicious actors, regulatory change, supply chain exposure, …), and so do the odds of any one of those risks exploding in our faces.

Our system, as in, the integrated networked whole of compute infrastructure, managed services, libraries, product management, design, operations, sales, marketing, org strategy start looking like piles of debt obligations. Each represents part of a promise made to the outside world, and here's the kicker; our rate of growth is collateral. Small deceleration of growth rates magnify into large percentage drops of "valuation" (however it is measured). Since bad news travels farther and faster than good news, the negative bias tends to be stronger. We seldom gain value as much, or as quickly, as we devalue.

So, if we are not ruthlessly pragmatic and strategic about software debt, you and I will keep accruing the bad risk kind of debt. One day, at the least convenient time, the world will come a-calling, demanding what we owe. And if we can't cough it up, it will take everything away. All moments are least convenient when that happens.

Much as I dislike all this doom-speak, I have to acknowledge it is material, and to side with Andy Grove. Only the paranoid survive.

The only real hedge we have is the creativity and intelligence of our people.

1,000 words in 1 picture: xkcd summarizes it best.

XKCD: Dependency

Stories of Debt and Destruction

  • A list of Post-mortems curated by Dan Luu, Nat Welch and others.
  • A list of "Events that have the dark debt signature", in the aforementioned " Stella report ".
    • Knight Capital, August 2012
    • AWS, October 2012
    • Medstar, April 2015
    • NYSE, July 2015
    • UAL, July 2015
    • Facebook, September 2015
    • GitHub, January 2016
    • Southwest Airlines, July 2016
    • Delta, August 2016
    • SSP Pure broking, August 2016

Modeling Others' Minds as Code

Hacker News
arxiv.org
2025-10-20 14:54:38
Comments...
Original Article

View PDF HTML (experimental)

Abstract: Accurate prediction of human behavior is essential for robust and safe human-AI collaboration. However, existing approaches for modeling people are often data-hungry and brittle because they either make unrealistic assumptions about rationality or are too computationally demanding to adapt rapidly. Our key insight is that many everyday social interactions may follow predictable patterns; efficient "scripts" that minimize cognitive load for actors and observers, e.g., "wait for the green light, then go." We propose modeling these routines as behavioral programs instantiated in computer code rather than policies conditioned on beliefs and desires. We introduce ROTE, a novel algorithm that leverages both large language models (LLMs) for synthesizing a hypothesis space of behavioral programs, and probabilistic inference for reasoning about uncertainty over that space. We test ROTE in a suite of gridworld tasks and a large-scale embodied household simulator. ROTE predicts human and AI behaviors from sparse observations, outperforming competitive baselines -- including behavior cloning and LLM-based methods -- by as much as 50% in terms of in-sample accuracy and out-of-sample generalization. By treating action understanding as a program synthesis problem, ROTE opens a path for AI systems to efficiently and effectively predict human behavior in the real-world.

Submission history

From: Kunal Jha [ view email ]
[v1] Mon, 29 Sep 2025 22:56:34 UTC (1,110 KB)

What are you doing this week?

Lobsters
lobste.rs
2025-10-20 14:27:01
What are you doing this week? Feel free to share! Keep in mind it’s OK to do nothing at all, too....
Original Article

What are you doing this week? Feel free to share!

Keep in mind it’s OK to do nothing at all, too.

Morning Spew: George Santos, Unchained and Unchanged

hellgate
hellgatenyc.com
2025-10-20 14:25:51
Plus, more news to kick off your week....
Original Article

Great! You’ve successfully signed up.

Welcome back! You've successfully signed in.

You've successfully subscribed to Hell Gate.

Your link has expired.

Success! Check your email for magic link to sign-in.

Success! Your billing info has been updated.

Your billing was not updated.

Servo 0.0.1 Release

Lobsters
servo.org
2025-10-20 14:22:48
Comments...
Original Article

A brief update on the goals and plans behind the new Servo releases on GitHub.

Posted 2025-10-20

Today, the Servo team has released new versions of the servoshell binaries for all our supported platforms, tagged v0.0.1 . These binaries are essentially the same nightly builds that were already available from the download page with additional manual testing, now tagging them explicitly as releases for future reference.

We plan to publish such a tagged release every month. For now, we are adopting a simple release process where we will use a recent nightly build and perform additional manual testing to identify issues and regressions before tagging and publishing the binaries.

There are currently no plans to publish these releases on crates.io or platform-specific app stores. The goal is just to publish tagged releases on GitHub .

Flink Watermarks…WTF?

Lobsters
flink-watermarks.wtf
2025-10-20 14:12:12
Comments...

Celebrating Lenny Bruce’s 100th Birthday: “The World Is Sick and I’m the Doctor”

Portside
portside.org
2025-10-20 14:09:34
Celebrating Lenny Bruce’s 100th Birthday: “The World Is Sick and I’m the Doctor” Kurt Stand Mon, 10/20/2025 - 09:09 ...
Original Article

“Lenny Bruce is not afraid”

—“ It’s the End of the World as We Know It (And I Feel Fine) ,” R.E.M.

Lenny Bruce, born Leonard Schneider on October 13, 1925, died on August 3, 1966. Officially, Bruce died from a drug overdose. Unofficially, he was murdered by the New York County District Attorney’s office.

The Trump Reich is not the first era in U.S. history in which local, state, or federal government has attempted to abolish free speech and destroy opposition; for example, Woodrow Wilson threw Eugene Debs in prison for speaking out against capitalism and World War I. What makes the current era different is that a U.S. president is not only acting like a dictator, he is doing everything possible to ensure the world views him as one, getting these headlines: “ Trump Pulls From Dictator Playbook and Hangs Giant Banner of His Face .” Today, one risks imprisonment or having a career derailed not simply for challenging obscenity laws, as did Bruce, or speaking out against a capitalist war, as did Debs, but for hurting a president’s feelings. So, it’s an especially good time to celebrate Lenny Bruce.

At the time of his death, Bruce was blacklisted by almost every venue in the United States, as owners feared that they too would be arrested for obscenity. One of the New York district attorneys who prosecuted Bruce’s last 1964 obscenity case, Assistant District Attorney Vincent Cuccia, later admitted , “We drove him into poverty and bankruptcy and then murdered him. I watched him gradually fall apart. . . . We all knew what we were doing. We used the law to kill him.”

“As a child,” Bruce recounted, “I loved confusion: a freezing blizzard that would stop all traffic and mail; toilets that would get stopped up and overflow and run down the halls; electrical failures—anything that would stop the flow and make it back up and find a new direction.” At age 16, Lenny ran away from home and boarded with the Dengler family, working on their Long Island farm in the 1940s. The Denglers had a roadside stand, and city and suburban folks loved the idea of fresh farm eggs, but the Denglers didn’t have enough chickens to meet the demand, so they would buy eggs wholesale, and a teenage Lenny repackaged them in Dengler cartons; and he would later recount, “With my philanthropic sense of humor, I would add a little mud and straw and chicken droppings to give them an authentic pastoral touch.”

Bruce’s rebellions against authority, on stage and off, remain legendary among comics. Fed up with the navy in 1945, Bruce told medical officers he was overwhelmed with homosexual urges, and this tactic worked to get him discharged. He then fell in love with Honey, a stripper at the time, and they married in 1951. To raise money so that Honey could leave her profession, Lenny created the “Brother Mathias Foundation,” in which he impersonated a priest and solicited donations. Bruce was arrested for that scam but was lucky and found not guilty.

On stage, Bruce was fearless. He worked as an MC at strip clubs, and following one performer, he himself came on stage completely naked and said, “Let’s give the little girl a big hand.” In Bruce’s time, it was still common for some Christians to accuse Jews of killing Jesus, and this would put most Jews on the defensive—but not Lenny. In his act, Lenny would “fess up” that not only did the Jews kill Jesus but that it was his Uncle Morty who did it. In one variation of this bit, he said that what in fact Jews really had covered up was that his Uncle Morty had killed Jesus with an electric chair, but that Jews thought that Christian women wouldn’t be as attractive wearing necklaces with Jesus in an electric chair dangling over their chests, so Jews made up the crucifixion story.

However, as Bruce became more famous for his risk-taking humor that fearlessly mocked authorities, his luck eventually ran out. He was arrested multiple times for obscenity during his stand-up act as well as for drug possession. Bruce believed that authorities went after him mostly because he made fun of organized religion, and his friend George Carlin agreed, “Lenny wasn’t being arrested for obscenity. He was being arrested for being funny about religion and in particular Catholicism. A lot of big city cops . . . tend to be Irish Catholic,” noted the Irish Catholic Carlin.

In the years before his death, Bruce became increasingly preoccupied by how to prevent his arrest for drug use. In his autobiography, Bruce wrote, “For self-protection, I now carry with me at all times a small bound booklet consisting of photostats of statements made by physicians, and prescriptions and bottle labels.”

+++

In 1964, Bruce was arrested in New York on obscenity charges, and despite petitions and protests from many renowned people, he was convicted and sentenced in December 1964 to four months in a workhouse.

In July 1966, free on bail during the lengthy appeals process, Bruce got a visit from Carlin and his wife. Carlin recalled, “He was completely immersed in his legal battles. . . . He didn’t appear in clubs anymore—the Irish cops and judges had indeed shut him the fuck up. He was just about bankrupt, having spent all his income and intellect trying to vindicate himself. We visited for a while and he was as affectionate and lovable as ever. That was the last time we saw him alive.” Twelve days after their visit, Lenny Bruce died of a drug overdose.

Lenny Bruce may not have been the funniest comedian in U.S. history, but his anti-authoritarian defiance is unsurpassed among comedians, many of whom to this day honor him for his trailblazing free speech advocacy. In Resisting Illegitimate Authority (2018), to illustrate the diversity among anti-authoritarians, I profile twenty U.S. anti-authoritarians, including Lenny, with an emphasis on what can be gleaned from their lives, including lessons about survival, triumph, and tragedy.

Sometimes it is luck that makes the difference between anti-authoritarians having a triumphant or tragic life, and Lenny did not have the luck of coming to prominence in a more anti-authoritarian era, as was the case with his friend George Carlin, whom I also profile. In a more anti-authoritarian era, Carlin’s 1972 Milwaukee disorderly conduct-profanity arrest for his “Seven Words You Can’t Say on Television” bit was dismissed by a laughing judge , and it actually helped Carlin’s career, even getting an invitation from Johnny Carson to discuss it and promote his album on national television.

+++

Another luckier U.S. anti-authoritarian is Noam Chomsky who, in the early 1960s, challenged and resisted the U.S. government’s war in Vietnam at a time when very few Americans were doing so. He refused to pay a portion of his taxes, supported draft resisters, got arrested several times, and was on Richard Nixon’s official enemies list. Chomsky anticipated going to prison, and he later recounted how only luck and a changing era saved him from prison, “That is just what would have happened except for two unexpected events: (1) the utter (and rather typical) incompetence of the intelligence services. . . . (2) the Tet Offensive, which convinced American business that the game wasn’t worth the candle and led to the dropping of prosecutions.”

Lenny Bruce was often referred to as a “sick comedian,” but he famously said, “I’m not a comedian. And I’m not sick. The world is sick and I’m the doctor. I’m a surgeon with a scalpel for false values. I don’t have an act. I just talk. I’m just Lenny Bruce.”

Today, it is an understatement to say that mainstream U.S. society is sick with what Lenny called “false values.” Tip-of-the-iceberg evidence of how a sick U.S. society has gotten even sicker? In 2024, an in-your-face scumbag bully was elected president—this time with the popular vote, a majority of American voters who were either blind to what he is all about, or saw what he is all about and were unbothered by him being a scumbag bully because he is their scumbag bully.

Servo v0.0.1 Released

Hacker News
github.com
2025-10-20 13:55:30
Comments...
Original Article

The Servo Parallel Browser Engine Project

Servo is a prototype web browser engine written in the Rust language. It is currently developed on 64-bit macOS, 64-bit Linux, 64-bit Windows, 64-bit OpenHarmony, and Android.

Servo welcomes contribution from everyone. Check out:

Coordination of Servo development happens:

Getting started

For more detailed build instructions, see the Servo book under Setting up your environment , Building Servo , Building for Android and Building for OpenHarmony .

macOS

  • Download and install Xcode and brew .
  • Install uv : curl -LsSf https://astral.sh/uv/install.sh | sh
  • Install rustup : curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
  • Restart your shell to make sure cargo is available
  • Install the other dependencies: ./mach bootstrap
  • Build servoshell: ./mach build

Linux

  • Install curl :
    • Arch: sudo pacman -S --needed curl
    • Debian, Ubuntu: sudo apt install curl
    • Fedora: sudo dnf install curl
    • Gentoo: sudo emerge net-misc/curl
  • Install uv : curl -LsSf https://astral.sh/uv/install.sh | sh
  • Install rustup : curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
  • Restart your shell to make sure cargo is available
  • Install the other dependencies: ./mach bootstrap
  • Build servoshell: ./mach build

Windows

  • Download uv , choco , and rustup
    • Be sure to select Quick install via the Visual Studio Community installer
  • In the Visual Studio Installer, ensure the following components are installed:
    • Windows 10/11 SDK (anything >= 10.0.19041.0) ( Microsoft.VisualStudio.Component.Windows{10, 11}SDK.{>=19041} )
    • MSVC v143 - VS 2022 C++ x64/x86 build tools (Latest) ( Microsoft.VisualStudio.Component.VC.Tools.x86.x64 )
    • C++ ATL for latest v143 build tools (x86 & x64) ( Microsoft.VisualStudio.Component.VC.ATL )
  • Restart your shell to make sure cargo is available
  • Install the other dependencies: .\mach bootstrap
  • Build servoshell: .\mach build

Android

  • Ensure that the following environment variables are set:
    • ANDROID_SDK_ROOT
    • ANDROID_NDK_ROOT : $ANDROID_SDK_ROOT/ndk/28.2.13676358/ ANDROID_SDK_ROOT can be any directory (such as ~/android-sdk ). All of the Android build dependencies will be installed there.
  • Install the latest version of the Android command-line tools to $ANDROID_SDK_ROOT/cmdline-tools/latest .
  • Run the following command to install the necessary components:
    sudo $ANDROID_SDK_ROOT/cmdline-tools/latest/bin/sdkmanager --install \
     "build-tools;34.0.0" \
     "emulator" \
     "ndk;28.2.13676358" \
     "platform-tools" \
     "platforms;android-33" \
     "system-images;android-33;google_apis;x86_64"
  • Follow the instructions above for the platform you are building on

OpenHarmony

  • Follow the instructions above for the platform you are building on to prepare the environment.
  • Depending on the target distribution (e.g. HarmonyOS NEXT vs pure OpenHarmony ) the build configuration will differ slightly.
  • Ensure that the following environment variables are set
    • DEVECO_SDK_HOME (Required when targeting HarmonyOS NEXT )
    • OHOS_BASE_SDK_HOME (Required when targeting OpenHarmony )
    • OHOS_SDK_NATIVE (e.g. ${DEVECO_SDK_HOME}/default/openharmony/native or ${OHOS_BASE_SDK_HOME}/${API_VERSION}/native )
    • SERVO_OHOS_SIGNING_CONFIG : Path to json file containing a valid signing configuration for the demo app.
  • Review the detailed instructions at Building for OpenHarmony .
  • The target distribution can be modified by passing --flavor=<default|harmonyos> to mach <build|package|install> .

"This Is Ethnic Cleansing": Civil Rights Icon Dolores Huerta Decries Trump's Targeting of Immigrants

Democracy Now!
www.democracynow.org
2025-10-20 13:52:00
Immigrant rights and labor icon Dolores Huerta, now 95 years old, is continuing her lifelong activism as immigration raids intensify across the country. She addressed the No Kings rally in Watsonville, California, this weekend to speak out against the Trump administration’s mass deportation ag...
Original Article

Immigrant rights and labor icon Dolores Huerta, now 95 years old, is continuing her lifelong activism as immigration raids intensify across the country. She addressed the No Kings rally in Watsonville, California, this weekend to speak out against the Trump administration’s mass deportation agenda. “This is ethnic cleansing,” Huerta tells Democracy Now! “We have never seen such horrific, horrific attacks on our people.”

Huerta is president and founder of the Dolores Huerta Foundation; she co-founded the United Farm Workers of America with Cesar Chavez in the 1960s. Amid intensifying immigration raids, she describes how she has joined with People for the American Way and the Dolores Huerta Foundation to release a short dramatized film that shows neighbors joining together in nonviolent civil disobedience to protect an immigrant elder from being disappeared by ICE .


Please check back later for full transcript.

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.

No to Authoritarianism: 7 Million Rally Across U.S. in Historic No Kings Day Protests

Democracy Now!
www.democracynow.org
2025-10-20 13:46:24
An estimated 7 million people took part in No Kings rallies Saturday to protest President Trump’s authoritarian policies. Organizers say protests were held at about 2,600 sites across all 50 states in what was one of the largest days of protest in U.S. history, surpassing the first No Kings da...
Original Article

An estimated 7 million people took part in No Kings rallies Saturday to protest President Trump’s authoritarian policies. Organizers say protests were held at about 2,600 sites across all 50 states in what was one of the largest days of protest in U.S. history, surpassing the first No Kings day of action in June. One of the biggest mobilizations was in Washington, D.C., where Trump has fired thousands of federal workers and sent in National Guard troops to patrol the streets. Democracy Now! covered the action and spoke to people about what brought them out to protest. “We need to make it clear that we can’t have an authoritarian government, a government that’s turned into nothing but a weapon,” says Paul Osadebe, who says he was fired from his job as a HUD civil rights lawyer for challenging Trump’s refusal to enforce the Fair Housing Act.


Transcript

This is a rush transcript. Copy may not be in its final form.

AMY GOODMAN : This is Democracy Now! , democracynow.org. I’m Amy Goodman.

An estimated 7 million people took part in “No Kings” rallies Saturday across the United States to protest President Trump’s embrace of authoritarianism. Organizers say protests were held in 2,600 villages, hamlets, towns, cities across all 50 states, in what was one of the largest days of protest in U.S. history, surpassing the 5 million in the first No Kings day of action in June.

In Chicago, the protest stretched for two miles as about a quarter of a million people took to the street. Chicago Mayor Brandon Johnson called for a general strike in his address.

An estimated 200,000 people rallied in Washington, D.C. The metro area of Washington, D.C., has the highest concentration of federal workers furloughed and fired. An estimated 200,000 people were in the nation’s capital.

ANTHONY LEE : My name is Anthony Lee. I’m from Woodlawn, Maryland. I’m a member of National Treasury Employees Union Chapter 22 that represents employees at the Food and Drug Administration. I’m here today because we shouldn’t have kings in America and to stand up for our democracy, to get our services back, to make sure that what’s been happening in this country, that someone stands up to it, stops it and makes a change.

I’m currently furloughed. I’m a federal employee at the FDA . Myself and hundreds of our employees there are furloughed. We want to work. We want to get back to work, to getting back to doing the job that we were hired to do, to protect and promote the health and safety of the American public. And so, employees are now going without a paycheck, struggling to figure out how they’re going to make ends meet, how they’re going to pay for their bills, and uncertain about their future in the federal government.

KATHLEEN ROMIG : My name is Kathleen Romig, and I formerly worked at the Social Security Administration. … Well, we’re asking Congress to stand up. They have the power of the purse. They have oversight control over all of these important federal programs, including Social Security. And we’re asking them to be the coequal branch of government that they really should be and to push back against something that all their constituents and voters rely upon.

KATHLEEN : I’m Kathleen. I’m from Martha’s Vineyard, Massachusetts. And I’m here because we got to get ICE out of the streets.

SOPHIA : I’m Sophia. I’m from Massachusetts. I’m here from UNC -Chapel Hill visiting my friend. And I’m here because I am a big supporter of women’s rights, women’s reproductive freedom, abortion, etc.

KATHLEEN : There’s a really big Latin community back home. And it’s really affected our city, because, honestly, the immigrants run our city. Like, the only reason everything works, the restaurants, like everything, is because of the immigrants, and we need them in America.

PAUL OSADEBE : I’m Paul Osadebe. I’m a resident of D.C. And I’m here because I’m a federal worker who’s furloughed, who’s been fired under this administration, and whose agency had been shut down way before the current shutdown. So, I’m here to make sure that we make our voices heard, because federal workers want to work. We want to serve the people. That’s what I came here to do. We were already not being allowed to do that. So, we need to change things up. We need to make it clear that we can’t have an authoritarian government, a government that’s turned into nothing but a weapon, because that’s what it seems like is happening right now.

PROTESTER : I’m from the U.S. I’m a U.S. citizen, originally from India. And I’m here today because I think that governing by fear is a really bad idea. I think that we’re not getting our voices out there. And if you think about how social change actually happens, it happens through collective action. And I really believe in this. And though I was scared to come here because of the threats, the quite overt threats coming from the other side, I’m very happy to be here, and I really felt it was important to be here.

I mean, if you look at the actual polls, we are not divided the way they’re trying to present at all. We care about the same things. We care about affordable healthcare. We care about affordable education. We care about a living wage. We care about fair housing and affordable housing. And those are things that unite us, not divide us. So I think it’s super important for us to be out here and to demand a government that represents us and not the billionaires.

CAROLINE : I’m Caroline. I’m from Reston, Virginia. And when I hear the Republicans start calling protesters “terrorists,” it got my blood boiling. I cannot believe that they are trying to squash our First Amendment, along with everything else that they’re doing to America.

And my other sign is “I want my democracy back.” We’re all looking for a little bit of hope, because we’re all very distressed at what’s happening to our democracy. They’re stripping away our rights, left and right, and the rule of law is just disappearing, like the pardon he gave to George Santos last night. As long as you’re Republican, he’ll let you go. If you’re not, then he puts you in jail.

So, I’m very terrified about what’s happening to the U.S. right now. And being here, it’s with like-minded people, and it’s hopefully together we can make our thoughts known and to put up some protest to what’s happening. We’re losing our democracy.

AMY GOODMAN : Voices from the streets of the No Kings protest in Washington, D.C., that has the highest concentration, the D.C. metro area, of furloughed and fired federal workers. This was one of some 2,600 rallies nationwide. Special thanks to Jaisal Noor.

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.


Next story from this daily show

“This Is Ethnic Cleansing”: Civil Rights Icon Dolores Huerta Decries Trump’s Targeting of Immigrants

The Forgotten Captives: Israel Still Imprisoning 9,000 Palestinians Even After Hostage Deal

Democracy Now!
www.democracynow.org
2025-10-20 13:36:43
“Even after having released almost 2,000 people last week, the Israeli military is still holding about 9,000 Palestinians, in what it calls security prisoners or detainees,” says Sari Bashi, an Israeli American human rights lawyer and former program director at Human Rights Watch. “...
Original Article

This is a rush transcript. Copy may not be in its final form.

AMY GOODMAN : This is Democracy Now! , democracynow.org. I’m Amy Goodman.

An estimated 7 million people took part in “No Kings” rallies on Saturday. We’re going to go to that in our next segment, but right now we remain talking about Israel and Gaza. As we have reported, Israel carried out a wave of deadly attacks in Gaza over the weekend, temporarily halting aid deliveries, but Israel is now saying it’s going to resume enforcement of the U.S.-brokered ceasefire. This comes as both the U.S. envoy, Steve Witkoff, and Trump’s son-in-law, Jared Kushner, are in Israel awaiting JD Vance to arrive.

For more, we go to Jerusalem, where we’re joined by Sari Bashi, an Israeli American human rights lawyer, former program director at Human Rights Watch. Her forthcoming memoir, Upside-Down Love , tells the unlikely story of how she fell in love with her Palestinian spouse.

We welcome you back to Democracy Now! , Sari. You have been following the Palestinian prisoners who have been released. We have this overnight news of Israel killing something like 40 Palestinians, Sunday to today. They said it was in response to two Israeli soldiers being killed, but it’s unclear whether they were killed as a result of their tank rolling over a unexploded ordnance. What do you know? And then tell us about the prisoners, released not only in Gaza, but in Ramallah.

SARI BASHI : Thank you.

So, on the ceasefire deal, my main concern is that things that are absolute obligations to civilians, like, for example, humanitarian aid, like, for example, avoiding unlawful attacks on families in cars, are being used as bargaining chips in political negotiations over the ceasefire. And if there is one thing that the last two years have taught us, it is that obligations to civilians cannot be used as leverage.

So, in response to allegations of violations of the ceasefire, the Israeli military stopped humanitarian aid to Gaza yesterday, despite having promised to allow in at least 600 trucks every day to a starving population. It is up to the United States to ensure that whatever happens with this ceasefire deal, that nobody quite knows what it means, obligations to civilians continue to be fulfilled, and that no matter what Hamas does, the Israeli government does not starve children in Gaza in response. Reportedly, the Israeli government agreed today to resume humanitarian aid in response to U.S. pressure. So, when the U.S. government wants to, they can, and they should, insist on obligations to civilians.

AMY GOODMAN : And talk about the prisoners. We have heard and seen the incredibly moving reunions of Israeli hostages returned to their families. We know their stories, their names. Deeply emotional. But when we see the Palestinian prisoners released in bus after bus — we’re talking about nearly 2,000 — we do not know these stories. Talk about the condition of the men that have been released, Sari.

SARI BASHI : So, it’s not only men. It’s also women and children. Even after having released almost 2,000 people last week, the Israeli military is still holding about 9,000 Palestinians, in what it calls security prisoners or detainees. Only about a thousand of them have actually been committed of any crime — convicted of any crime. The vast majority of people being held are being held without trial, either in what’s called administrative detention, for the West Bank, or the Unlawful Combatants Law, for folks from Gaza. That means that there’s no allegation that they’ve committed a crime. They’re being held on a charge of dangerousness, that is backed by secret evidence that neither they nor their lawyers can either see or challenge. It is an arbitrary system in which every day, including yesterday, the Israeli military is rounding up more and more Palestinians in the West — from the West Bank now and just refilling those detention and prison cells, where people are subject to serious abuse.

For the last two years, at the request of Israel’s police minister, the conditions, that were already bad for Palestinian prisoners, have been worsened. Food rations have been reduced. Israel has unlawfully suspended both family visits as well as visits by the International Committee of the Red Cross, which is supposed to ensure humane treatment. And there have been persistent, credible and systematic reports of rape, starvation and torture. People are coming out of those prisons badly malnourished, with scabies. Dozens have died in detention. There has been no accountability, no investigations. And that needs to change for the 9,000 people who are still being held.

AMY GOODMAN : And I just want to clarify a number. When you said 1,000 have been convicted of crimes, of the 1,700 released to Gaza, almost none have been convicted. Is that right? But you’re saying of the people who are being held in prison. Is this the largest prison population of Palestinians in Israeli jails for decades?

SARI BASHI : Yes. And what happened last week was 250 people who had been convicted of crimes, as well as 1,700 who were being held without charge or trial, were released. Still, even after that release, Israel is still holding about 9,000 prisoners. About 1,200 have actually been convicted of crimes. Another few thousand are in pretrial detention, and the majority are being held without trial, either under administrative detention or the Unlawful Combatants Law. This process completely lacks transparency or due process.

And what’s most worrying is that the Israeli government is not allowing neutral humanitarian actors, like the International Committee of the Red Cross, to visit to ensure that conditions are humane. And based on the fact that some people who were released were released directly to the hospital, they are not humane. And that needs to end.

AMY GOODMAN : Sari, you wrote a deeply moving piece in The New York Times in August, and it’s about your relationship with your husband. The article, the headline, one marriage — “Our Marriage Includes an Emergency Backpack.” Your husband is from Gaza, but you live in Ramallah. If you could summarize how your husband is dealing with everything that’s happening right now, and his relationship to Gaza as he stays in Ramallah?

SARI BASHI : Look, it’s been hell. And this is, you know, a relationship that I also explore in my forthcoming memoir, Upside-Down Love , which will be out in January. His whole family is in Gaza. Their homes have been destroyed. They’ve lost weight. My sister-in-law, for her, the timing of the ceasefire is unbearable. Her son was killed two-and-a-half weeks before it went into effect. And so, for her, it was almost not too late, and then it was.

There is no way of describing the anguish that he and so many others have felt living next to a genocide, knowing that the people he cares the most about at any moment could be killed, knowing that we sit down to a family dinner and we eat, but we know that his siblings don’t have enough food. I pray that that will end. I pray that the United States will finally put an end to that and require the Israeli government to fulfill its obligations to civilians in Gaza, in particular to allow humanitarian aid in.

AMY GOODMAN : Sari Bashi, I want to thank you so much for being with us, Israeli American human rights lawyer, former program director at Human Rights Watch, speaking to us from Jerusalem.

When we come back, some 7 million march at “No Kings” rallies across the United States, believed to be one of the largest protests in U.S. history. We’ll hear voices from the streets of D.C., and we’ll speak in California with labor icon Dolores Huerta. Back in 20 seconds.

[break]

AMY GOODMAN : “If I Was President” by Las Cafeteras, performing in our Democracy Now! studio.

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.

Alibaba Cloud claims to reduce Nvidia GPU use by 82%

Hacker News
www.scmp.com
2025-10-20 13:31:22
Comments...
Original Article

Alibaba Group Holding has introduced a computing pooling solution that it said led to an 82 per cent cut in the number of Nvidia graphics processing units (GPUs) needed to serve its artificial intelligence models.

The system, called Aegaeon, was beta tested in Alibaba Cloud’s model marketplace for more than three months, where it reduced the number of Nvidia H20 GPUs required to serve dozens of models of up to 72 billion parameters from 1,192 to 213, according to a research paper presented this week at the 31st Symposium on Operating Systems Principles (SOSP) in Seoul, South Korea.

“Aegaeon is the first work to reveal the excessive costs associated with serving concurrent LLM workloads on the market,” the researchers from Peking University and Alibaba Cloud wrote.

Alibaba Cloud is the AI and cloud services unit of Hangzhou-based Alibaba, which owns the Post. Its chief technology officer, Zhou Jingren, is one of the paper’s authors.

Cloud services providers, such as Alibaba Cloud and ByteDance’s Volcano Engine, serve thousands of AI models to users concurrently, meaning that many application programming interface calls are handled at the same time.

However, a small handful of models such as Alibaba’s Qwen and DeepSeek are most popular for inference, with most other models only sporadically called upon. This leads to resource inefficiency, with 17.7 per cent of GPUs allocated to serve only 1.35 per cent of requests in Alibaba Cloud’s marketplace, the researchers found.

Researchers globally have sought to improve efficiency by pooling GPU power, allowing one GPU to serve multiple models, for instance.

Alibaba Cloud claims to reduce Nvidia GPU use by 82%

Hacker News
www.tomshardware.com
2025-10-20 13:31:22
Comments...
Original Article
Alibaba Cloud
(Image credit: Alibaba)

Alibaba Cloud claims its new Aegaeon pooling system reduces the number of Nvidia GPUs required to serve large language models by 82% during a multi-month beta test inside its Model Studio marketplace. The result, published in a peer-reviewed paper presented at the 2025 ACM Symposium on Operating Systems (SOSP) in Seoul, suggests that cloud providers may be able to extract significantly more inference capacity from existing silicon, especially in constrained markets like China, where the supply of Nvidia's latest H20s remains limited.

Unlike training-time breakthroughs that chase model quality or speed, Aegaeon is an inference-time scheduler designed to maximize GPU utilization across many models with bursty or unpredictable demand. Instead of pinning one accelerator to one model, Aegaeon virtualizes GPU access at the token level, allowing it to schedule tiny slices of work across a shared pool. This means one H20 could serve several different models simultaneously, with system-wide “goodput” — a measure of effective output — rising by as much as nine times compared to older serverless systems.

The system was tested in production over several months, according to the paper, which lists authors from both Peking University and Alibaba’s infrastructure division, including CTO Jingren Zhou. During that window, the number of GPUs needed to support dozens of different LLMs — ranging in size up to 72 billion parameters — fell from 1,192 to just 213.

While the paper does not break down which models contributed most to the savings, reporting by the South China Morning Post says the tests were conducted using Nvidia’s H20, one of the few accelerators still legally available to Chinese buyers under current U.S. export controls.

Whether those savings translate outside Alibaba’s stack remains to be seen. Alibaba Cloud’s paper does not specify the exact network fabric used in the beta test, but we know the company offers its own eRDMA elastic RDMA network and has a record of building highly‑integrated GPU serving stacks, suggesting the results may depend on an optimized, vertically integrated environment.

Google Preferred Source

Follow Tom's Hardware on Google News , or add us as a preferred source , to get our latest news, analysis, & reviews in your feeds.

Luke James is a freelance writer and journalist.  Although his background is in legal, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.

Show HN: I got tired of managing dev environments, so I built ServBay

Hacker News
www.servbay.com
2025-10-20 13:29:00
Comments...
Original Article

ServBay includes most commonly used packages for developers

Trusted by over 100,000 developers worldwide

7000+

Daily Active Developers

What ServBay can do for you

Rich Software Packages

Rich Software Packages

ServBay includes a variety of services required for developers in their daily work. Whether it's development languages like PHP, Node.js, Python, Golang, Java, .NET, Ruby, Rust, various databases like MySQL, MariaDB, PostgreSQL, MongoDB, Redis, or services such as DNS, email, reverse proxy/tunnel, and even the Ollama large language model, you can install them with one click and use them out of the box. With a wide range of package versions available, you can even find outdated versions like PHP 5.6 and Node.js 12. With multi-version coexistence, packages can run simultaneously without interference.

Project-Level Environment Settings

Project-Level Environment Settings

It's common for different projects to use different versions of development languages and databases. ServBay offers project-level runtime environment configurations, allowing you to specify different versions of development languages for each project, ensuring they do not interfere with each other, making your project development more flexible and controllable.

Domain Names and DNS

Domain Names and DNS

During the lifecycle of web development, domain names are consumables. ServBay comes with a built-in DNS service that allows developers to use non-existent domain names and TLDs without registering them, and issues free SSL certificates for these domain names. This capability not only saves developers significant costs but also greatly enhances security.

Mail Server

Mail Server

Email services are essential during development. ServBay has a built-in mail server that lets you easily use services like POP3/SMTP. With ServBay’s PKI, it supports STARTTLS and SSL/TLS with zero configuration. Want to send emails externally? The built-in SMTP Relay and SpamAssassin can help you with that. A powerful WebMail interface is also included.

SSL Certificates

SSL Certificates

ServBay has a built-in PKI system that allows developers to create CA and issue SSL certificates for free. In addition to domain certificates, it supports S/MIME email certificates, code signing certificates, and document signing certificates. With SSL certificates, developers can encrypt Web, database, and email services, as well as sign packages. Additionally, ServBay supports requesting SSL certificates from Let's Encrypt, ZeroSSL, and Google Trust Services via ACME and supports automatic renewal.

Support for Multiple Development Languages

Support for Multiple Development Languages

ServBay supports various mainstream web development languages, including but not limited to PHP, Node.js, Python, Golang, Java, .NET, Ruby and Rust. Each development language offers multiple versions, making it easy for developers to choose the right version based on project needs. ServBay automatically configures the corresponding runtime environment, allowing developers to start development work quickly without manual installation and setup. Switching between different language versions with one click facilitates testing compatibility across different environments.

Rich Database Support

Rich Database Support

ServBay integrates a variety of commonly used databases, such as MySQL, MariaDB, PostgreSQL, MongoDB, Redis, MariaDB, Memcached, SQLite, and more. Developers can choose suitable databases based on project needs and deploy them with one click. ServBay provides both a graphical interface and command-line tools, making it convenient for developers to manage databases, such as creating databases, importing and exporting data, executing SQL statements, and more. Additionally, ServBay supports multi-version coexistence of databases, allowing developers to test compatibility between different database versions.

Reverse Proxy (Tunneling)

Reverse Proxy (Tunneling)

ServBay supports reverse proxy features such as Ngrok, Pinggy.io, FRP, and Oray Peanut Shell. Developers can easily expose internal services of ServBay to the external network. This is very useful for debugging webhooks, connecting to intranet services, or creating secure tunnels. The reverse proxy feature in ServBay supports custom domain names, SSL certificates, and HTTP/3 protocols to ensure secure and efficient connections.

1, 2, 3, Let's start coding!

Rich Software Packages

1. Open ServBay

Project-Level Environment Settings

2. Install the package

Domain Names and DNS

3. No configuration needed, ready to use

We have served countless excellent developers

"We have been installing everything through BREW, managing ports is very difficult. ServBay is simply a miracle! I love ServBay!"

ServBay User: Hamburger-Fries, Coder of stuff

Hamburger-Fries @ Coder of stuff

"I really enjoy using ServBay, and I believe it will gain wider application in the WordPress field."

ServBay User: Aaron Kwok, LuckyCat CEO

Aaron Kwok @ LuckyCat CEO

"So far, I haven’t found any way to run .Net 4 on Mac without using a virtual machine other than ServBay. It saved my legacy project."

ServBay User: Tim, Frontend Developer

Tim @ Frontend Developer

"I am a new member of ServBay, and I have to say, I’m very happy that this service finally exists! Cheers to the developers! You guys are doing great! #Respect! Please focus on maintaining the high quality you have already set and don’t rush to release new features. I really love the current service and hope you continue to grow and develop excellent software!"

ServBay User: 13garth, .Net Developer

13garth @ .Net Developer

"This is the easiest and most user-friendly web service software for new developers."

ServBay User: U Park, Student

U Park @ Student

"ServBay is very easy to use and includes all the components needed to run local servers/websites, and it is very fast."

ServBay User: Juan Massad, IT Project Manager

Juan Massad @ IT Project Manager

"It’s not just for web development! Managing my Rust development environment and backend services like databases through ServBay’s unified interface is amazing. It’s truly a multifunctional tool."

ServBay User: Chloe J., Rust Enthusiast & Developer

Chloe J. @ Rust Enthusiast & Developer

"Just install it, click a button, and you can launch different web servers and databases."

ServBay User: Pham, Developer/Content Creator

Pham @ Developer/Content Creator

"It’s very easy to install and use, with a gentle learning curve. The software integrates easily with other software. I also love that it offers different database servers with just one click."

ServBay User: Tobe404, Web Developer

Tobe404 @ Web Developer

"ServBay truly allows me to focus on coding instead of endlessly tweaking and troubleshooting environment issues. Its integration and management of various services are top-notch. Highly recommend it to any developer using Mac!"

ServBay User: Trish Martinelli, Sunrise Founder & CEO

Trish Martinelli @ Sunrise Founder & CEO

"Using ServBay is a fantastic experience. It helps build websites by creating web servers and MySQL databases."

ServBay User: Arthur Harnik, Webmaster

Arthur Harnik @ Webmaster

"Most of my experience using it involves installing WordPress, and I find that using ServBay for installation is easier than some options provided by web hosts."

ServBay User: Shayla Christine, Webmaster

Shayla Christine @ Webmaster

"ServBay provides an excellent graphical user interface for managing Python-based servers on Mac. I can quickly switch Python versions for software testing, and the logs provide the expected level of feedback for debugging applications."

ServBay User: Anthony Holmes, Python Developer

Anthony Holmes @ Python Developer

"Running an older version of ASP.NET Framework applications on Mac has always been a big issue. The built-in Mono integration in ServBay perfectly solves this problem, allowing us to continue developing and maintaining critical legacy projects on modern hardware."

ServBay User: taurlynne9, Software Engineer

taurlynne9 @ Software Engineer

"No need to manage separate tools for Python, Go, and Node.js anymore! ServBay integrates them seamlessly with databases and even Supervisor. My entire development workflow has become clearer and more efficient."

ServBay User: Ben Carter, Backend Engineer

Ben Carter @ Backend Engineer

"Our team standardized on ServBay for local development. New members onboard faster now, and the "it works on my machine" issues have greatly reduced. Environment consistency is finally achievable."

ServBay User: Maria, Technical Director

Maria @ Technical Director

"Laragon and XAMPP drove me to despair. ServBay is the best tool of its kind that I have ever used."

ServBay User: V_LN, Full Stack Engineer

V_LN @ Full Stack Engineer

"Finally, there is a way to handle multiple Java versions on macOS without pain! ServBay is a lifesaver for maintaining various legacy and modern projects."

ServBay User: Break4, Java Developer

Break4 @ Java Developer

"As a freelancer, I often switch between client projects using different tech stacks. ServBay allows me to maintain entirely independent configuration environments for each project. Context switching has become smooth and instant, significantly enhancing my productivity."

ServBay User: Fatima Al-Sayed, Freelancer

Fatima Al-Sayed @ Freelancer

"I frequently need to practice penetration testing. ServBay is very convenient because it allows me to quickly set up the different environments I need."

ServBay User: Elvis K., Cybersecurity Engineer

Elvis K. @ Cybersecurity Engineer

"I previously used Docker Compose to manage my local Rust development environment, but it often felt heavy and had noticeable configuration overhead. ServBay provides similar isolation advantages but feels more lightweight and integrates better with my workflow. It starts faster and simplifies daily local development management."

ServBay User: Javier, Enterprise Software Engineer

Javier @ Enterprise Software Engineer

Low Price, High Efficiency

We believe your time is more valuable than money

Comprehensive professional features and a rich development toolkit to meet all the daily needs of individual developers.

Download Free Trial

  • Private CA and free SSL certificates

  • Includes Python (2.7, 3.5-3.14)

  • Includes Golang (1.11-1.24)

  • Includes Java (7/1.7.0-24)

  • Includes MariaDB (10.4-11.7)

  • Includes PostgreSQL (10-17)

  • Includes MongoDB (5.0-8.0)

  • Includes phpMyAdmin/adminer

  • Multiple PHP instances running simultaneously

  • PHP modules and extensions

Most popular

Designed for professional developers or small startups, featuring unrestricted capabilities and various collaboration features.

Buy Now Free Trial

  • All features in ServBay Free

  • Easy-to-configure services

  • Customizable service configuration

  • Customizable HTTP/HTTPS ports

  • Unlimited private CA certificates

  • Free S/MIME, code signing, and document signing certificates

  • Free SSL certificates (via ACME, supports Let's Encrypt, ZeroSSL, Google Trust Services, etc.)

  • Supports multi-domain SSL certificates (SAN certificates)

  • Supports wildcard domain SSL certificates (e.g. *.example.com)

  • Automatic SSL certificate renewal

  • Multi-domain/alias support

  • Multiple database instances running simultaneously

  • Back up to external storage

  • Technical support (Email, Telegram, Discord, WhatsApp, WeChat)

Built for highly collaborative distributed development teams, empowering team leaders to coordinate daily development environments.

Buy Now Free Trial

  • All features in ServBay Pro

  • 2 device licenses per user

  • Higher limit for private/public SSL certificates

Comprehensive professional features and a rich development toolkit to meet all the daily needs of individual developers.

Download Free Trial

  • Private CA and free SSL certificates

  • Includes Python (2.7, 3.5-3.14)

  • Includes Golang (1.11-1.24)

  • Includes Java (7/1.7.0-24)

  • Includes MariaDB (10.4-11.7)

  • Includes PostgreSQL (10-17)

  • Includes MongoDB (5.0-8.0)

  • Includes phpMyAdmin/adminer

  • Multiple PHP instances running simultaneously

  • PHP modules and extensions

Most popular

Designed for professional developers or small startups, featuring unrestricted capabilities and various collaboration features.

Buy Now Free Trial

  • All features in ServBay Free

  • Easy-to-configure services

  • Customizable service configuration

  • Customizable HTTP/HTTPS ports

  • Unlimited private CA certificates

  • Free S/MIME, code signing, and document signing certificates

  • Free SSL certificates (via ACME, supports Let's Encrypt, ZeroSSL, Google Trust Services, etc.)

  • Supports multi-domain SSL certificates (SAN certificates)

  • Supports wildcard domain SSL certificates (e.g. *.example.com)

  • Automatic SSL certificate renewal

  • Multi-domain/alias support

  • Multiple database instances running simultaneously

  • Back up to external storage

  • Technical support (Email, Telegram, Discord, WhatsApp, WeChat)

Built for highly collaborative distributed development teams, empowering team leaders to coordinate daily development environments.

Buy Now Free Trial

  • All features in ServBay Pro

  • 2 device licenses per user

  • Higher limit for private/public SSL certificates

Frequently Asked Questions

If you have more questions, please visit the Help Center .

What is ServBay, and what benefits can it bring me?

ServBay provides web developers with an integrated, graphical, one-click installation local development environment. It includes commonly used web development services and tools, such as web servers, databases, programming languages, mail servers, queue services, SSL/PKI systems, and large language models (LLM).

What common development packages does ServBay include?

ServBay includes services such as Caddy, NGINX, Apache, PHP, Python, Node.js, MySQL, MariaDB, PostgreSQL, MongoDB, Redis, memcached, and tools like phpMyAdmin, Adminer, and Composer. It also includes Ollama to facilitate developers in running large language models like deepseek, qwen, and llama.

Which programming languages are integrated into ServBay?

ServBay integrates various versions of popular programming languages such as PHP, Python, and Node.js, and is continually adding support for other languages like Golang, Rust, Ruby, and Java. Developers can easily use and switch between the latest and deprecated versions of these languages in ServBay.

Is ServBay suitable for team use?

Absolutely! The original intention behind ServBay is to provide a unified and maintainable development environment for teams, reducing code discrepancies and bugs caused by inconsistent environments among team members. Through the 'unified configuration' feature, team managers can lock in different development environments for each project.

Why not use Docker or Homebrew?

Compared to Docker, ServBay is specifically optimized, providing better resource consumption and making it easier to view logs and debug source code. In comparison with Homebrew, ServBay does not require compiling source code for installation, nor does it invade the operating system or pollute its environment, which could affect other applications' normal operation. Additionally, users do not need to manually edit configuration files, making it more user-friendly.

What advantages does ServBay have over MAMP, XAMPP, herd, etc.?

Unlike MAMP, XAMPP, herd, and similar tools, ServBay supports multiple hosts running simultaneously, allows the use of non-existent custom domain names, comes with free SSL certificates without application, and supports running multiple PHP versions at the same time. ServBay also offers detailed package versions, including the latest and historical versions, which can be installed and switched at any time. These capabilities are unmatched by other similar products.

Does ServBay support running multiple PHP versions simultaneously?

Yes. ServBay can run multiple instances of different PHP versions concurrently, allowing you to set a different PHP version for each website and quickly switch between different versions.

Which versions of PHP, Python, Node.js, MySQL, MariaDB, and PostgreSQL does ServBay support?

ServBay supports PHP 5.6 - PHP 8.5, Python 2.7, 3.5 - 3.14, Node.js 12 - Node.js 23, MySQL 5.1 - MySQL 9.1, MariaDB 10.4 - MariaDB 11.5, PostgreSQL 10 - PostgreSQL 16, and MongoDB 5.0 - MongoDB 8.0.

Can I compile the PHP modules I need?

Of course! You just need to xcode-select --install to install the relevant compilation tools for macOS, then set the PATH, CFLAGS, and other parameters to the path of the ServBay Development Lib to compile.

Does ServBay support large language models (LLM)?

Absolutely! ServBay integrates Ollama, allowing you to run large language models such as deepseek, qwen, llama, solar, chatglm, etc.

The Next Generation Development Tool

1 app, 2 clicks, and 3 minutes are all you need to set up your web development environment. No need to compile and install dependencies, non-intrusive to the system. Includes various versions of programming languages, databases, domain names, SSL certificates, email servers, and reverse proxies.

"The War Has Not Really Ended": Gaza Reporter on Israeli Attacks & Reuniting with Imprisoned Brother

Democracy Now!
www.democracynow.org
2025-10-20 13:13:14
We speak with Al Jazeera reporter Ibrahim al-Khalili in Gaza, where the shaky ceasefire between Israel and Hamas appears to be holding despite sporadic violence. Gaza officials say Israeli forces have repeatedly violated the agreement, including when they opened fire on a civilian bus, killing 11 me...
Original Article

This is a rush transcript. Copy may not be in its final form.

AMY GOODMAN : We begin today’s show in Gaza, where repeated Israeli attacks have killed nearly 100 Palestinians and wounded over 200 since a Trump-backed ceasefire deal went into effect over a week ago. On Friday, Israeli military forces opened fire at a civilian bus, killing 11 members of a Palestinian family attempting to return home into Gaza City. Among the Abu Shaaban family victims were seven children between the ages of 5 and 13, in an attack many have decried as the deadliest truce violation yet. Israel also bombed the headquarters of the Palestine Media Production Company in central Gaza, killing the broadcast engineer Ahmed Abu Matar.

Israel accused Hamas of killing two Israeli soldiers in Rafah, but Hamas says there are reports the soldiers died when their bulldozer ran over unexploded ordnance and that they’re not active in Rafah.

Gaza officials report Israel has breached the fragile U.S.-brokered agreement some 80 times. Despite the wave of Israeli strikes this weekend, President Trump said the ceasefire is still in place. Israel has also continued to hinder the flow of humanitarian aid into Gaza, again suspending the entrance of aid over the weekend, but Israeli officials are now saying aid distribution in Gaza and the ceasefire have resumed.

This all comes as more Palestinians recently released from Israeli prisons have described torture while in Israeli jails. In a minute, we’ll be joined by Al Jazeera reporter Ibrahim al-Khalili, who reunited with his brother Mohammed last week, after Mohammed was released following 19 months in Israeli prison. This is his report on Al Jazeera, his brother held without charge.

IBRAHIM AL- KHALILI : Here’s my brother. He’s back, after one year and seven months in detention, in Israeli detention, after torture. We got detained with each other, but now — I got released, and now, after one year and seven months, he’s back again home.

[translated] Mohammed, how was the situation in prison after they detained you?

MOHAMMED AL- KHALILI : [translated] It was a big struggle. We were beaten and humiliated in wrong ways. We suffered a lot.

IBRAHIM AL- KHALILI : My brother is so tired. He needs to be back home again. But I can now say the war has really came to an end.

AMY GOODMAN : Last month, Al Jazeera reporter Ibrahim al-Khalili was on air when he broke the news of Israeli forces killing his own family members in Gaza.

IBRAHIM AL- KHALILI : I’m here today reporting from Gaza’s al-Daraj neighborhood, east of Gaza City, where a deadly Israeli airstrike flattened this complete residential block behind me. Many Palestinians have been confirmed killed, and at least 50 Palestinians are still trapped under the rubble. Tragically, many of my close relatives were among those killed in this airstrike. It came without any prior warning at midnight, while people slept.

This block used to shelter at least 100 Palestinians, and they sought refuge here, believing that this residential neighborhood is safe, but they ended up being targeted and brutally killed. This area, once meant to be a place of refuge for many Palestinian families, now has become a site of death and massive destruction. It has turned into a mass graveyard for Palestinians in the light of the unprecedented escalation of Israeli attacks targeting towers, residential blocks and shelter schools, killing many Palestinians and leaving thousands with nowhere to run. Ibrahim al-Khalili, Al Jazeera, Gaza City, Palestine.

AMY GOODMAN : We now go to Gaza, where Al Jazeera reporter Ibrahim al-Khalili is joining us from Al-Shifa Hospital.

Ibrahim, welcome to Democracy Now! I want to start with your first report. I watched it live on Al Jazeera, when you reunited with your brother. Talk about why he was jailed, how long he was jailed.

IBRAHIM AL- KHALILI : Yes. First of all, thank you for having me.

My brother Mohammed, the oldest brother in my family, was detained on the 18th of March, 2024, around the Al-Shifa Medical Complex, where my family used to live. So, that day, the Israeli military stormed Al-Shifa Hospital at midnight without any prior warning, and a loudspeaker started saying, “Don’t move. The area is under extreme siege. Don’t move. The army is operating in your area. And just, basically, don’t move.” I was with my whole family inside my house at that time.

So, the Israeli military, in the morning, after midnight, they stormed my house, and then they took us all out, forced us into the freezing cold, and we were forcibly stripped of our clothes in the freezing weather. After interrogation and torture and harsh circumstances in that harsh weather, I was released, but my brother Mohammed was detained. And since that day, during that time, we did not know anything about his whereabouts. So, he was detained by the Israeli military. When the Israeli military operation assault on Al-Shifa Medical Complex came to an end, we returned back to our homes to find nothing. We have not found my brother Mohammed or the house even standing.

So, after 19 months in Israeli detention, my brother Mohammed got released, and we met up in Nasser Hospital in Khan Younis. And this is a very historic moment for us Palestinians, and for my family in particular, to have my brother back after 19 months in Israeli jail. As I talked to my brother Mohammed — he is tired now — he was telling me he was under extreme torture and interrogation and the harsh circumstances in Israeli prisons. He lost almost 30 kilos. He was almost 100 kilograms, but his weight had dropped to 70 kilograms during the 19 months of detention. So, this tells the whole story about the torture, interrogation and the extreme, harsh conditions all Palestinians go through in the Israeli prisons. They go well, and they just released with the unhealthy, whether mentally or physically.

So, we are — we are all lucky now to have my brother Mohammed back home to his four beautiful children and his wife, who have been waiting for him for so long. And the sense of relief and joy, happiness inside the family prevailed, after they got their loved one, Mohammed, back home after 19 months in Israeli detention. So, this is a historic moment, not for me in particular. We’re talking about 1,700 prisoners were released in that, after the ceasefire took effect. And now these families are happy, and they express a lot of joy to have their loved ones back to Gaza City, after spending years under severe and harsh condition in Israeli jails.

And the situation for other prisoners who are — still remain trapped inside Israeli prisons are very dire. Let’s just focus on these and mention these to our viewers, that the Palestinian prisoners are enduring harsh realities in terms of the lack of basic necessities. As I talked to my brother Mohammed, he was having just two meals a day, and they are not sufficient, and they are not high in proteins to help these Palestinian prisoners survive the next day. And yes, the situation remains dire, whether for the released Palestinians — they endure a lot of harsh memories in Israeli jails that last for them even after they got released from Israeli prisons. And we just focus on the Palestinian prisoners who are still held in detention.

And worth mentioning to remind our viewers that my brother Mohammed was held without any charge. He was not affiliated to any political group whatsoever. So they detained him without any — without any legal reason for him to be detained by the Israeli military and to endure all what he has been through during the 19 months in Israeli prison. These are the harsh realities Palestinian prisoners face during their time of detention in Israeli prisons.

And I talked to many prisoners, and I filmed with a prisoner who just was released after the ceasefire and war have ended. He came back home to find his family members all wiped out of the civil registry. Four hundred family members of that prisoner were killed in Israel’s war. And many, many stories, as well as another prisoner, who’s Mahmoud Abu Foul, who got his legs amputated previously, before the war, and he was detained in Kamal Adwan Hospital. And after interrogation and torture, he lost sight. He lost — he can’t see in his both eyes after the torture that he has endured in Israeli prisons. And as we continue to speak with many prisoners, the same story repeats itself of torture and harsh circumstances and lack of basic necessities. This is the harsh reality that we Palestinians all go through.

Even me, when I was detained, they claimed that I belong to a political group, which is not true, and the serious allegations and accusations against us Palestinians. And we are ordinary civilians living in Gaza City, and they still accuse us of something we don’t belong to. They just make accusations about us so that they can just create some sort of an environment or a reason to detain us and torture us in our very harsh conditions and circumstances.

And the situation also remains dire in Gaza City. The Government Media Office states that 40 — Israel committed 47 — 47 breaches. And they are — Israel is breaching the ceasefire multiple times. Like, we’re talking about 47 times when Israel committed a violation in the ceasefire. And this is something we also endure. Like, yesterday, the Israeli military targeted different sites and locations here in Gaza City. And many, many families who returned to Gaza City after the cease — first phase of the ceasefire took effect, they found nothing but rubble. They returned to a wasteland, and then seeing that Israel resumed or violated the ceasefire yesterday and started to hit different sites and locations.

One of the targeted sites were in south — southeast Gaza City, where 11 family members were targeted without any sort of a warning. The Israeli military should have just issued warning to these families, like, “You are,” for example, “being in a dangerous zone.” But instead, they ended up being targeted and brutally killed. In another attack in the wider area south of the Gaza Strip, at least three Palestinians were killed in a coffee shop without any prior warning. This is a serious violation for a ceasefire to — for us to witness as Palestinians.

And, like, many people have believed that the war has really came to an end, but with these violations that are being committed by the Israeli military on daily basis creates a sense of uncertainty for hundreds of thousands of Palestinians, who wished once that the war will end, and they came back to their homeland and the areas they once called home. And we’re talking about the destruction of much of Gaza’s residential neighborhoods. And these neighborhoods are not just the loss of the property. They are — they represent the loss of memory in the homeland, which thousands of Palestinians once called home, but they return to find nothing but rubble and violations and the lack of basic necessities to know these families will return to Gaza City to survive the next day.

The situation is getting much more dire. The war has not really ended for hundreds of thousands of Palestinians who are still struggling to survive the next day with the lack of basic necessities and the restrictions imposed on Gaza’s border crossings, whether to get the medical supplies and basic necessities, food, water and whatsoever.

At the same time, Israeli military continues to restrict the borders for patients. Hundreds of hundreds of patients are waiting for Gaza’s border crossings to be open and reopened again so that they can get the proper medication they urgently need. As I talk to doctors, medicine and medical equipment are not even available in Gaza to just conduct such operations for chronic diseases patients. So, they need the — first, the borders to be open again, so that they can survive and get the urgent treatment they need abroad. And we have talked about this many, many, many times, about the restrictions of the borders to allow aid into Gaza City. Like, before the war, Gaza needed at least 500 trucks’ worth of food and medicine to help support the population in the Gaza Strip.

AMY GOODMAN : Ibrahim, I just wanted to end by asking you — some prisoners were released, 1,700, in Gaza, to learn that their families had been killed, as you described. Then there were others who were told that their families were killed, in Israeli prison, and found them alive. I wanted to refer to Shadi Abu Sido, who said, “Gaza is now gone,” when he came out of the bus, shouting to the cameras in the southern city of Khan Younis. “It’s like a scene from Judgment Day,” he said of the destruction. Shadi Abu Sido said his world shattered in Israeli detention when guards told him his wife and two children had been killed in the Gaza war. Reuters reports he said, “I heard her voice, I heard the voice of my children, I was astonished, it cannot be explained, they were alive. I saw my wife and children alive. Imagine amid death–life.” He said, about prison, it was “the graveyard of the living.” The journalists who have gotten out and the journalists who have died, if you can comment? Most recently, this Palestine Media Production Company, just yesterday, where a broadcast engineer was killed in yet another Israeli strike, after the ceasefire went into effect, and this production company’s offices were bombed.

IBRAHIM AL- KHALILI : Yeah, yeah, these are one of the methods Israeli military and Israeli army uses to just whipping the prisoners. For example, not just Abu Sido, who was told that his family was wiped out and Gaza has been completely destroyed, but my brother Mohammed was told the same thing. The Israeli military told him that “Your family was killed,” but, in fact, we were still alive in Gaza City. And he told him that “Your brother Ibrahim, your youngest brother Ibrahim, was killed.” And these are the methods used to torture psychologically the prisoners.

So, many, many methods, not just these rumors and lies that the Israeli military uses to just torture Palestinians psychologically. And he had many, many, many of them. And when Mohammed came out, he found out that his family was alive. He told me that, “Listen, I was very worried about you, for you to be killed in Israel’s war on Gaza.” And yeah, as I said, many methods. And Mohammed just shared a testimony, a harrowing testimony, about the Israeli prisons, that I can’t mention, because many methods that human beings can’t bear to go through. And this is — this is the Israeli military, what they are using to torture, whether prisoners or the people who endured two years of Israel’s war on Gaza, of Israel’s assault on the Gaza Strip, that we continue to endure.

And let me say that these prisoners are still, like, bearing the brunt of years and days of torture in Israeli prisons. They told them lies, that your families were completely wiped out, wiped out of the civil registry. And now they came back home to find their families alive. They were shocked, for them to being told that your family is alive, after the Israeli military claimed that your family was killed. And one of the things that the Israeli military told my brother, that they told him, “We destroyed your company.” Mohammed used to have a company in Gaza City related to the detergent and warehousing supplies, and they show him on camera that your company has been completely destroyed. And these are the methods they use just to tell us, Palestinians, that “We are going to destroy you,” or “We have already destroyed you.”

The situation remains dire. Even after the ceasefire took effect, one of the PMP media company was targeted in al-Zawayda area, south of Gaza City. And he was working with us. He was — he was a cameraman. He has kids. But he was — ended up being targeted. He thought that he survived two years of Israel’s war on Gaza. But when the Israeli military just breached the and violated the ceasefire agreement, he was taken away. He was killed in cold blood.

AMY GOODMAN : Ibrahim al-Khalili, I want to thank you so much taking this time with us, as you stand outside Al-Shifa Hospital. Ibrahim is an Al Jazeera reporter based in Gaza. He just reunited with his brother Mohammed, who was released from Israeli detention. He had been held without charge in an Israeli jail for 19 months, as so many of the 1,700 Palestinian prisoners were who were released to Gaza.

Next up, we go to Jerusalem to speak with the Israeli American human rights lawyer Sari Bashi, former program director at Human Rights Watch. Back in 20 seconds.

[break]

AMY GOODMAN : “People Have the Power,” Patti Smith, performing at an anniversary concert of Democracy Now!

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.

Calculating legally compliant rent late fees across U.S. states

Hacker News
www.RentLateFee.com
2025-10-20 13:11:45
Comments...
Original Article
{"ok":true,"service":"rentlatefee","status":"healthy","timestamp":"2025-10-20T13:26:51.999Z"}

Qt Group Buys IAR Systems Group

Hacker News
www.qt.io
2025-10-20 13:09:53
Comments...
Original Article

Qt Group Plc                        Stock exchange release 13 October 2025 at 13.31 p.m. EEST

Qt completes the recommended public cash offer to the shareholders of I.A.R. Systems Group

On 4 July 2025, Qt Group Plc's (" Qt Group ") wholly owned subsidiary The Qt Company Ltd (" The Qt Company " and together with Qt Group, " Qt "), announced a recommended public cash offer to the shareholders of class B shares (the " Shares " or, individually, a " Share ") in I.A.R. Systems Group AB (publ) (" IAR" ) , to tender all their Shares at a price of SEK 180 in cash per Share (the " Offer "). The Shares in IAR are traded on Nasdaq Stockholm, Mid Cap. An offer document relating to the Offer was published on 15 August 2025.

At the end of the acceptance period on 10 October 2025, the Offer had been accepted by shareholders with a total of 12,037,848 Shares in IAR, corresponding to 94.49 per cent of the outstanding shares and votes in IAR. [1] As a result, The Qt Company controls in total 12,037,848 Shares in IAR, corresponding to 94.49 per cent of the outstanding shares and votes in IAR. [2]

The Qt Company has decided to complete the Offer. All conditions are satisfied or have been waived. Settlement for Shares tendered in the Offer during the initial acceptance period will be initiated on or around 17 October 2025.

Juha Varelius, President & CEO at Qt Group Plc, comments:

" We are pleased to the offer being accepted to such an extent that we can successfully complete the acquisition. We look forward to working closely with the IAR team to fully realise the potential of the combination by expanding our total addressable market, strengthening our global presence and customer offering ."

To allow for those shareholders who have not yet accepted the Offer to tender their Shares to The Qt Company, the acceptance period is extended until 27 October 2025 at 3.00 p.m. CET. Settlement for Shares tendered in the Offer during the extended acceptance period is expected to be initiated on or around 3 November 2025.

Since the Offer is unconditional, shareholders who have already accepted or will accept the Offer during the extended acceptance period, have no right to withdraw their acceptances.

The Qt Company intends to initiate compulsory redemption proceedings in accordance with the Swedish Companies Act to acquire all shares not tendered in the Offer and to promote delisting of IAR’s Shares from Nasdaq Stockholm.

Information about the Offer is made available at www.nordic-software-offer.com .

Advisors

Qt has appointed Nordea Bank Abp as lead financial advisor and Stifel Nicolaus Europe Limited as financial advisor. Krogerus Attorneys, Advokatfirman Vinge and Freshfields LLP are acting as legal advisors in connection with the Offer.

Investor relations contact:

pr@qt.io

Media contact:

Sandra Uitto, Fogel & Partners

Tel.: +46 (0)73 892 1740

E-Mail: qtgroup@fogelpartners.se

Distribution:

Nasdaq Helsinki

Key media

www.qt.io

Important information

This stock exchange release does not constitute an offer to buy or sell Shares, nor does it constitute an invitation to offer to buy or sell Shares. Investors considering tendering their Shares in the Offer by Qt Group's subsidiary The Qt Company should rely only on information disclosed by The Qt Company as the offeror of the Offer.

The Offer is not being made to persons whose participation in the Offer requires that an additional offer document be prepared or registration effected or that any other measures be taken in addition to those required under Swedish law.

The Offer is not being made, directly or indirectly, in or into Australia, Canada, Hong Kong, New Zealand, Japan, Singapore, South Africa, South Korea, Russia, Belarus or in any other jurisdiction where such offer would be prohibited by applicable law pursuant to legislation, restrictions and regulations in the relevant jurisdiction, by use of mail or any other communication means or instrumentality (including, without limitation, facsimile transmission, electronic mail, telex, telephone and the Internet) of interstate or foreign commerce, or of any facility of national securities exchange or other trading venue, of Australia, Canada, Hong Kong, New Zealand, Japan, Singapore, South Africa, South Korea, Russia, Belarus or in any other jurisdiction where such offer would be prohibited by applicable law pursuant to legislation, restrictions and regulations in the relevant jurisdiction, and the Offer cannot be accepted by any such use or by such means, instrumentality or facility of, in or from, Australia, Canada, Hong Kong, New Zealand, Japan, Singapore, South Africa, South Korea, Russia, Belarus or in any other jurisdiction where such offer would be prohibited by applicable law pursuant to legislation, restrictions and regulations in the relevant jurisdiction. Accordingly, any documentation relating to the Offer are not being and should not be sent, mailed or otherwise distributed or forwarded in or into Australia, Canada, Hong Kong, New Zealand, Japan, Singapore, South Africa, South Korea, Russia, Belarus or in any other jurisdiction where such offer would be prohibited by applicable law pursuant to legislation, restrictions and regulations in the relevant jurisdiction.

The Offer, the information and documents relating to the Offer are not being made and have not been approved by an authorised person for the purposes of section 21 of the UK Financial Services and Markets Act 2000 (the " FSMA "). The communication of the information and documents relating to the Offer are exempt from the restriction on financial promotions under section 21 of the FSMA on the basis that they are a communication by or on behalf of a body corporate which relates to a transaction to acquire day to day control of the affairs of a body corporate; or to acquire 50 per cent or more of the voting shares in a body corporate, within article 62 of the UK Financial Services and Markets Act 2000 (Financial Promotion) Order 2005.

The initial acceptance period for the Offer commenced on 18 August 2025 and expired at 3.00 p.m. CEST on 10 October 2025. The extended acceptance period expires at 3.00 p.m. CET on 27 October 2025.

Statements in this stock exchange release relating to future status or circumstances, including statements regarding future performance, growth and other trend projections and the other benefits of the Offer, are forward-looking statements. These statements may generally, but not always, be identified by the use of words such as "anticipates", "intends", "expects", "believes", or similar expressions. By their nature, forward-looking statements involve risk and uncertainty because they relate to events and depend on circumstances that will occur in the future. There can be no assurance that actual results will not differ materially from those expressed or implied by these forward-looking statements due to many factors, many of which are outside the control of Qt Group. Any such forward-looking statements speak only as of the date on which they are made, and Qt Group has no obligation (and undertakes no such obligation) to update or revise any of them, whether as a result of new information, future events or otherwise, except for in accordance with applicable laws and regulations.

Information for shareholders in the United States

The Offer by The Qt Company described in this stock exchange release is not made by Qt Group and, as made by The Qt Company, is made for the issued and outstanding shares of IAR, a company incorporated under Swedish law, and is subject to Swedish disclosure and procedural requirements, which may be different from those of the United States. The Offer is made in the United States pursuant to Section 14(e) of the U.S. Securities Exchange Act of 1934, as amended (the " U.S. Exchange Act ") and Regulation 14E thereunder, to the extent applicable and otherwise in compliance with the disclosure and procedural requirements of Swedish law, including with respect to withdrawal rights, the Offer timetable, notices of extensions, announcements of results, settlement procedures (including as regards to the time when payment of the consideration is rendered) and waivers of conditions, which may be different from requirements or customary practices in relation to U.S. domestic tender offers. The Qt Company's ability to waive the conditions to the Offer (both during and after the end of the acceptance period) and the shareholders' ability to withdraw their acceptances, are not the same under a tender offer governed by Swedish law as under a tender offer governed by U.S. law. Holders of the shares in IAR domiciled in the United States (the " U.S. Holders ") are encouraged to consult with their own advisors regarding the Offer.

IAR's financial statements and all financial information included herein, or any other documents relating to the Offer, have been or will be prepared in accordance with IFRS and may not be comparable to the financial statements or financial information of companies in the United States or other companies whose financial statements are prepared in accordance with U.S. generally accepted accounting principles. The Offer is made to the U.S. Holders on the same terms and conditions as those made to all other shareholders of IAR to whom an offer is made. Any information documents, including the offer document, are being disseminated to U.S. Holders on a basis comparable to the method pursuant to which such documents are provided to IAR's other shareholders.

The Offer, which is subject to Swedish law, is being made to the U.S. Holders in accordance with the applicable U.S. securities laws, and applicable exemptions thereunder. To the extent the Offer is subject to U.S. securities laws, those laws only apply to U.S. Holders and thus will not give rise to claims on the part of any other person. The U.S. Holders should consider that the price for the Offer is being paid in SEK and that no adjustment will be made based on any changes in the exchange rate.

It may be difficult for IAR's shareholders to enforce their rights and any claims they may have arising under the U.S. federal or U.S. state securities laws in connection with the Offer, since IAR and The Qt Company are located in countries other than the United States, and some or all of their respective officers and directors may be residents of countries other than the United States. IAR's shareholders may not be able to sue IAR or The Qt Company or their respective officers or directors in a non-U.S. court for violations of U.S. securities laws. Further, it may be difficult to compel IAR or The Qt Company and/or their respective affiliates to subject themselves to the jurisdiction or judgment of a U.S. court.

To the extent permissible under applicable law and regulations and pursuant to Rule 14e-5(b) of the U.S. Exchange Act, The Qt Company and its affiliates or its brokers and its brokers' affiliates (acting as agents for The Qt Company or its affiliates, as applicable) may from time to time and during the pendency of the Offer, and other than pursuant to the Offer, directly or indirectly purchase or arrange to purchase shares of IAR, or any securities that are convertible into, exchangeable for or exercisable for such shares. These purchases may occur either in the open market at prevailing prices or in private transactions at negotiated prices, and information about such purchases will be disclosed by means of a press release or other means reasonably calculated to inform U.S. Holders of such information. In addition, the financial advisors to The Qt Company may also engage in ordinary course trading activities in securities of IAR, which may include purchases or arrangements to purchase such securities as long as such purchases or arrangements are in compliance with the applicable law. Any information about such purchases will be announced in Swedish and in a non-binding English translation available to the U.S. Holders through relevant electronic media if, and to the extent, such announcement is required under applicable Swedish or U.S. law, rules or regulations.

The receipt of cash pursuant to the Offer by a U.S. Holder may be a taxable transaction for U.S. federal income tax purposes and under applicable U.S. state and local, as well as foreign and other, tax laws. Each shareholder is urged to consult an independent professional adviser regarding the tax consequences of accepting the Offer. Neither The Qt Company nor any of its affiliates and their respective directors, officers, employees or agents or any other person acting on their behalf in connection with the Offer shall be responsible for any tax effects or liabilities resulting from acceptance of the Offer.

NEITHER THE U.S. SECURITIES AND EXCHANGE COMMISSION NOR ANY U.S. STATE SECURITIES COMMISSION HAS APPROVED OR DISAPPROVED THE OFFER, PASSED ANY COMMENTS UPON THE MERITS OR FAIRNESS OF THE OFFER, PASSED ANY COMMENT UPON THE ADEQUACY OR COMPLETENESS OF THIS STOCK EXCHANGE RELEASE OR PASSED ANY COMMENT ON WHETHER THE CONTENT IN THIS STOCK EXCHANGE RELEASE IS CORRECT OR COMPLETE. ANY REPRESENTATION TO THE CONTRARY IS A CRIMINAL OFFENCE IN THE UNITED STATES.

Disclaimer

Nordea Bank Abp (" Nordea "), which is supervised by the European Central Bank and the Finnish Financial Supervisory Authority and Stifel Nicolaus Europe Limited (" Stifel "), which is authorised and regulated by the Financial Conduct Authority are acting as financial advisors to The Qt Company and no one else, in connection with the Offer and will not regard any other person as their client in relation to the Offer and will not be responsible to anyone other than The Qt Company for providing the protection afforded to their respective clients, or for providing advice in relation to the Offer or any other matters referred to in this announcement. Neither Nordea, Stifel, nor any of their affiliates, or their or any of their affiliates' respective employees, board members, officers, vendors, advisors, members, successors, representatives or agents owes or accepts any duty, liability or responsibility whatsoever (whether direct or indirect, consequential, whether in contract, in tort, in delict, under statute or otherwise) to any person who is not a client of Nordea or Stifel, respectively, in connection with the Offer or otherwise.

[1] Corresponding to 90.05 per cent of the total number of shares and votes in IAR.

[2] Corresponding to 90.05 per cent of the total number of shares and votes in IAR.

Documents & Links

AWS Outage: A Single Cloud Region Shouldn't Take Down the World. But It Did

Hacker News
faun.dev
2025-10-20 13:05:02
Comments...
Original Article

A significant outage of Amazon Web Services (AWS) disrupted numerous high-profile websites and services, including Amazon, Snapchat, Disney+, Reddit, and Canva. The outage was attributed to an "operational issue" affecting over 70 AWS services, causing widespread disruptions in cloud-based games and crypto exchanges like Coinbase. AWS reported signs of recovery shortly after the incident, but some services, such as Reddit, continued to experience issues.

The outage also impacted government websites like the UK's HMRC and various banking services, including Lloyds, Halifax, and the Bank of Scotland, leading to declined card transactions and inaccessible online banking. AWS identified the issue as related to DNS resolution in the US-EAST-1 region and worked on multiple paths to accelerate recovery. Despite significant recovery signs, some services faced delays due to a backlog of queued requests.

The incident highlighted the vulnerability of relying on a few major cloud service providers, as disruptions can have extensive ripple effects across numerous platforms.

Matrix Conference 2025 Highlights

Hacker News
element.io
2025-10-20 13:00:28
Comments...
Original Article

The Matrix Conference 2025 was a huge success; the energy and enthusiasm was just incredible!

We were delighted to be the anchor sponsor - thanks to The Matrix Foundation and everyone else that organised the conference, those who presented, the others sponsors and of course all those who attended!

The overriding vibe of the conference was one of incredible momentum - with so many governments presenting on their Matrix-based initiatives, there was a genuine realisation that Matrix is the future of government and inter-governmental communications.

The discussion between sessions, and in the evenings, had a consistent theme. Governments want communications that are:

1. Digitally sovereign - meaning end-user organisations have complete autonomy over their technology stack. Crucially, that means no vendor lock-in. That so many competing vendors sponsored and attended The Matrix Conference underlines the health of the Matrix ecosystem.

2. Interoperable - to both enable digital sovereignty and ensure that separate organisations can easily communicate with each other. The interoperability delivered by the Matrix open standard is absolutely crucial in enabling large-scale federated communications between multiple organisations.

3. Resilient - a decentralised communications network provides a far more robust communications architecture than a centralised network, which is paramount for government communications (as we write this blog post, Signal, Slack, Zoom and others are down due to their centralised design and a dependency on AWS).

4. Secure - end-to-end encryption is, of course, fundamental and viewed as ‘table stakes.’

An open standard based on open source software, Matrix delivers the digital sovereignty, interoperability, resilience and security that governments need to transform the way they communicate; both within their own nation and across borders. These are the key benefits driving Matrix adoption.

The Matrix Conference on-demand

All the presentations given at The Matrix Conference are available here . Each and every presentation is well worth watching.

The Matrix State of the Union

Matthew Hodgson gives an excellent overview of the entire Matrix universe.

For those interested in government and public sector adoption of Matrix, don’t miss Amandine Le Pape’s How Matrix is becoming the communication standard for the public sector .

Element's keynote

Neil Johnson’s talk includes a power-packed review of Element’s work over the last 12 months (and some sneak previews).

In particular, look out for

ESS Community

being installed in under a minute - and then, when the time is right, it being a seamless live-upgrade to

ESS Pro

! The idea here is that ESS Community can be used, free of charge, for casual use (up to about 100 users) and also act as a way for organisations to run small scale evaluations. Those evaluations can then be easily upgraded to ESS Pro for increased scalability, performance, enterprise features and SLA-backed support.

Element product presentations

ESS - Element’s distribution for Matrix deployments , from Patrick Maier, gives more detail on Element Server Suite; covering ESS Community, ESS Pro and ESS for TI-Messenger. For those that want to get into the nitty-gritty, take a look at Gael Goinvic’s Getting Started with ESS Community workshop.

Element Pro , our new app developed specifically for the workplace, created a huge buzz - particularly in-app colour theming, and the ability to now create a whitelabelled mobile app without the expense of maintaining a fork. Catch all the details from Andreas Sisask in his session; Element X and Pro . We’d also recommend the Element X Web presentation and the session on Element Call .

The real stars of the show!

The best proof points, of course, are the deployments and initiatives that are already underway. The conference was packed with presentations from governments and public sector organisations, who are the real stars of the show!

Do take a look at the outstanding work being done across governments, NGOs and public sector organisations.

Trialing Matrix within the European Commission for resilient and sovereign communications

Nicolas Dubois, European Commission

Matrix French gov deployment: opening a private federation securely

Mathieu Velten, DINUM

Consolidating Germany’s administrative communication: Towards a joint Matrix-based architecture

Dominik Braun, FITKO

Sweden’s Public Sector in Transition

Anna Engström & Kenneth Edwall, Försäkringskassan

Luxchat(4gov)

Patrick Weber, Luxembourg government

Medical Care over Matrix with Delay during a Simulated Moonwalk

Jan-Lukas Furmanek & Aileen Rabsahl

Matrix’s role in the German Healthcare System

Marie Ruddeck, Gematik

Nationwide Rollout of Matrix-Based Instant Messaging (TI-M) for 74 Million Statutorily Insured Citizens

Jan Kohnert, Gematik

Secure communication leveraging the Matrix protocol for UNICC and its partners

Tima Soni, UNICC

Supporting TF-X with Matrix: best practices and pitfalls

Jeroen Franssen, NATO ACT

The German BundesMessenger

Kai A. Hiller, BWI

No Desk Is an Island: Enabling Cross-Border Workspace Communication

Alexander Smolianitski, ZenDiS

European governments are rightly determined to control their own digital destiny, and are embracing Matrix as the foundation for real time communications.

Seeing the European Commission, France, FITKO, Germany, the German healthcare system, Luxembourg, NATO, Sweden, United Nations, ZenDiS and the European Space Agency all presenting on their Matrix deployments was just mindblowing!

Knowing how many other governments were also in attendance, soaking up insights and tips for their forthcoming Matrix projects, makes us really excited for a Matrix-based future that transforms cross-border collaboration and helps support a united, digitally sovereign Europe.

👋 See you all next year!

Element's keynote at The Matrix Conference 2025
Element's keynote at The Matrix Conference 2025

Headlines for October 20, 2025

Democracy Now!
www.democracynow.org
2025-10-20 13:00:00
No Kings: An Estimated 7 Million People Participate in Nationwide Anti-Trump Rallies, Israel Kills Dozens in Gaza and Temporarily Halts Aid Deliveries, President Trump Threatens to Send National Guard Troops to San Francisco, Federal Government Shutdown Enters 20th Day, Trump Threatens to Cut Off Fo...
Original Article

You turn to us for voices you won't hear anywhere else.

Sign up for Democracy Now!'s Daily Digest to get our latest headlines and stories delivered to your inbox every day.

Independent Global News

Donate

Headlines October 20, 2025

Watch Headlines

No Kings: An Estimated 7 Million People Participate in Nationwide Anti-Trump Rallies

Oct 20, 2025

An estimated 7 million people took part in No Kings rallies Saturday to protest President Trump’s embrace of authoritarianism. Organizers say protests were held in about 2,600 sites across all 50 states in what was one of the largest days of protest in U.S. history, surpassing the first No Kings day of action in June. In Chicago, the protest stretched for two miles as about 250,000 took to the streets. Chicago Mayor Brandon Johnson addressed the crowd and called for a general strike. An estimated 200,000 rallied in Washington, D.C. Senator Bernie Sanders addressed the crowd.

Sen. Bernie Sanders : “This moment is not just about one man’s greed, one man’s corruption or one man’s contempt for the Constitution. This is about a handful of the wealthiest people on Earth who, in their insatiable greed, have hijacked our economy and our political system in order to enrich themselves at the expense of working families throughout this country.”

At a No Kings rally in Atlanta, speakers included former gubernatorial candidate and voting rights activist Stacey Abrams.

Stacey Abrams : “And yet, we are living in a moment where ethnofascists are in charge of the government, when Christian nationalists are making economic policy, when we have a secret police telling us who we are. Their destination is division. Their destination is destruction. And their destination is denied. We will not go back, and we will not let them turn us around.”

President Trump responded to the No Kings protests by posting an AI-generated video that showed him wearing a crown while flying a jet labeled “King Trump.” In the video, Trump is seen dumping what looks like liquid feces on the heads of protesters.

Trump has also repeated his threat to investigate billionaire philanthropist George Soros, who Trump claimed had helped fund the No Kings protest, which attracted massive crowds despite attempts by House Speaker Mike Johnson and other Republicans to label the protest as a “hate America” rally.

Israel Kills Dozens in Gaza and Temporarily Halts Aid Deliveries

Oct 20, 2025

Israel carried out a wave of deadly attacks in Gaza over the weekend and temporarily halted aid deliveries, but Israel is now saying it will resume enforcement of the U.S.-brokered ceasefire. Israel had accused Hamas of killing two Israeli soldiers in Rafah, but there are reports the soldiers died when their bulldozer ran over unexploded ordnance.

Officials in Gaza say Israel has killed at least 97 Palestinians and injured 230 since the ceasefire came into effect on October 10. On Friday, Israeli forces fired a tank shell, killing 11 members of a Palestinian family in the Zeitoun neighborhood of Gaza City. The dead included three women and seven children between the ages of 5 and 13.

Israel also bombed the headquarters of the Palestine Media Production Company in central Gaza, killing broadcast engineer Ahmed Abu Matar. His colleague, Ajeb Mohammed, described the attack.

Ajeb Mohammed : “We were sitting here safe and sound. And as you can see, the place is closed off to us. No one enters or leaves. We are all journalists. No strangers were entering or staying with us. Then the bombing happened right next to us, in the middle of the chalet. Our colleague, the broadcasting engineer, was martyred, and our colleague’s child son was also martyred.”

President Trump Threatens to Send National Guard Troops to San Francisco

Oct 20, 2025

President Trump says the government will send National Guard troops to San Francisco and may invoke the Insurrection Act to do so. It comes as President Trump asked the Supreme Court on Friday to allow him to send National Guard troops to the Chicago area by lifting lower court orders blocking the deployment. Meanwhile, seven officials in Tennessee are suing the Trump administration for sending the National Guard to Memphis.

The lawsuit, backed by the nonprofit Democracy Forward, says, “Our nation’s founders recognized that military rule was incompatible with liberty and democracy. … The facts on the ground cannot justify Defendants’ overreach.”

Meanwhile, Vermont’s Republican Governor Phil Scott blasted Trump’s deployment of National Guard troops to Chicago and Portland, calling it unconstitutional. It follows Oklahoma’s GOP Governor Kevin Stitt criticizing the deployment of the Texas National Guard to Illinois.

Federal Government Shutdown Enters 20th Day

Oct 20, 2025

The federal government shutdown has entered its 20th day. Senate Democrats proposed a bill last week to keep funding the government through October 31, reversing cuts to Medicaid and extending Affordable Care Act subsidies, but that measure failed to garner a veto-proof majority of 60 votes. It comes as President Trump has tried to lay off thousands of furloughed workers, but that move was temporarily halted by the courts.

Trump Threatens to Cut Off Foreign Aid to Colombia and Launch Attacks Inside Venezuela

Oct 20, 2025

President Trump is threatening to cut off foreign aid to Colombia and raise tariffs on Colombian goods after Colombian President Gustavo Petro condemned the U.S. for blowing up boats in the Caribbean, many off the coast of Venezuela. In recent weeks, the U.S. has blown up at least seven boats, alleging without proof they were being used for drug trafficking. Petro accused the U.S. of killing innocent Colombians. Trump responded by calling Petro an “illegal drug leader.” Petro then responded by writing, “Trying to promote peace in Colombia is not being a drug trafficker.” Meanwhile, the U.S. has repatriated two individuals from Ecuador and Colombia who survived a deadly U.S. strike on a vessel last week.

This all comes as President Trump is threatening to begin launching attacks inside Venezuela despite reports that Venezuelan President Nicolás Maduro has offered to give the U.S. a stake in his country’s oil wealth and other natural resources. Trump was asked about this on Friday at the White House and responded by using an expletive to warn Venezuela.

Reporter : “… offering mediation.”

President Donald Trump : “He’s” —

Reporter : “What could he do in order to stop that?”

President Donald Trump : “He has — he has offered everything. He’s offered everything. You’re right. You know why? Because he doesn’t want to [bleep] around with the United States.”

Rodrigo Paz Wins Bolivian Presidential Election

Oct 20, 2025

In Bolivia, Senator Rodrigo Paz won Sunday’s presidential election, defeating another right-wing candidate, former President Jorge “Tuto” Quiroga. Paz’s election marks the end of nearly 20 years of rule in Bolivia by MAS , the Movement Toward Socialism party, which first came to power in 2006 with the election of Evo Morales. Rodrigo Paz is the son of former Bolivian President Jaime Paz Zamora.

Trump Urges Ukrainian President Zelensky to Accept Putin’s Terms to End the War

Oct 20, 2025

Ukrainian drones struck a major gas processing plant in southern Russia on Sunday, sparking a fire. It follows an explosive White House meeting on Friday between Ukrainian President Volodymyr Zelensky and President Trump that reportedly descended into a shouting match where Trump pressured Zelensky to accept Russia’s terms for ending the war. Trump also refused Ukraine’s request for long-range Tomahawk missiles. Trump reportedly told Zelensky, “if [Putin] wants it, he will destroy you.” Speaking to reporters, Trump referred to Ukraine’s Donbas region as “cut up,” urging Ukraine to “leave it the way it is right now,” which would essentially mean ceding territory to Russia. It comes as Trump and Putin are set to meet in Budapest in the coming weeks.

Prince Andrew Announces He’s Giving Up Royal Titles Ahead of the Publication of Virginia Giuffre’s Memoir

Oct 20, 2025

Prince Andrew has announced he is giving up his royal titles ahead of the publication of an explosive book by Jeffrey Epstein survivor Virginia Giuffre. In her book, “Nobody’s Girl: A Memoir of Surviving Abuse and Fighting for Justice,” Giuffre details how she was forced to have sex with Andrew beginning when she was 17. Giuffre also said she was beaten and raped by a “well-known prime minister.” In the book, Giuffre also details how she was recruited to work for Epstein by Ghislaine Maxwell, who met her at Donald Trump’s Mar-A-Lago resort, where both Giuffre and her father worked. The Metropolitan Police in London are investigating claims that Prince Andrew tried to ask his bodyguard to obtain personal information about Giuffre before a photo of the two of them, including Maxwell, was released to the public. A British government minister said that leaked emails show Prince Andrew had passed on Giuffre’s date of birth and Social Security number to his bodyguard. Virginia Giuffre died reportedly by suicide earlier this year in Australia.

Vermont Republican Legislator Resigns over Racist and Antisemitic Group Chat

Oct 20, 2025

Image Credit: Vermont State Legislature

Vermont state Senator Republican Samuel Douglass announced his resignation after it was revealed that he was involved in the exchange of racist and antisemitic messages with other Young Republican leaders. Douglass was the only elected official in the chat. Last Friday, the New York GOP officials voted to suspend its Young Republican chapter. In one message, the chair of the New York State Young Republicans, Peter Giunta, wrote, “I love Hitler.”

Trump Commutes Sentence of Former Republican Congressman George Santos

Oct 20, 2025

President Trump commuted the seven-year sentence of former Republican Congressmember George Santos. Santos was expelled from Congress in 2023 and pleaded guilty earlier this year to wire fraud and identity theft, including lying to Congress, stealing money from campaign donors and fraudulently collecting unemployment benefits. Trump’s decision to commute Santos prompted outrage from other lawmakers, including Republicans. New York Republican Congressmember Nick LaLota wrote on X, “George Santos didn’t merely lie — he stole millions, defrauded an election, and his crimes (for which he pled guilty) warrant more than a three-month sentence.” In one of his first interviews since being released, Santos said he was now free to get botox.

Kaiser Permanente Healthcare Workers End Five-Day Strike

Oct 20, 2025

A five-day strike organized by tens of thousands of Kaiser Permanente nurses and other frontline medical staff in California, Hawaii and Oregon has ended. No agreement has reportedly been reached, but the union representing the healthcare workers said there’s “new momentum” to begin negotiations. The union is requesting a salary increase of 25% over four years.

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.

Non-commercial news needs your support

We rely on contributions from our viewers and listeners to do our work.
Please do your part today.

Make a donation

Show HN: Playwright Skill for Claude Code – Less context than playwright-MCP

Hacker News
github.com
2025-10-20 12:58:35
Comments...
Original Article

Playwright Skill for Claude Code

General-purpose browser automation as a Claude Skill

A Claude Skill that enables Claude to write and execute any Playwright automation on-the-fly - from simple page tests to complex multi-step flows. Packaged as a Claude Code Plugin for easy installation and distribution.

Claude autonomously decides when to use this skill based on your browser automation needs, loading only the minimal information required for your specific task.

Features

  • Any Automation Task - Claude writes custom code for your specific request, not limited to pre-built scripts
  • Visible Browser by Default - See automation in real-time with headless: false
  • Zero Module Resolution Errors - Universal executor ensures proper module access
  • Progressive Disclosure - Concise SKILL.md with full API reference loaded only when needed
  • Safe Cleanup - Smart temp file management without race conditions
  • Comprehensive Helpers - Optional utility functions for common tasks

Installation

This skill can be installed via the Claude Code plugin system or manually.

Option 1: Via Plugin System (Recommended)

# Add this repository as a marketplace
/plugin marketplace add lackeyjb/playwright-skill

# Install the plugin
/plugin install playwright-skill@playwright-skill

# Navigate to the skill directory and run setup
cd ~/.claude/plugins/marketplaces/playwright-skill/skills/playwright-skill
npm run setup

Verify installation by running /help to confirm the skill is available.

Option 2: Manual Git Clone

Install directly from GitHub to your skills directory:

Global Installation (Available Everywhere):

# Navigate to your Claude skills directory
cd ~/.claude/skills

# Clone the skill
git clone https://github.com/lackeyjb/playwright-skill.git

# Navigate into the skill directory (note the nested structure)
cd playwright-skill/skills/playwright-skill

# Install dependencies and Chromium browser
npm run setup

Project-Specific Installation:

# Install in a specific project
cd /path/to/your/project
mkdir -p .claude/skills
cd .claude/skills
git clone https://github.com/lackeyjb/playwright-skill.git
cd playwright-skill/skills/playwright-skill
npm run setup

Option 3: Download Release

  1. Download the latest release from GitHub Releases
  2. Extract to:
    • Global: ~/.claude/skills/playwright-skill
    • Project: /path/to/your/project/.claude/skills/playwright-skill
  3. Navigate to the skill directory and run setup:
    cd playwright-skill/skills/playwright-skill
    npm run setup

Verify Installation

Run /help to confirm the skill is loaded, then ask Claude to perform a simple browser task like "Test if google.com loads".

Quick Start

After installation, simply ask Claude to test or automate any browser task. Claude will write custom Playwright code, execute it, and return results with screenshots and console output.

Usage Examples

Test Any Page

"Test the homepage"
"Check if the contact form works"
"Verify the signup flow"

Visual Testing

"Take screenshots of the dashboard in mobile and desktop"
"Test responsive design across different viewports"

Interaction Testing

"Fill out the registration form and submit it"
"Click through the main navigation"
"Test the search functionality"

Validation

"Check for broken links"
"Verify all images load"
"Test form validation"

How It Works

  1. Describe what you want to test or automate
  2. Claude writes custom Playwright code for the task
  3. The universal executor (run.js) runs it with proper module resolution
  4. Browser opens (visible by default) and automation executes
  5. Results are displayed with console output and screenshots

Configuration

Default settings:

  • Headless: false (browser visible unless explicitly requested otherwise)
  • Slow Motion: 100ms for visibility
  • Timeout: 30s
  • Screenshots: Saved to /tmp/

Project Structure

playwright-skill/
├── .claude-plugin/
│   ├── plugin.json          # Plugin metadata for distribution
│   └── marketplace.json     # Marketplace configuration
├── skills/
│   └── playwright-skill/    # The actual skill (Claude discovers this)
│       ├── SKILL.md         # What Claude reads (314 lines)
│       ├── run.js           # Universal executor (proper module resolution)
│       ├── package.json     # Dependencies & setup scripts
│       └── lib/
│           └── helpers.js   # Optional utility functions
├── API_REFERENCE.md         # Full Playwright API reference (630 lines)
├── README.md                # This file - user documentation
├── CONTRIBUTING.md          # Contribution guidelines
└── LICENSE                  # MIT License

Advanced Usage

Claude will automatically load API_REFERENCE.md when needed for comprehensive documentation on selectors, network interception, authentication, visual regression testing, mobile emulation, performance testing, and debugging.

Dependencies

  • Node.js >= 14.0.0
  • Playwright ^1.48.0 (installed via npm run setup )
  • Chromium (installed via npm run setup )

Troubleshooting

Playwright not installed? Navigate to the skill directory and run npm run setup .

Module not found errors? Ensure automation runs via run.js , which handles module resolution.

Browser doesn't open? Verify headless: false is set. The skill defaults to visible browser unless headless mode is requested.

Install all browsers? Run npm run install-all-browsers from the skill directory.

What is a Claude Skill?

Skills are modular capabilities that extend Claude's functionality. Unlike slash commands that you invoke manually, skills are model-invoked—Claude autonomously decides when to use them based on your request.

When you ask Claude to test a webpage or automate browser interactions, Claude discovers this skill, loads the necessary instructions, executes custom Playwright code, and returns results with screenshots and console output.

Contributing

Contributions are welcome. Fork the repository, create a feature branch, make your changes, and submit a pull request. See CONTRIBUTING.md for details.

Learn More

License

MIT License - see LICENSE file for details.

The MacBook Air 2025 Is Now Cheaper Than a Random Mid-Range Windows Laptop

Hacker News
kotaku.com
2025-10-20 12:39:38
Comments...
Original Article

You know that feeling when you walk past an Apple Store and imagine what it would be like to own one of those sleek MacBooks but then you remember your budget and keep walking? That mental barrier just got a lot easier to cross: The brand-new 2025 MacBook Air with the M4 chip and 256GB storage is currently $849 on Amazon, down from $999 . We’ve n ever seen a MacBook this new drop this low and even older models never hit prices like this.

See at Amazon

Why This MacBook Air Is a Game Changer for Average Users

The 2025 MacBook Air answers a question that has been plaguing laptop shoppers for years: how do you achieve a machine that is powerful enough to do actual work, thin enough to take everywhere with you and runs cool enough that sitting on your lap won’t make you feel as though you are holding a space heater? The M4 chip does them all with ease. Built on a 3-nanometer process , this chip packs serious power.

The 13.6-inch Liquid Retina screen puts a 2560×1664-pixel resolution at your fingertips, translating to very clear text and rich color. The screen is able to reproduce more than a billion colors and 500 nits brightness so you can really see what you are doing even next to a sunny window. The thin-bezel design uses the minimum amount of physical space to achieve maximum screen area without bulking up the laptop itself.

Apple has boosted the base level RAM to 16GB of unified memory for this refresh that is a notable increase from the 8GB that previous base levels shipped with. Unified memory is not the same as regular RAM as the CPU and GPU share the same pool of resources and allows for faster access to data and easier multitasking. In everyday experience, that translates to the ability to edit photos in Adobe Lightroom while keeping open Spotify, Slack, and Safari with dozens of tabs without the stuttering that is otherwise a given. The 256GB SSD is quick storage for your most critical files and apps .

The 12MP Center Stage camera is a nice touch that alters the experience of video calls: Machine learning powers Center Stage to zoom and pan dynamically and keeps you in-frame no matter where you move. If you are the type that gestures with their hands or moves back and forth in their seat during long Zoom calls, this camera moves with a silky-smooth flow that keeps you in-frame at all times.

At $849, you are getting the latest technology from Apple at a price point that makes the MacBook Air affordable for anyone that was priced out of the world of Apple up until this point.

See at Amazon

Go proposal: Compare IP subnets

Lobsters
antonz.org
2025-10-20 12:30:35
Comments...
Original Article

Part of the Accepted! series, explaining the upcoming Go changes in simple terms.

Compare IP address prefixes the same way IANA does.

Ver. 1.26 • Stdlib • Low impact

Summary

An IP address prefix represents a IP subnet. These prefixes are usually written in CIDR notation:

10.0.0.0/8
127.0.0.0/8
169.254.0.0/16
203.0.113.0/24

In Go, an IP prefix is represented by the netip.Prefix type.

The new Prefix.Compare method lets you compare two IP prefixes, making it easy to sort them without having to write your own comparison code. The imposed order matches both Python's implementation and the assumed order from IANA.

Motivation

When the Go team initially designed the IP subnet type ( net/netip.Prefix ), they chose not to add a Compare method because there wasn't a widely accepted way to order these values.

Because of this, if a developer needs to sort IP subnets — for example, to organize routing tables or run tests — they have to write their own comparison logic. This results in repetitive and error-prone code.

The proposal aims to provide a standard way to compare IP prefixes. This should reduce boilerplate code and help programs sort IP subnets consistently.

Description

Add the Compare method to the netip.Prefix type:

// Compare returns an integer comparing two prefixes.
// The result will be 0 if p == p2, -1 if p < p2, and +1 if p > p2.
func (p Prefix) Compare(p2 Prefix) int

Compare orders two prefixes as follows:

  • First by validity (invalid before valid).
  • Then by address family (IPv4 before IPv6).
    10.0.0.0/8 < ::/8
  • Then by masked IP address (network IP).
    10.0.0.0/8 < 10.0.1.0/8
  • Then by prefix length.
    10.0.0.0/8 < 10.0.0.0/16
  • Then by unmasked address (original IP).
    10.0.0.0/8 < 10.0.0.1/8

This follows the same order as Python's netaddr.IPNetwork and the standard IANA convention .

Example

Sort a list of IP prefixes:

prefixes := []netip.Prefix{
    netip.MustParsePrefix("10.0.1.0/8"),
    netip.MustParsePrefix("203.0.113.0/24"),
    netip.MustParsePrefix("10.0.0.0/8"),
    netip.MustParsePrefix("169.254.0.0/16"),
    netip.MustParsePrefix("203.0.113.0/8"),
}

slices.SortFunc(prefixes, func(a, b netip.Prefix) int {
    return a.Compare(b)
})

for _, p := range prefixes {
    fmt.Println(p.String())
}
10.0.0.0/8
10.0.1.0/8
169.254.0.0/16
203.0.113.0/8
203.0.113.0/24

Further reading

𝗣 61642 • 𝗖𝗟 700355

★ Subscribe to keep up with new posts.

Beaver-engineered dam in the Czech Republic

Hacker News
en.wikipedia.org
2025-10-20 12:08:07
Comments...
Original Article

From Wikipedia, the free encyclopedia

A Eurasian beaver with her kit along the River Tay

In early 2025, beaver activity in the Brdy Protected Landscape Area, Czech Republic , contributed to the restoration of a wetland ecosystem. A family of beavers constructed a series of dams that accomplished environmental goals set by the Czech government, which had delayed its proposed project since 2018 for bureaucratic and financial reasons. The beaver-built dams saved the Czech government approximately US$1.2 million, providing ecological benefits including improved water quality, enhanced biodiversity , and better water retention. [ 1 ] [ 2 ] [ 3 ]

The Brdy region, located south of Prague , had been affected by artificial drainage systems established by the Brdy Military District [ cs ] , leading to environmental degradation . Decades earlier, soldiers had excavated bypass gullies to drain water from the land, transforming the wetland into a dry terrain . [ 1 ] [ 2 ] In 2016, the Brdy Protected Landscape Area [ cs ] was established in place of the abolished military district and some surrounding areas. [ 4 ] [ 5 ] Recognizing the ecological damage, the administration of the Brdy protected landscape area drafted plans in 2018 to construct small dams to restore the wetland and protect the Klabava river from sedimentation and acidic water pollution originating from nearby ponds. [ 6 ] However, bureaucratic obstacles, unresolved land ownership disputes, and financial constraints led to significant delays in implementing the project. [ 7 ]

Beaver construction

[ edit ]

External image
image icon Image of the natural dam created by beavers in Brdy photographed for the Nature Conservation Agency of the Czech Republic

In early 2025, a family of Eurasian beavers ( Castor fiber ) naturally built a series of dams in the same locations where the human-planned infrastructure was intended. [ 8 ] The beavers used wood, mud, and stones to create structures that slowed water drainage and restored the wetland environment. This spontaneous restoration effort resulted in a thriving ecosystem, benefiting local wildlife such as rare stone crayfish, frogs, aquatic insects, and bird species dependent on wetland habitats. [ 6 ]

The beavers slowed the time it takes for water upstream to travel to Winzer , from around 45 minutes to 20 days, according to Gerhard Shwab, local beaver specialist. [ 9 ]

Experts noted that beavers are instinctive engineers capable of altering landscapes to support water retention and biodiversity. Their activities help regulate water flow, mitigate soil erosion, and improve water filtration, making them vital contributors to wetland health. The beaver-made dams in Brdy not only recreated a functioning wetland but also provided long-term benefits by reducing flood risks , preventing drought effects, and maintaining a balanced ecosystem. [ 1 ] [ 10 ]

Reaction and recognition

[ edit ]

Czech conservation authorities praised the beavers for their unexpected yet effective environmental work. Bohumil Fišer, head of the Brdy Protected Landscape Area, stated that the beavers "built the dams without any project documentation and for free", and achieved the desired ecological outcomes "practically overnight". [ 2 ] [ 6 ] The estimated cost savings for the Czech government amounted to 30 million Czech koruna ( US$ 1.2 million), as the project was completed without human labor or funding. [ 11 ]

Zoologists and environmentalists emphasized the broader significance of the event, reinforcing the role of beavers as "ecosystem engineers". [ 12 ] This case contributed to growing discussions in Europe about the benefits of beaver rewilding programs, particularly in regions where their activity supports flood management , water conservation, and habitat restoration. [ 1 ]

The Brdy beaver colony is part of a larger resurgence of Eurasian beaver populations in the Czech Republic, with estimates indicating around 15,000 individuals across the country. [ 7 ] While beavers can sometimes create conflicts by felling trees or flooding agricultural lands, officials noted that the Brdy site is located far from farmland, making long-term coexistence with the beavers feasible. [ 13 ] Authorities anticipate no significant conflicts with the beaver colony for at least the next decade. [ 14 ]

  1. ^ a b c d Andrei, Mihai (2025-02-12). "Beavers Built a $1.2M Dam for Free — And Saved a Czech River" . ZME Science . Retrieved 2025-03-02 .
  2. ^ a b c Albeck-Ripka, Livia (2025-02-12). "Czech Dam Project Was Stalled by Bureaucracy. Beavers Built Their Own" . The New York Times . ISSN 0362-4331 . Retrieved 2025-03-02 .
  3. ^ "These eager beavers saved the Czech government $1.2 million" . National Geographic . 2025-03-02 . Retrieved 2025-03-02 .
  4. ^ Chráněná oblast v Brdech bude větší než vojenská, obce čekají potíže
  5. ^ "Vyhlášení Chráněné krajinné oblasti Brdy" ("Declaration of the Brdy Protected Landscape Area")
  6. ^ a b c "Beavers build planned dams in protected landscape area, while local officials still seeking permits" . Czech Radio . 2025-01-31 . Retrieved 2025-03-02 .
  7. ^ a b "Beavers save Czech government €1.2 million by building planned dam" . The Jerusalem Post . 2025-02-10 . Retrieved 2025-03-02 .
  8. ^ Travers, Scott. "3 Surprising Stories That Prove Beavers Are Nature's Most Diligent Water Engineers" . Forbes . Retrieved 2025-03-02 .
  9. ^ "Czechia's €1m dam built for free by beavers: Here's where else they're doing good work" . euronews.com .
  10. ^ "Pricey Dam Project 7 Years In The Making Finished By Beavers For Free" . HuffPost . 2025-02-08 . Retrieved 2025-03-02 .
  11. ^ Messenger, Stephen (2025-02-07). "Government Scraps Construction Project After Beavers Finish the Job Themselves" . The Dodo . Retrieved 2025-03-02 .
  12. ^ Barber, Harriet (2025-02-09). "Beavers finish seven-year dam project in two days" . The Telegraph . ISSN 0307-1235 . Retrieved 2025-03-02 .
  13. ^ "Dam fine: beavers save Czech treasury $1 million" . France 24 . 2025-02-11 . Retrieved 2025-03-06 .
  14. ^ France-Press, Agence (2025-02-11). "Eager beavers: rodents engineer Czech wetland project after years of human delay" . The Guardian . ISSN 0261-3077 . Retrieved 2025-03-02 .

Agentic AI’s OODA Loop Problem

Schneier
www.schneier.com
2025-10-20 12:00:28
The OODA loop—for observe, orient, decide, act—is a framework to understand decision-making in adversarial situations. We apply the same framework to artificial intelligence agents, who have to make their decisions with untrustworthy observations and orientation. To solve this problem, w...
Original Article

The OODA loop—for observe, orient, decide, act—is a framework to understand decision-making in adversarial situations. We apply the same framework to artificial intelligence agents, who have to make their decisions with untrustworthy observations and orientation. To solve this problem, we need new systems of input, processing, and output integrity.

Many decades ago, U.S. Air Force Colonel John Boyd introduced the concept of the “OODA loop,” for Observe, Orient, Decide, and Act. These are the four steps of real-time continuous decision-making. Boyd developed it for fighter pilots, but it’s long been applied in artificial intelligence (AI) and robotics. An AI agent, like a pilot, executes the loop over and over, accomplishing its goals iteratively within an ever-changing environment. This is Anthropic’s definition: “Agents are models using tools in a loop.” 1

OODA Loops for Agentic AI

Traditional OODA analysis assumes trusted inputs and outputs, in the same way that classical AI assumed trusted sensors, controlled environments, and physical boundaries. This no longer holds true. AI agents don’t just execute OODA loops; they embed untrusted actors within them. Web-enabled large language models (LLMs) can query adversary-controlled sources mid-loop. Systems that allow AI to use large corpora of content, such as retrieval-augmented generation ( https://en.wikipedia.org/wiki/Retrieval-augmented_generation ), can ingest poisoned documents. Tool-calling application programming interfaces can execute untrusted code. Modern AI sensors can encompass the entire Internet; their environments are inherently adversarial. That means that fixing AI hallucination is insufficient because even if the AI accurately interprets its inputs and produces corresponding output, it can be fully corrupt.

In 2022, Simon Willison identified a new class of attacks against AI systems: “prompt injection.” 2 Prompt injection is possible because an AI mixes untrusted inputs with trusted instructions and then confuses one for the other. Willison’s insight was that this isn’t just a filtering problem; it’s architectural. There is no privilege separation, and there is no separation between the data and control paths. The very mechanism that makes modern AI powerful—treating all inputs uniformly—is what makes it vulnerable. The security challenges we face today are structural consequences of using AI for everything.

  1. Insecurities can have far-reaching effects. A single poisoned piece of training data can affect millions of downstream applications. In this environment, security debt accrues like technical debt.
  2. AI security has a temporal asymmetry. The temporal disconnect between training and deployment creates unauditable vulnerabilities. Attackers can poison a model’s training data and then deploy an exploit years later. Integrity violations are frozen in the model. Models aren’t aware of previous compromises since each inference starts fresh and is equally vulnerable.
  3. AI increasingly maintains state—in the form of chat history and key-value caches. These states accumulate compromises. Every iteration is potentially malicious, and cache poisoning persists across interactions.
  4. Agents compound the risks. Pretrained OODA loops running in one or a dozen AI agents inherit all of these upstream compromises. Model Context Protocol (MCP) and similar systems that allow AI to use tools create their own vulnerabilities that interact with each other. Each tool has its own OODA loop, which nests, interleaves, and races. Tool descriptions become injection vectors. Models can’t verify tool semantics, only syntax. “Submit SQL query” might mean “exfiltrate database” because an agent can be corrupted in prompts, training data, or tool definitions to do what the attacker wants. The abstraction layer itself can be adversarial.

For example, an attacker might want AI agents to leak all the secret keys that the AI knows to the attacker, who might have a collector running in bulletproof hosting in a poorly regulated jurisdiction. They could plant coded instructions in easily scraped web content, waiting for the next AI training set to include it. Once that happens, they can activate the behavior through the front door: tricking AI agents (think a lowly chatbot or an analytics engine or a coding bot or anything in between) that are increasingly taking their own actions, in an OODA loop, using untrustworthy input from a third-party user. This compromise persists in the conversation history and cached responses, spreading to multiple future interactions and even to other AI agents. All this requires us to reconsider risks to the agentic AI OODA loop, from top to bottom.

  • Observe: The risks include adversarial examples, prompt injection, and sensor spoofing. A sticker fools computer vision, a string fools an LLM. The observation layer lacks authentication and integrity.
  • Orient: The risks include training data poisoning, context manipulation, and semantic backdoors. The model’s worldview—its orientation—can be influenced by attackers months before deployment. Encoded behavior activates on trigger phrases.
  • Decide: The risks include logic corruption via fine-tuning attacks, reward hacking, and objective misalignment. The decision process itself becomes the payload. Models can be manipulated to trust malicious sources preferentially.
  • Act: The risks include output manipulation, tool confusion, and action hijacking. MCP and similar protocols multiply attack surfaces. Each tool call trusts prior stages implicitly.

AI gives the old phrase “inside your adversary’s OODA loop” new meaning. For Boyd’s fighter pilots, it meant that you were operating faster than your adversary, able to act on current data while they were still on the previous iteration. With agentic AI, adversaries aren’t just metaphorically inside; they’re literally providing the observations and manipulating the output. We want adversaries inside our loop because that’s where the data are. AI’s OODA loops must observe untrusted sources to be useful. The competitive advantage, accessing web-scale information, is identical to the attack surface. The speed of your OODA loop is irrelevant when the adversary controls your sensors and actuators.

Worse, speed can itself be a vulnerability. The faster the loop, the less time for verification. Millisecond decisions result in millisecond compromises.

The Source of the Problem

The fundamental problem is that AI must compress reality into model-legible forms. In this setting, adversaries can exploit the compression. They don’t have to attack the territory; they can attack the map. Models lack local contextual knowledge. They process symbols, not meaning. A human sees a suspicious URL; an AI sees valid syntax. And that semantic gap becomes a security gap.

Prompt injection might be unsolvable in today’s LLMs. LLMs process token sequences, but no mechanism exists to mark token privileges. Every solution proposed introduces new injection vectors: Delimiter? Attackers include delimiters. Instruction hierarchy? Attackers claim priority. Separate models? Double the attack surface. Security requires boundaries, but LLMs dissolve boundaries. More generally, existing mechanisms to improve models won’t help protect against attack. Fine-tuning preserves backdoors. Reinforcement learning with human feedback adds human preferences without removing model biases. Each training phase compounds prior compromises.

This is Ken Thompson’s “trusting trust” attack all over again. 3 Poisoned states generate poisoned outputs, which poison future states. Try to summarize the conversation history? The summary includes the injection. Clear the cache to remove the poison? Lose all context. Keep the cache for continuity? Keep the contamination. Stateful systems can’t forget attacks, and so memory becomes a liability. Adversaries can craft inputs that corrupt future outputs.

This is the agentic AI security trilemma. Fast, smart, secure; pick any two. Fast and smart—you can’t verify your inputs. Smart and secure—you check everything, slowly, because AI itself can’t be used for this. Secure and fast—you’re stuck with models with intentionally limited capabilities.

This trilemma isn’t unique to AI. Some autoimmune disorders are examples of molecular mimicry—when biological recognition systems fail to distinguish self from nonself. The mechanism designed for protection becomes the pathology as T cells attack healthy tissue or fail to attack pathogens and bad cells. AI exhibits the same kind of recognition failure. No digital immunological markers separate trusted instructions from hostile input. The model’s core capability, following instructions in natural language, is inseparable from its vulnerability. Or like oncogenes, the normal function and the malignant behavior share identical machinery.

Prompt injection is semantic mimicry: adversarial instructions that resemble legitimate prompts, which trigger self-compromise. The immune system can’t add better recognition without rejecting legitimate cells. AI can’t filter malicious prompts without rejecting legitimate instructions. Immune systems can’t verify their own recognition mechanisms, and AI systems can’t verify their own integrity because the verification system uses the same corrupted mechanisms.

In security, we often assume that foreign/hostile code looks different from legitimate instructions, and we use signatures, patterns, and statistical anomaly detection to detect it. But getting inside someone’s AI OODA loop uses the system’s native language. The attack is indistinguishable from normal operation because it is normal operation. The vulnerability isn’t a defect—it’s the feature working correctly.

Where to Go Next?

The shift to an AI-saturated world has been dizzying. Seemingly overnight, we have AI in every technology product, with promises of even more—and agents as well. So where does that leave us with respect to security?

Physical constraints protected Boyd’s fighter pilots. Radar returns couldn’t lie about physics; fooling them, through stealth or jamming, constituted some of the most successful attacks against such systems that are still in use today. Observations were authenticated by their presence. Tampering meant physical access. But semantic observations have no physics. When every AI observation is potentially corrupted, integrity violations span the stack. Text can claim anything, and images can show impossibilities. In training, we face poisoned datasets and backdoored models. In inference, we face adversarial inputs and prompt injection. During operation, we face a contaminated context and persistent compromise. We need semantic integrity: verifying not just data but interpretation, not just content but context, not just information but understanding. We can add checksums, signatures, and audit logs. But how do you checksum a thought? How do you sign semantics? How do you audit attention?

Computer security has evolved over the decades. We addressed availability despite failures through replication and decentralization. We addressed confidentiality despite breaches using authenticated encryption. Now we need to address integrity despite corruption. 4

Trustworthy AI agents require integrity because we can’t build reliable systems on unreliable foundations. The question isn’t whether we can add integrity to AI but whether the architecture permits integrity at all.

AI OODA loops and integrity aren’t fundamentally opposed, but today’s AI agents observe the Internet, orient via statistics, decide probabilistically, and act without verification. We built a system that trusts everything, and now we hope for a semantic firewall to keep it safe. The adversary isn’t inside the loop by accident; it’s there by architecture. Web-scale AI means web-scale integrity failure. Every capability corrupts.

Integrity isn’t a feature you add; it’s an architecture you choose. So far, we have built AI systems where “fast” and “smart” preclude “secure.” We optimized for capability over verification, for accessing web-scale data over ensuring trust. AI agents will be even more powerful—and increasingly autonomous. And without integrity, they will also be dangerous.

References

1 . S. Willison, Simon Willison’s Weblog , May 22, 2025. [Online]. Available: https://simonwillison.net/2025/May/22/tools-in-a-loop/

2 . S. Willison, “Prompt injection attacks against GPT-3,” Simon Willison’s Weblog , Sep. 12, 2022. [Online]. Available: https://simonwillison.net/2022/Sep/12/prompt-injection/

3 . K. Thompson, “Reflections on trusting trust,” Commun. ACM , vol. 27, no. 8, Aug. 1984. [Online]. Available: https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_ReflectionsonTrustingTrust.pdf

4 . B. Schneier, “The age of integrity,” IEEE Security & Privacy , vol. 23, no. 3, p. 96, May/Jun. 2025. [Online]. Available: https://www.computer.org/csdl/magazine/sp/2025/03/11038984/27COaJtjDOM

This essay was written with Barath Raghavan, and originally appeared in IEEE Security & Privacy .

Tags: , ,

Posted on October 20, 2025 at 7:00 AM 0 Comments

Sidebar photo of Bruce Schneier by Joe MacInnis.

Gleam v1.13.0 released

Lobsters
gleam.run
2025-10-20 11:07:58
Comments...
Original Article

Gleam is a type-safe and scalable language for the Erlang virtual machine and JavaScript runtimes. Today Gleam v1.13.0 has been published. Let's go over at the highlights now.

External API for Gleam data

One of Gleam's strengths is that it is part of the BEAM and JavaScript ecosystems, enabling Gleam programs to take advantage of code written in Erlang, Elixir, JavaScript, and more. This is a large part of how Gleam was able to become a practical production-ready language so quickly, by not restricting Gleam programmers to just the comparatively young Gleam package ecosystem.

A function written in one of these other languages can be imported into a Gleam module as an "external function", and then called without any additional performance overhead.

@external(erlang, "moon_base", "launch_spaceship")
pub fn lift_off(countdown: Int) -> Result(Spaceship, LaunchError)

Here the Erlang function launch_spaceship from the module moon_base is being imported.

One restriction to external functions is that they will need to return data types that Gleam can understand. Gleam's "external type" feature lets the programmer refer to types defined in other languages, but if the programmer wants to be able to directly construct and pattern match data from external functions it will need to be in a format compatible with Gleam data types.

Often a programmer will write a small wrapper function in the external language to convert the data, and also to make the interface adhere to Gleam patterns and conventions.

-module(moon_base).
-export([launch_spaceship/1]).

launch_spaceship(Countdown) ->
    try
        Spaceship = launch_control:launch_spaceship(Countdown),
        {ok, Spaceship}
    catch
        error:no_fuel -> {error, no_fuel};
        error:bad_weather -> {error, bad_weather}
    end.

This launch_spaceship Erlang function wraps the function from the launch_control module, converting the exception-based API into a Result type, the Erlang representation of Gleam's Ok and Error variants.

One thing that a Gleam programmer may find challenging is knowing how to construct Gleam data in these wrapper functions. A lack of detailed documentation made it unclear what the correct approach should be.

This lack of clarity makes learning how to use external code more challenging, and worse, it may result in programmers using internal APIs that are intended only to be used by the compiled Gleam code. If the Gleam ecosystem were to grow with many packages using these internal APIs, it would force the Gleam core team to support them as if they were public APIs. Committing to these APIs would greatly limit what changes we can make to the internal representation of Gleam data, and make many potential performance improvements impossible.

To fix this we have done two things. First, we have created a guide on Gleam externals , detailing how to correctly write and use externals.

Secondly, a dedicated API is now provided for JavaScript based code to work with Gleam data, both making usage clearer and giving the Gleam core team maximum freedom to improve performance in future. Each data type defined in Gleam will have a set of functions defined to work with it, for example:

// In src/person.gleam
pub type Person {
  Teacher(name: String, subject: String)
  Student(name: String)
}
// In src/my_javascript_code.mjs
import {...} from "./person.mjs";

// Constructing custom types
let teacher = Person$Teacher("Joe Armstrong", "Computer Science");
let student = Person$Student("Louis Pilfold");

let randomPerson = Math.random() > 0.5 ? teacher : student;

// Checking variants
let randomIsTeacher = Person$isTeacher(randomPerson);

// Getting fields
let teacherSubject = Person$Teacher$subject(teacher);

// The `name` field is shared so can be accessed from either variant
let personName = Person$name(randomPerson);

There will be a migration period where existing JavaScript externals will need to migrate over to the new API. We have created tooling to analyse the Gleam package ecosystem to identify code that is in need of updating, and we will be helping with this process.

Further additions will be made to the externals guide detailing useful patterns, how to avoid common problems, and advising when and how to use externals.

Thank you Surya Rose for taking the lead role in implementing these new APIs, and for the Gleam team more widely for the design of this addition!

Improved bit array exhaustiveness checking

Gleam's bit array syntax allows you to declaratively construct and parse binary data in a way that may be easier to understand than using binary operators.

The compiler now applies an optimisation known as "interference based pruning" when compiling bit array pattern matching where matches are performed at the start of bit arrays. This optimisation drastically reduces compile times, memory usage and the compiled code size, removing many redundant checks.

It is particularly impactful for programs that pattern match on some fixed patterns at the start of the bit array. For example, network protocol parsers.

pub fn parser_headers(headers: BitArray, bytes: Int) -> Headers {
  case headers {
    <<"CONTENT_LENGTH" as header, 0, value:size(bytes), 0, rest:bytes>>
    | <<"QUERY_STRING" as header, 0, value:size(bytes), 0, rest:bytes>>
    | <<"REQUEST_URI" as header, 0, value:size(bytes), 0, rest:bytes>>
    // ...
    | <<"REDIRECT_STATUS" as header, 0, value:size(bytes), 0, rest:bytes>>
    | <<"SCRIPT_NAME" as header, 0, value:size(bytes), 0, rest:bytes>>
      -> [#(header, value), ..parse_headers(rest)]
  }
}

Additionally, the compiler now raises a warning for unreachable branches that are matching on bit array segments that could never match. Consider this example:

pub fn get_payload(packet: BitArray) -> Result(BitArray, Nil) {
  case packet {
    <<200, payload:bytes>> -> Ok(payload)
    <<404, _:bits>> -> Error(Nil)
    _ -> Ok(packet)
  }
}

There's a subtle bug here. The second branch can never match since it's impossible for the first byte of the bit array to have the value 404 . The new error explains this nicely:

warning: Unreachable pattern
  ┌─ /src.gleam:4:5

4 │     <<404, _:bits>> -> Error(Nil)
     ^^^^^^^^^^^^^^^
       
       A 1 byte unsigned integer will never match this value

This pattern cannot be reached as it contains segments that will never
match.

Hint: It can be safely removed.

Thank you Giacomo Cavalieri for these improvements! Exhaustiveness checking is a very complex field, so these additions are very impressive.

Unused argument detection

Gleam's unused code detection and purity tracking emits a warning any time some code is unused and could be removed without changing the behaviour of the program.

This has been extended to be able to identify function arguments that are used when the function calls itself recursively, but never actually used in the function's implementation. For example:

import gleam/io

pub fn greet(x, times) {
  case times {
    0 -> Nil
    _ -> {
      io.println("Hello, Joe!")
      greet(x, times - 1)
    }
  }
}

In this piece of code the x argument is unused, so the compiler will raise the following warning:

warning: Unused function argument
  ┌─ /Users/giacomocavalieri/Desktop/prova/src/prova.gleam:3:14

3 │ pub fn greet(x, times) {
              ^ This argument is unused

This argument is passed to the function when recursing, but it's never
used for anything.

Thank you Giacomo Cavalieri !

Better meaningless opaque type error

A public custom type can be marked as "opaque", meaning that while other modules can import and reference the type, they are unable to construct or pattern match on values of that type. This is useful for restricting the ways that a data type can be used in order to provide a more robust API.

pub opaque type Permission {
  SuperUser
  Regular
  Restricted
}

It is invalid to mark a private type as opaque. Previously this would result in a slightly cryptic syntax, but now a specific helpful error has been added for this case.

error: Private opaque type
  ┌─ /src/one/two.gleam:2:1

2 │ opaque type Wibble {
 ^^^^^^ You can safely remove this.

Only a public type can be opaque.

The language server now also offers a "quick fix" code action to remove opaque from a private type:

opaque type Wibble {
// ^^^ This is an error!
  Wobble
}

If you hover over the type and trigger the quick fix, the language server will automatically remove the opaque keyword:

type Wibble {
  Wobble
}

Thank you Giacomo Cavalieri !

More fault tolerance

Gleam's compiler implements fault tolerant analysis . This means that when there is some error in the code the compiler can still continue to analyse the code to the best of its ability, ignoring the invalid parts. Because of this, the Gleam language server can have a good understanding of the code and provide IDE features even when the codebase is in an invalid state.

Giacomo Cavalieri and sobolevn ) have improved the compiler to be fault tolerant for errors relating to analysis of labeled fields in variant patterns, parsing of private opaque type definitions, and parsing of type names followed by () , further improving the experience of using the Gleam language server.

Thank you both!

Redundant pattern alias warning

_ as x is a valid pattern in Gleam. The _ means "don't assign any name to this value", and the as x part means "assign the name x to this value".

As you can see, this is quite a silly pattern. The alias as pattern makes the discard _ pattern redundant, and it would always be better to use the pattern x , which means "assign this value to the name x".

// Redundant
let _ as x = something()

// Recommended
let x = something()

Using an alias pattern with a discard pattern has been deprecated, and the Gleam code formatter will rewrite any instances of it to the recommended syntax.

Thank you eutampieri for this!

More inefficient list check warnings

Gleam's basic sequence type is an immutable linked list with structural sharing, a data type inherited from Erlang and one common in functional programming languages.

The correct way to check if a list is empty is to pattern match on it with the empty-list pattern, or to compare it to an empty list literal.

pub fn both_empty(list1: List(a), list2: List(b)) -> Bool {
  // Pattern matching approach.
  // More verbose, but can be part of a larger pattern match.
  let list1_empty = case list1 {
    [] -> True
    _ -> False
  }

  // Comparison approach.
  let list2_empty = list2 == []

  list1_empty && list2_empty
}

The standard library's list.length function returns the length of a given list. Gleam and Erlang lists don't store the length on them as a static property, so this function has to traverse the full list, and count the number of elements, making it a very wasteful way to determine if a list is empty.

This behaviour may be suprising to programmers familiar with languages with a different core sequence data type, so they might not realise this is not a good way to check for an empty list. To remove this confusion the compiler would emit a warning for code like list.length(items) == 0 , informing the programmer of better alternatives.

With this release the warning will also be emitted for more inefficient use of list.length , including checks for non-empty lists using operators like > and < .

warning: Inefficient use of `list.length`
  ┌─ /data/data/com.termux/files/home/test_gleam/src/test_gleam.gleam:5:13

5 │     let _ = 0 < list.length(numbers)
             ^^^^^^^^^^^^^^^^^^^^^^^

The `list.length` function has to iterate across the whole
list to calculate the length, which is wasteful if you only
need to know if the list is empty or not.

Hint: You can use `the_list != []` instead.

Thank you Andrey Kozhev !

A helpful syntax error for JavaScripters

In Gleam names are assigned to values within functions using the let keyword. There is a const keyword too, but it is used to declare module-level constants, and it is a syntax error to use it within functions.

The compiler now provides an helpful error message for any programmers accustomed to languages where const is used within functions, such as JavaScript.

pub fn deep_thought() -> Int {
  const the_answer = 42
  the_answer
}
error: Syntax error
  ┌─ /src/file.gleam:2:3

3 │   const the_answer = 43
   ^^^^^ Constants are not allowed inside functions

All variables are immutable in Gleam, so constants inside functions are not
necessary.
Hint: Either move this into the global scope or use `let` binding instead.

Thank you Surya Rose !

A helpful error for Rustaceans, C#ers, and friends

Gleam has a consistent syntax for constructors at the type and value level, using () for both. It does not use () for value constructors and <> for type constructors, as is common in some other languages.

To help folks coming from other languages a set of helpful errors have been added for when they try to use this non-Gleam syntax in their Gleam code.

error: Syntax error
  ┌─ /src/parse/error.gleam:2:12

2 │ type Either<a, b> {
            ^ I was expecting `(` here.

Type parameters use lowercase names and are surrounded by parentheses.

    type Either(a, b) {

See: https://tour.gleam.run/data-types/generic-custom-types/

Notice how the error message includes the correct syntax for the specific code that the programmer has written. The programmer could copy/paste the correct version into their code, if they so desired. A link to the documentation is also provided, linking to whichever feature the syntax error is for.

Thank you Aaron Christiansen !

A helpful syntax error for Pythonistas, Elixirists, and friends

Similar to the last two new error messages, there's now a helpful error message for programmers trying to use # instead of // to write a comment. Thank you sobolevn !

Displaying dependency version information

Gleam's built tool integrates with Hex, the package management system and primary package repository for the BEAM ecosystem. Gleam implements dependency version locking to ensure that builds are deterministic, and to prevent unaudited code from unexpectedly becoming part of your application. Dependency code is just as much of a risk and responsibility as code directly written by the programmer, so it must be treated with great care and consideration.

The only time the build tool will select new versions of dependency packages is if the programmer adds or removes a dependency, if the programmer changes the package's dependency version requirements, or if the programmer requests the dependency versions be upgraded using the gleam update command.

In these cases when dependencies are changed, added, or removed the build tool will now print the changes, to help the programmer understand and go on to audit the new code.

$ gleam add lustre
  Resolving versions
Downloading packages
 Downloaded 3 packages in 0.04s
      Added gleam_json v3.0.2
      Added houdini v1.2.0
      Added lustre v5.3.5

hex owner transfer

The hex owner transfer command has been added to the build tool, allowing Gleam programmers to transfer ownership of existing Hex packages to another account. Thank you Giacomo Cavalieri !

Improved type displaying

When a Gleam package is published to Hex HTML documentation is generated and published to the HexDocs documentation hosting website. This documentation has now been improved to now print the names of public type aliases instead of internal type names when annotating functions and types. This makes the documentation more likely to use the APIs that the package author intends for their users.

For example, for the following code:

import my_package/internal

pub type ExternalAlias = internal.InternalRepresentation

pub fn do_thing() -> ExternalAlias { ... }

This is what the build tool used to generate:

pub fn do_thing() -> @internal InternalRepresentation

This is technically correct, but not very useful for the reader of the documentation, as they cannot learn anything about these internal types. Now it will not use the internal name, allowing the programmer to click-through to its documentation, and understand how they should refer to this type in their code.

pub fn do_thing() -> ExternalAlias

This improvement also applies to the language server, both in information displayed on hover, and in code actions such as "add annotations".

import lustre/html
import lustre/element
import lustre/attribute

pub fn make_link(attribute, element) {
  html.a([attribute], [elements])
}

If the "add annotations" code action is run on this function the language server will identify that the imported lustre/attribute module has public export of the Attribute type, and that the imported lustre/element module has public export of the Element type, so they will both be used instead of the internal definition.

pub fn make_link(
  attribute: attribute.Attribute,
  element: element.Element(a)
) -> element.Element(a) {
   html.a([attribute], [elements])
}

Thank you Surya Rose !

Improved type naming in code actions

Another way in which code actions need to consider type names is with type parameters. The "add type annotations" and "generate function" code actions must find names for any type variables that do not clash with any already in use. Previously the language server would track names in-use at the module level which could result in correct but unexpected names being used.

Take this code, for example.

fn something(a: a, b: b, c: c) -> d { todo }

fn pair(a, b) { #(a, b) }

Previously, when triggering the "Add type annotations" code action on the pair function, the language server would have used these names:

fn pair(a: e, b: f) -> #(e, f) { #(a, b) }

However in 1.13, it will now use these names:

fn pair(a: a, b: b) -> #(a, b) { #(a, b) }

Thank you Surya Rose !

Tangled support

Tangled is a new open source source forge, and the Gleam build tool now has support for it. When specified the HTML documentation will include links to the source code definitions for each type and value in the package, as it would for previously supported source forges such as GitHub and Forgejo.

repository = { type = "tangled", user = "me", repo = "my_project" }

Thank you to Naomi Roberts for this. As part of this work she added support for links to multi-line sections of code in Tangled itself!

Further language server support

The language server was missing support for a few syntaxes, this has now been fixed. You can now go to definition, rename, etc. from alternative patterns in case expressions:

case wibble {
  Wibble | Wobble -> 0
  //         ^- Previously you could not trigger actions from here
}

And hovering over a record field in a record access expression will now show the documentation for that field, if any exists.

Thank you Surya Rose and fruno !

Remove unreachable clauses code action

Gleam's pattern matching analysis can identify any clauses of a case expression are unreachable due to previous patterns already matching any values the redundant one could match.

pub fn main() {
  case find_user() {
    Ok(person) -> todo
    Ok(Admin) -> todo
    Ok(User) -> todo
    Error(_) -> todo
  }
}

Here the Ok(Admin) and Ok(User) patterns could never match as all the Ok values would be instead matched by the earlier Ok(person) pattern.

This is clearly a mistake in the code, so the compiler will emit a warning and highlight the unreachable clauses. Most commonly the programmer will want to edit the patterns to correct them, but some times the clauses are no longer needed, and can be deleted entirely. To help with this scenario the language server now offers a quick-fix code action to delete any redundant clauses.

Triggering it on the above code will result in the code being edited like so:

pub fn main() {
  case find_user() {
    Ok(person) -> todo
    Error(_) -> todo
  }
}

Thank you Giacomo Cavalieri !

Pattern match on value code action improvements

Gleam has a single flow-control feature: pattern matching with the case expression. Because of this there's a lot of pattern matching in Gleam programs! The language server offers a code action to quickly pattern match on a focused value, and with this release it has been further improved.

It can now be triggered on lists, with the default clauses including one for when the list is empty, and one for when it is non-empty.

pub fn is_empty(list: List(a)) -> Bool {
  //            ^^^^ Triggering the action here
}

Triggering the action over the list argument would result in the following code:

pub fn is_empty(list: List(a)) -> Bool {
  case list {
    [] -> todo
    [first, ..rest] -> todo
  }
}

The code action can now be triggered on variables introduced by other patterns. For example, here we have a let statement with a pattern defining the variables name and role .

pub fn main() {
  let User(name:, role:) = find_user("lucy")
  //              ^^^^ Triggering the action here
}

Triggering the action on role results in a case expression being inserted on the line below, with a clause for each of the possible variants of the Role type that variable held.

pub fn main() {
  let User(name:, role:) = find_user("lucy")
  case role {
    Admin -> todo
    Member -> todo
  }
}

If the variable was introduced within a case expression already then the behaviour is different. For example:

pub fn main() {
  case find_user() {
    Ok(user) -> todo
    Error(_) -> todo
  }
}

Triggering the code action on the user variable would cause the code to be rewritten to expand that clause of the case expression, replacing it with one specialised clause for each of the possible variants of the type of that variable.

pub fn main() {
  case find_user() {
    Ok(Admin) -> todo
    Ok(Member) -> todo
    Error(_) -> todo
  }
}

Thank you Giacomo Cavalieri ! Pattern matching is such a important core feature of Gleam that these improvements make a big difference to the experience of writing and editing Gleam code.

Collapse nested case expressions code action

Another new code action is one to collapse nested case expressions into one, reducing nesting and enabling further optimisations in some situations.

case user {
  User(name:) ->
    case name {
      "Joe" -> "Hello, Joe!"
      _ -> "Hello there!"
    }
  Guest -> "You're not logged in!"
}

Triggering the code action on the first clause will result in it being replaced by multiple clauses that produce the same behaviour as the nested version.

case user {
  User(name: "Joe") -> "Hello, Joe!"
  User(name: _) -> "Hello there!"
  Guest -> "You're not logged in!"
}

Thank you Giacomo Cavalieri !

Add omitted labels code action

Function parameters and record fields can have labels, names that can be used at the call-site to make it clearer what each argument is, and to make the ordering of the arguments not matter. The language server now offers a code action to add the omitted labels in a call. For example:

pub type User {
  User(first_name: String, last_name: String, likes: List(String))
}

pub fn main() {
  let first_name = "Giacomo"
  User(first_name, "Cavalieri", ["gleam"])
}

Triggering the code action on the User constructor will result in the language server adding the labels to the arguments.

pub type User {
  User(first_name: String, last_name: String, likes: List(String))
}

pub fn main() {
  let first_name = "Giacomo"
  User(first_name:, last_name: "Cavalieri", likes: ["gleam"])
}

Inter-module generate function code action

The "Generate function" code action now works when the missing function is to be defined in another module. For example:

// src/maths.gleam
pub fn add(a: Int, b: Int) -> Int { a + b }
// src/app.gleam
import maths

pub fn main() -> Nil {
  echo maths.add(1, 2)
  echo maths.subtract(2, 1)
  Nil
}

The app module is calling a function called subtract from the maths module, but that function doesn't exist. Triggering the code action on the call to maths.subtract will edit the maths.gleam file to add the outline of the function, for the programmer to complete.

pub fn add(a: Int, b: Int) -> Int { a + b }

pub fn subtract(int: Int, int_2: Int) -> Int {
  todo
}

Thank you Surya Rose ! This is a nice little quality-of-life improvement for Gleam programmers.

And the last code action of this release, one that has been eagerly anticipated for some time by many Gleam programmers: extract function.

const head_byte_count = 256

pub fn get_head_of_file() {
  let assert Ok(contents) = read_file()
  case contents {
    <<head:bytes-size(head_byte_count), _:bits>> -> Ok(head)
    _ -> Error(Nil)
  }
}

If you were to select the case expression in your editor and trigger the code action, then it would be extracted to a new function, like so:

const head_byte_count = 256

pub fn get_head_of_file() {
  let assert Ok(contents) = read_file()
  function(contents)
}

fn function(contents: BitArray) -> Result(BitArray, Nil) {
  case contents {
    <<head:bytes-size(head_byte_count), _:bits>> -> Ok(head)
    _ -> Error(Nil)
  }
}

Unfortunately the language server protocol design is rather limiting, so the Gleam language server cannot prompt the programmer for a suitable name, it has to use a meaningless name instead. The "rename" feature of the language server can be triggered to give it a more appropriate name.

I believe that Microsoft's rewrite of the TypeScript toolchain will include it using the language server protocol instead of their custom protocol, so hopefully this will result in them expanding the LSP specification to include features their custom protocol has, such as code actions being able to ask for more information.

Thank you again Surya Rose !

Formatter improvements

Gleam has a code formatter that can clean up a file of Gleam code in an instant, freeing up time that would be otherwise spent on the busy-work of manually laying out code. Typically it is run by the programmer's text editor when a file is saved. Several improvements have been made to it with this release.

Bools can be negated with the ! operator, and ints can be negated with the - operator. Negating a value multiple times is redundant and does nothing, so the formatter now collapses duplicate negations.

pub fn useless_negations() {
  let lucky_number = --11
  let lucy_is_a_star = !!!False
}

The code is rewritten like so:

pub fn useless_negations() {
  let lucky_number = 11
  let lucy_is_a_star = !False
}

Additionally, the formatter no longer removes blocks from case clause guards, as the programmer may wish to include them to make code clearer, even if they are not required according to Gleam's operator precedency rules. This also makes the behaviour consistent with the formatting of regular expressions in Gleam functions.

Thank you Giacomo Cavalieri !

And the rest

And thank you to the bug fixers and experience polishers: Andrey Kozhev , Benjamin Peinhardt , Danielle Maywood , fruno , Giacomo Cavalieri , Joohoon Cha , Matias Carlander , Surya Rose , and Tristan-Mihai Radulescu ).

For full details of the many fixes and improvements they've implemented see the changelog .

A call for support

Gleam is not owned by a corporation; instead it is entirely supported by sponsors, most of which contribute between $5 and $20 USD per month, and Gleam is my sole source of income.

We have made great progress towards our goal of being able to appropriately pay the core team members, but we still have further to go. Please consider supporting the project or core team members Giacomo Cavalieri and Surya Rose on GitHub Sponsors.

GitHub Sponsors

Thank you to all our sponsors! And special thanks to our top sponsors:

We need to start doing web blocking for non-technical reasons

Lobsters
utcc.utoronto.ca
2025-10-20 11:04:25
Comments...
Original Article

You're probably reading this page because you've attempted to access some part of my blog (Wandering Thoughts) or CSpace , the wiki thing it's part of. Unfortunately whatever you're using to do so has a HTTP User-Agent header value that is too generic or otherwise excessively suspicious. Unfortunately, as of early 2025 there's a plague of high volume crawlers (apparently in part to gather data for LLM training) that behave like this. To reduce the load on Wandering Thoughts I'm experimenting with (attempting to) block all of them, and you've run into this.

All HTTP User-Agent headers should clearly identify what they are, and for non-browser user agents, they should identify not just the software involved but also who specifically is using that software. An extremely generic value such as " Go-http-client/1.1 " is not something that I consider acceptable any more.

Chris Siebenmann, 2025-02-17

Trump “National Police Force” Built on ICE Partnerships With Local Agencies Like… Wildlife Commissions?

Intercept
theintercept.com
2025-10-20 11:00:00
Trump is using lies about immigrant crime waves to create a massive law enforcement apparatus that operates under his orders. The post Trump “National Police Force” Built on ICE Partnerships With Local Agencies Like… Wildlife Commissions? appeared first on The Intercept....
Original Article

In addition to deploying tens of thousands of federal agents from across the federal government to carry out his deportation agenda, President Donald Trump is rapidly expanding the network of state and local police going after immigrants through partnerships with U.S. Immigrations and Customs Enforcement.

The aggressive, nationwide law enforcement regime, all taking place under orders from the White House, amounts to what scholars, attorneys, and now a federal judge say are steps toward the creation of a national police force. And the ranks of ICE partners won’t be filled with just local cops: In at least three states, the administration is joining forces with agencies typically tasked with environmental and marine protection, lottery control, and gaming to target immigrants.

“This is quite a common tactic,” said Charis Kubrin, a professor at the University of California, Irvine who studies immigration and crime. ”There’s this idea that we’re going to get local, not just police officers, but nurses and teachers and other public officials involved in enforcing immigration laws.”

It started largely with immigration, using federal agents and a little-remarked-upon program known as 287(g) to funnel funding to local law enforcement for partnerships. The widespread ICE incursions and local police partnerships, however, have also been justified by the myth of an immigrant crime wave.

“The research is pretty unequivocal that these policies have no impact on public safety.”

“There is this moral panic now about migrant crime. This is rhetoric that is at odds generally with what we know about immigration and crime,” Kubrin said. “The research is pretty unequivocal that these policies have no impact on public safety whatsoever.”

“We didn’t really need this increased cooperation,” she said. “The foundational assumption of this widespread immigrant criminality upon which all of these policies and practices are based, is patently not true.”

Recruiting and paying local police to do immigration work, however, is just one part of the bigger project of creating such a national police force, said the American Civil Liberties Union’s Naureen Shah. And critics are worried that with Trump’s willingness to leverage state power for his own ends, such a security force could become a tool for carrying out the president’s political agenda.

Shah, who leads the ACLU’s policy and advocacy work on immigration, said, “Their larger project is to blur the lines between different law enforcement agencies in the military and create one national police force that is essentially under the command of the president.”

Local Partners

By funneling money to local police to do immigration work that falls under the federal government’s purview, Trump is effectively bolstering a police force answerable to his own authority by slowly buying up state and local police on a massive scale. The administration announced last month that 1,000 agencies had partnered with ICE to help target people for deportation.

The money is coming through a program that was falling out of favor prior to Trump’s first term: the 287(g) program, named for the section of the Immigration and Nationality Act that forms the legal basis for local partners to detain and begin the process of deporting people targeted by the federal government.

For an administration eager to increase its sway over local law enforcement, immigration makes for a good starting point because the government has allocated astronomical sums of cash toward arresting, detaining , and deporting immigrants.

Paired with Trump’s military deployments to U.S. cities, the expansion of federal control over local police is the first step down a slippery slope, said Shah.

“It starts with immigration, and it’s through immigration, but it’s not limited to immigration.”

“They’re having federal law enforcement agencies scale up taskforces with state and local law enforcement so that when they talk about deploying into whatever city it is, they are not deploying on their own just as an invading force,” she said. “It starts with immigration, and it’s through immigration, but it’s not limited to immigration.”

The Fall and Rise of 287(g)

287(g) agreements were on the decline before Trump was first elected in 2016. During his 2020 campaign, former President Joe Biden pledged that he would end all 287(g) agreements made by Trump. And, in recent years, local sheriffs ran for office on promises to refuse to work with ICE .

“There’s a reason why the federal government has largely been tasked with policing immigration, and that’s why there was a lot of resistance to 287(g) including among police chiefs,” said Kubrin, the UC Irvine professor.

Both tacks to reduce the agreements faltered. Biden ended a handful of contracts but largely left the agreements in place . And political pressure on law enforcement mounted to form agreements. Several Republican governors, for instance, sued sheriffs who refused to work with ICE and, in some cases, won cooperation with Trump’s deportation agenda.

Now, the mixture of financial incentives and politics are driving a surge. Since Trump took office, partnerships that deploy state and local police to go after immigrants have increased 600 percent.

The number of law enforcement agencies signing on to aid Trump’s deportation machine is surging in part because local police agencies strapped for cash welcome additional funding. The Department of Homeland Security announced in September that it would pay out “performance awards” and fully reimburse police for annual salary and benefits for each 287(g) officer, including partial overtime coverage.

ICE is also offering signing bonuses of up to $50,000, along with student loan forgiveness , for new recruits, and airing ads to attract police in at least a dozen cities.

In addition to shoring up their finances, law enforcement agencies are also responding directly to political pressure from the White House.

“I think it has to do with political pressure for these jurisdictions to get involved,” Kubrin said. “There’s lots of political pressure.”

Much of the narrative around recruiting police to take on Trump’s deportation agenda has taken for granted that police should be helping ICE carry out their work, said the ACLU’s Shah. It’s not so much that ICE doesn’t have adequate resources, it’s that they need local police on their side to carry out Trump’s political agenda.

“They’re grabbing for local police because local police are all over the place,” Shah said. “It’s very linked to the larger threats of authoritarianism in the country, and I don’t see that in any of the coverage.”

More Than Just Cops

The massive, nationalized police force taking shape as Trump expands his reach into state and local agencies around the country is not limited to just police.

In Florida, for instance, the Trump administration has active 287(g) agreements with other state agencies, including the Fish and Wildlife Conservation Commission, the Department of Environmental Protection law enforcement division, the Gaming Control Commission, and the Department of Lottery Services. Wildlife commissions in Louisiana and Virginia are also partnering with the administration to target immigrants for deportation, as well as Virginia’s Marine Resources Commission. (A spokesperson for the Virginia Department of Wildlife Resources said the agency has not been requested to participate in any activities related to its 287(g) agreement since it entered into the partnership this summer.)

Since April, the administration has also partnered with university police or trustees for at least nine Florida state universities.

Partnering with agencies that aren’t traditionally focused on law enforcement is part of a strategy to enlist local officials outside of police in enforcing immigration laws, said Kubrin. The same strategy shaped laws like Arizona’s infamous S.B. 1070 migrant racial profiling law and others modeled after it in states like Georgia and Indiana.

Kubrin said she harbored a serious worry about eventually having to identify undocumented students in her own UC Irvine classes.

“It’s 10 times worse,” she said, “when you’re asking medical officials and teachers to be also policing immigration.”

The Invasion

The creep of federal control into state and local police departments comes as Trump has sent more than 35,000 troops to cities around the country. In a rambling address to military leaders earlier this month, Trump called to use American cities as training grounds for the military to fight a “a war from within.”

In a ruling last month against Trump’s deployment of the National Guard to Los Angeles, a federal judge wrote that Trump had used the troops as his own police force and styled himself as chief.

“Almost three months after Defendants first deployed the National Guard to Los Angeles, 300 National Guard members remain stationed there,” the judge wrote. “Moreover, President Trump and Secretary Hegseth have stated their intention to call National Guard troops into federal service in other cities across the country—including Oakland and San Francisco, here in the Northern District of California—thus creating a national police force with the President as its chief.”

Though Trump’s efforts to deploy the National Guard to Portland and Chicago have met legal roadblocks, the president is signaling that more cities could see National Guard deployments in the coming months. Several states have also agreed to use the National Guard to assist ICE .

None of it is likely to do much, Kubrin said.

“These policies and programs are not cashing in on the promise that they will lower crime rates,” Kubrin said. “But they are doing potential harm in communities.”

“Mass shootings, gun violence, gender based violence, corporate crime,” she said. “Immigrant crime is a very small slice of the crime problem.”

Amazon Web Services outage hits dozens of websites and apps

Guardian
www.theguardian.com
2025-10-20 10:55:36
Snapchat, Roblox, Signal and Duolingo among platforms affected after problems at cloud provider AWS A major internet outage has hit dozens of websites and apps around the world, with users reporting troubles getting online after problems at Amazon’s cloud computing service. The affected platforms in...
Original Article

A major internet outage has hit dozens of websites and apps around the world, with users reporting troubles getting online after problems at Amazon’s cloud computing service.

The affected platforms include Snapchat , Roblox, Signal and Duolingo as well as a host of Amazon-owned operations including its main retail site and the Ring doorbell company.

In the UK, Lloyds bank was affected as well as its subsidiaries Halifax and Bank of Scotland, while there were also reports of problems accessing the HM Revenue and Customs website on Monday morning. Also in the UK, multiple Ring users took to social media to complaint their doorbells were not working.

In the UK alone reports of problems on individual apps ran into the tens of thousands for each platform.

Amazon reported that the problem originated in the east coast of the US at Amazon Web Services, a unit that provides vital web infrastructure for a host of companies, who rent out space on Amazon servers. AWS is the world’s largest cloud computing platform.

Just after midnight (PDT) in the US (8am BST) Amazon confirmed “increased error rates and latencies” for AWS services in a region on the east coast of the US. The ripple effect appeared to hit services around the world, with the Downdetector site reporting problems with the same sites in multiple continents.

Cisco’s Thousand Eyes, a service that tracks internet outages , also reported a surge in problems on Monday morning, with many of them located in Virginia, the location of Amazon’s US-East-1 region where AWS said the problems began.

Hide read comments

Lobsters
lobste.rs
2025-10-20 10:44:58
I sometimes want to hide comments I've already read so that it's easier for me to find the unread comments of a story I revisit. I made a bookmarklet for this: document.querySelectorAll(".comment:not(:has(.comment_unread))").forEach((comment) => comment.style.display = "none") Would this (with a...
Original Article

I sometimes want to hide comments I've already read so that it's easier for me to find the unread comments of a story I revisit. I made a bookmarklet for this:

document.querySelectorAll(".comment:not(:has(.comment_unread))").forEach((comment) => comment.style.display = "none")

Would this (with a better UX design) be an interesting feature for others?

State-based vs Signal-based rendering

Hacker News
jovidecroock.com
2025-10-20 10:36:59
Comments...
Original Article

State-based vs Signal-based rendering

When we think about state management in front-end frameworks, we often focus on the API—hooks, observables, or signals. However, there's a deeper paradigm shift at play: where rendering happens . Traditional state management like React hooks triggers renders at the point where state is created, while signal-based approaches like Preact Signals or Solid.js trigger renders only where state is consumed.

This shift from "render where you create state" to "render where you use state" has profound implications for performance, code organization, and mental models.

The Core Difference

In traditional state management with React hooks, when you call useState , any update to that state causes the component—and all its descendants—to re-render. It doesn't matter whether those descendants actually use the state; they're caught in the render wave simply because they're children of the component that holds the state.

const Parent = () => {
  const [count, setCount] = useState(0);
  return (
    <>
      {/* re-renders even though it doesn't use count */}
      <ChildA />
      <ChildB />
      {/* re-renders, actually uses count */}
      <ChildC count={count} />
    </>
  );
};

With signal-based rendering, the paradigm inverts. A signal is a reactive primitive that tracks its own dependencies. When you create a signal, it doesn't trigger re-renders at the creation site. Instead, rendering only occurs at components that actually access the signal's value.

const Parent = () => {
  const count = useSignal(0);
  return (
    <>
      {/* do NOT re-render */}
      <ChildA />
      <ChildB />
      {/* only re-renders if it reads count.value */}
      <ChildC count={count} />
    </>
  );
};

This granular reactivity means only the precise components that depend on the signal will re-render when it updates. The mental model shifts from "prevent unnecessary re-renders" to "re-renders only happen where they're needed."

Context: The Paradigm Shift Amplified

This difference becomes even more pronounced when dealing with the Context API. In React, when you distribute state through context and update it, all consumers of that context re-render, regardless of whether they actually read the updated value.

const CountContext = createContext();

const Provider = ({ children }) => {
  const [count, setCount] = useState(0);
  const [name, setName] = useState('');
  return (
    <CountContext.Provider
      value={{ count, name, setCount, setName }}
    >
      {children}
    </CountContext.Provider>
  );
};

const ComponentA = () => {
  const { name } = useContext(CountContext);
  // Re-renders when count changes,
  // even though it only uses name.
  return <div>{name}</div>;
};

With signals in context, the reactivity is surgical. The context can hold signals, and only components that actually call .value on a signal will subscribe to its updates.

const CountContext = createContext();

const Provider = ({ children }) => {
  const count = useSignal(0);
  const name = useSignal('');
  return (
    <CountContext.Provider value={{ count, name }}>
      {children}
    </CountContext.Provider>
  );
};

const ComponentA = () => {
  const { name } = useContext(CountContext);
  // Only re-renders when name changes,
  // not when count changes
  return <div>{name.value}</div>;
};

This is a game-changer for large applications where context is used to distribute state across many components. You no longer need to split contexts to prevent unnecessary re-renders or reach for complex optimization patterns.

Rendering Propagation

Let's visualize how re-renders propagate through a component tree:

State-Based (React Hooks)

In state-based rendering, when state updates, the entire subtree from the point of state creation re-renders. You need to manually optimize with React.memo , shouldComponentUpdate , useMemo , and useCallback to prevent unnecessary work.

All descendants re-render (shown in red), regardless of whether they actually use the state. Only GC 2 genuinely needs the update, but Child 1 , Child 2 , Child 3 , GC 1 , and GC 3 all re-render unnecessarily.

Signal-Based (Preact Signals / Solid.js)

In signal-based rendering, only components that actually read the signal's value re-render. The component hierarchy is irrelevant—what matters is data dependency , not component ancestry .

Only GC 2 , which actually accesses signal.value , re-renders (shown in green). All other components remain unchanged (shown in gray), even though they're part of the same component tree.

Granular Control with Control Flow

Preact has a few utilities to take this further with control flow components like Show and For . These components scope reactivity even more precisely.

const items = signal([]);
// When the items signal updates, only the affected items re-render
<For each={items}>
  {(item) => (
    <div>
      {/* Only this item re-renders when item.value changes */}
      <span>{item.name.value}</span>
      <button
        onClick={() => item.count.value++}
      >
        {item.count.value}
      </button>
    </div>
  )}
</For>

Compare this to classic hooks, where changing an item in a list might trigger re-renders across sibling items, the parent component, and any other children—unless you've carefully memoized everything.

These control-flow components scope the re-render of a Signal (be that a derived computed or plain signal value) down to its JSX children.

Performance Implications

This paradigm shift has tangible performance implications:

  • Less computational work : Fewer components re-render means less JavaScript execution. You're not running render functions, diffing virtual DOM, or applying effects for components that don't care about the state change.
  • Reduced bundle size : No need for memoization helpers like React.memo , shouldComponentUpdate , useMemo , or useCallback . The framework's reactivity system handles optimization automatically.
  • Predictable performance : Re-render locations are determined by where signals are accessed, not by component hierarchy. This makes performance predictable and debugging easier—you can trace which components update by following signal reads.
  • No prop drilling : Signals can be passed through context or even imported directly without triggering unnecessary re-renders. You're not forced to split contexts or create provider pyramids.

When State-Based Makes Sense

It's worth noting that state-based rendering isn't inherently bad. For small components or applications where re-renders are cheap, the hooks model is simple and sufficient. The cost of re-rendering a few dozen components is often negligible.

The trade-off becomes significant in:

  • Large component trees with deep nesting
  • High-frequency updates (animations, real-time data)
  • Applications with complex state distribution (multiple contexts, global state)

Conclusion

The shift from state-based to signal-based rendering is more than a performance optimization—it's a paradigm shift in how we think about reactivity. Instead of preventing re-renders through memoization, we only trigger re-renders where they're needed .

This inversion—from "render where you create state" to "render where you use state"—aligns our code with the actual data flow. It makes applications faster by default and simplifies the mental model: if you read a signal, you'll update when it changes. If you don't, you won't.

As frameworks like Preact Signals and Solid.js demonstrate, this isn't a theoretical improvement—it's a practical one that makes building performant, maintainable applications easier. The future of front-end reactivity is fine-grained, and it's already here.

Copy-and-Patch: How It Works

Lobsters
transactional.blog
2025-10-20 10:22:58
Comments...
Original Article

From the tutorial, you’ve followed a set of given rules to produce stencils and relocation holes, with no explanation given as to why the rules are the way they are. We’re now going to dive into that reasoning, and look at exactly how each part of the guidance on clang flags and relocation hole macros came to be.

Stencil Creation

All of the idioms around creating stencils are about abusing features of clang as much as possible to be able to generate functions as only the specific sequences of instructions we want. There’s number of tricks involved:

First, we rely on the calling convention to be able to force values into known registers. Our goal is to be able to form programs by concatenating stencils, and so we must be able to match the outputs of one stencil to the inputs of another. By making the stencil inputs be the function arguments, and ending each function with a (tail)call to another function, we can rely on the calling convention to place input and output values into consistent registers. This ending call can be easily identified and trimmed off from the stencil. As a minor optimization we rely on specifically the GHC / preserve_none calling convention, which tries to pass as many arguments in registers as possible. This maximizes our ability to keep values in registers, and minimizes the chance that the compiler will try to generate a stack frame as we won’t be pushing arguments to a stack.

Second, we rely on compiler optimizations to elide the stack frame prologue/epilogue and to turn the ending call into a tailcall. Setting up and tearing down a stack frame is a notable overhead on the small stencil functions, and means each setup must have a paired teardown. Ending the stencil with a tailcall is what allows us to trivially elide the jump instruction and fall through into the next concatenated stencil, as well as helping to ensure that any stack operations have been undone before the jump.

Third, we extensively abuse dynamic relocations to allow stencils to declare holes for values to be filled in at JIT compile time, and when compiling the stencil the C compiler will tell us how/where to patch in constants or addresses into the code that it generated. If we wish to be able to patch in an integer constant, we can declare an extern int some_constant , and then cast the address of that variable to an int. By paying attention to the name of the extern symbol being referenced, we can more intelligently disambiguate its intended use and treat certain references specially. The machine code model has a significant impact on the relocations generated, and we’ll discuss that more later.

To show how these all fit together, let us consider a stencil which swaps its two arguments between its input and output, and multiplies them by a patchable constant:

#include <stdint.h>
extern void hole_fn(void) __attribute__((preserve_none));
extern int hole_for_int;

__attribute__((preserve_none))
void swap_and_multiply(int a, int b) {
  const int hole_value = (int)((uintptr_t)&hole_for_int);
  int c = a * hole_value;
  a = b * hole_value;
  b = c;

  typedef void(*outfn_type)(int, int) __attribute__((preserve_none));
  outfn_type stencil_output = (outfn_type)&hole_fn;
  stencil_output(a, b);
}

We compile this with clang -mcmodel=medium -O3 -c swap_and_multiply.c , and examine the generated code with objdump -d -Mintel,x86-64 --disassemble --reloc swap_and_multiply.o :

0000000000000000 <swap_and_multiply>:
   0:	44 89 e0             	mov    eax,r12d
   3:	41 bc 00 00 00 00    	mov    r12d,0x0
			5: R_X86_64_32	hole_for_int
   9:	41 0f af c4          	imul   eax,r12d
   d:	45 0f af e5          	imul   r12d,r13d
  11:	41 89 c5             	mov    r13d,eax
  14:	e9 00 00 00 00       	jmp    19 <swap_and_multiply+0x19>
			15: R_X86_64_PLT32	hole_fn-0x4

And thus we have achieved our exact goals in stencil creation. The function body is only our targeted set of instructions. There’s no stack frame setup or teardown. The relocation information tells us exactly how and where to patch in our integer constant at JIT compile time. And the use of a unique symbol hole_fn means the tail call jump is easy to identify and strip off from the generated code, as we end up with a unique pointer to it.

Now, let’s unwind each of the techniques involved here to illustrate their individual impact on the generated code.

Calling Convention

The standard x86_64 calling convention places the first six arguments into registers (in order: rdi , rsi , rdx , rcx , r8 , r9 ), and then the rest go on the stack. There’s a very nice overview of the standard (cdecl) calling convention for x86_64 in The 64 bit x86 C Calling Convention . However, the guidance for copy-and-patch stencils is to instead opt in to the preserve_none calling convention. Clang/LLVM only supports preserve_none on x86_64 and AArch64, and GCC doesn’t support it at all (but support is being worked on ).

We can look at the difference between cdecl and preserve_none by building a small stencil which just swaps the order of its inputs:

cdecl calling convention preserve_none calling convention
#include <stdint.h>
extern void hole_fn(void)
  __attribute__((cdecl));

__attribute__((cdecl))
void swap_ints(int a, int b) {
  typedef void(*outfn_type)(int, int)
    __attribute__((cdecl));
  outfn_type stencil_output =
    (outfn_type)&hole_fn;
  stencil_output(b, a);
}
#include <stdint.h>
extern void hole_fn(void)
  __attribute__((preserve_none));

__attribute__((preserve_none))
void swap_ints(int a, int b) {
  typedef void(*outfn_type)(int, int)
    __attribute__((preserve_none));
  outfn_type stencil_output =
    (outfn_type)&hole_fn;
  stencil_output(b, a);
}
; <swap_ints>:
mov    eax,edi
mov    edi,esi
mov    esi,eax
jmp    b <swap_ints+0xb>
;; R_X86_64_PLT32	hole_fn-0x4
; <swap_and_multiply>:
mov    eax,r12d
mov    r12d,r13d
mov    r13d,eax
jmp    e <swap_and_multiply+0xe>
;; R_X86_64_PLT32	hole_fn-0x4

Which is…​ not really all that different. preserve_none is useful though as the number of arguments go up. As mentioned above, x86_64 provides six registers for arguments, so we can better illustrate the difference by extending swap_ints to 8 parameters:

#include <stdint.h>
extern void hole_fn(void)
  __attribute__((CALLING_CONVENTION));

__attribute__((CALLING_CONVENTION))
void swap_ints(int a, int b, int c, int d, int e, int f, int g, int h) {
  typedef void(*outfn_type)(int, int, int, int,
                            int, int, int, int)
  __attribute__((CALLING_CONVENTION));
  outfn_type stencil_output = (outfn_type)&hole_fn;
  stencil_output(h, g, f, e, d, c, b, a);
}

// clang -DCALLING_CONVENTION=cdecl -O3 -c
// clang -DCALLING_CONVENTION=preserve_none -O3 -c
cdecl calling convention preserve_none calling convention
; <swap_ints>:
push   rbx
mov    eax,ecx
mov    r10d,edx
mov    r11d,esi
mov    ebx,edi
mov    edi,DWORD PTR [rsp+0x18]
mov    esi,DWORD PTR [rsp+0x10]
mov    edx,r9d
mov    ecx,r8d
mov    r8d,eax
mov    r9d,r10d
push   rbx
push   r11
call   27 <swap_ints+0x27>
;; R_X86_64_PLT32	hole_fn-0x4
add    rsp,0x10
pop    rbx
ret
; <swap_ints>:
mov    eax,r15d
mov    ebx,r14d
mov    r8d,r13d
mov    r9d,r12d
mov    r12d,ecx
mov    r13d,edx
mov    r14d,esi
mov    r15d,edi
mov    edi,eax
mov    esi,ebx
mov    edx,r8d
mov    ecx,r9d
jmp    27 <swap_ints+0x27>
;; R_X86_64_PLT32	hole_fn-0x4

So it’s helpful for when it matters. It moves us from being able to only define stencils with 6 inputs and outputs to stencils that have 12 inputs and outputs, after which preserve_none also runs out of registers and has to start setting up a stack frame. However, there’s multiple categories of registers. Floating point values and SSE operations use xmm registers, AVX uses ymm registers, and AVX-512 uses zmm registers. The calling convention also controls how these operate:

floating point SIMD
STENCIL_FUNCTION
void float_passthrough(float a) {
  DECLARE_STENCIL_OUTPUT(float);
  return stencil_output(a);
}
#include <immintrin.h>
STENCIL_FUNCTION
void simd_passthrough(__m512 a) {
  DECLARE_STENCIL_OUTPUT(__m512);
  return stencil_output(a);
}

cdecl

; <float_passthrough>:
push   r15
push   r14
push   r13
push   r12
push   rbx
call   10e <float_passthrough+0xe>
;; R_X86_64_PLT32
pop    rbx
pop    r12
pop    r13
pop    r14
pop    r15
ret
; <simd_passthrough>:
push   r15
push   r14
push   r13
push   r12
push   rbx
call   12e <simd_passthrough+0xe>
;; R_X86_64_PLT32
pop    rbx
pop    r12
pop    r13
pop    r14
pop    r15
vzeroupper
ret

preserve none

; <float_passthrough>:
jmp    105 <float_passthrough+0x5>
;; R_X86_64_PLT32
; <simd_passthrough>:
jmp    115 <simd_passthrough+0x5>
;; R_X86_64_PLT32

Any number of floating point or simd registers cause a stack frame to get emitted on cdecl, and thus if you’re trying to use them in stencils, you’ll have to use preserve_none. You’ll then be limited to 8 function arguments / registers before it will start passing arguments on the stack.

For SIMD specifically, note that one can use attribute target("arch") to be able to generate code for different SIMD feature sets, and detect which one to select as the code for the stencil at runtime:

__attribute__((preserve_none,target("avx")))
void fused_multiply_add_avx(__m512 a, __m512 b, __m512 c) {
  DECLARE_STENCIL_OUTPUT(__m512);
  return stencil_output(a * b + c);
}

__attribute__((preserve_none,target("no-avx")))
void fused_multiply_add_sse2(__m512 a, __m512 b, __m512 c) {
  DECLARE_STENCIL_OUTPUT(__m512);
  return stencil_output(a * b + c);
}
0000000000000100 <fused_multiply_add_avx>:
 100:	62 f2 75 48 a8 c2    	vfmadd213ps zmm0,zmm1,zmm2
 106:	e9 00 00 00 00       	jmp    10b <fused_multiply_add_avx+0xb>
			107: R_X86_64_PLT32	cnp_stencil_output-0x4
 10b:	0f 1f 44 00 00       	nop    DWORD PTR [rax+rax*1+0x0]

0000000000000110 <fused_multiply_add_sse2>:
 110:	0f 59 c4             	mulps  xmm0,xmm4
 113:	0f 58 44 24 08       	addps  xmm0,XMMWORD PTR [rsp+0x8]
 118:	0f 59 cd             	mulps  xmm1,xmm5
 11b:	0f 58 4c 24 18       	addps  xmm1,XMMWORD PTR [rsp+0x18]
 120:	0f 59 d6             	mulps  xmm2,xmm6
 123:	0f 58 54 24 28       	addps  xmm2,XMMWORD PTR [rsp+0x28]
 128:	0f 59 df             	mulps  xmm3,xmm7
 12b:	0f 58 5c 24 38       	addps  xmm3,XMMWORD PTR [rsp+0x38]
 130:	e9 00 00 00 00       	jmp    135 <fused_multiply_add_sse2+0x25>
			131: R_X86_64_PLT32	cnp_stencil_output-0x4

Tail Call

As was mentioned, we rely on clang’s optimization primary for converting the stencil_output call to a tailcall. It also happens to be necessary for eliding the stack frame prologue and epilogue when it’s not necessary. Going back to our swap_and_multiply example:

#include <stdint.h>
extern void hole_fn(void) __attribute__((preserve_none));
extern int hole_for_int;

__attribute__((preserve_none))
void swap_and_multiply(int a, int b) {
  const int hole_value = (int)((uintptr_t)&hole_for_int);
  int c = a * hole_value;
  a = b * hole_value;
  b = c;

  typedef void(*outfn_type)(int, int) __attribute__((preserve_none));
  outfn_type stencil_output = (outfn_type)&hole_fn;
  stencil_output(a, b);
}

We can look at the resulting code without optimizations ( -O0 ) and with optimizations ( -O3 ):

clang -O0 clang -O3
; <swap_and_multiply>:
push   rbp (1)
mov    rbp,rsp
sub    rsp,0x20
mov    DWORD PTR [rbp-0x4],r12d
mov    DWORD PTR [rbp-0x8],r13d
mov    eax,0x0
;; R_X86_64_32	hole_for_int
mov    DWORD PTR [rbp-0xc],eax
mov    eax,DWORD PTR [rbp-0x4]
mov    ecx,DWORD PTR [rbp-0xc]
imul   eax,ecx
mov    DWORD PTR [rbp-0x10],eax
mov    eax,DWORD PTR [rbp-0x8]
mov    ecx,DWORD PTR [rbp-0xc]
imul   eax,ecx
mov    DWORD PTR [rbp-0x4],eax
mov    eax,DWORD PTR [rbp-0x10]
mov    DWORD PTR [rbp-0x8],eax
mov    QWORD PTR [rbp-0x18],0x0
;; R_X86_64_32S	hole_fn
mov    rax,QWORD PTR [rbp-0x18]
mov    r12d,DWORD PTR [rbp-0x4]
mov    r13d,DWORD PTR [rbp-0x8]
call   rax (3)
add    rsp,0x20
pop    rbp (2)
ret
; <swap_and_multiply>:
mov    eax,r12d
mov    r12d,0x0
;; R_X86_64_32	hole_for_int
imul   eax,r12d
imul   r12d,r13d
mov    r13d,eax
jmp    19 <swap_and_multiply+0x19> (3)
;; R_X86_64_PLT32	hole_fn-0x4

So, clang is obviously doing great work for us. and are the stack frame setup and teardown in the unoptimized version, and they’ve been elided in the optimized version. The call at has been replaced with a tailcall jmp at .

I’m not aware of a more specific way to request clang to emit the stack frame when it’s not necessary. -fomit-frame-pointer -momit-leaf-frame-pointer causes clang to drop the push rbp / pop rbp , but the sub rsp,0x20 and add rsp,0x20 remain as the unoptimized code relies on the stack for local variables. Maybe running only mem2reg would then suffice, but the whole point here is to get all of LLVM’s optimizations for "free" within a stencil anyway.

Clang does support the musttail attribute to force tailcall generation. However, it requires that the input and output types match, which doesn’t fit our needs for stencil creation.

extern void hole_fn(void) __attribute__((preserve_none));

__attribute__((preserve_none))
void add_two_ints(int a, int b) {
  typedef void(*outfn_type)(int) __attribute__((preserve_none));
  outfn_type stencil_output = (outfn_type)&hole_fn;
  // Force the tailcall, via an attribute on the return statement.
  __attribute__((musttail)) return stencil_output(a + b);
}
$ clang -O3 -c example.c
example.c:12:29: error: cannot perform a tail call to function 'stencil_output'
because its signature is incompatible with the calling function
   12 |   __attribute__((musttail)) return stencil_output(a + b);
      |                             ^
example.c:11:3: note: target function has different number of parameters
(expected 2 but has 1)
   11 |   outfn_type stencil_output = (outfn_type)&hole_fn;
      |   ^
example.c:12:18: note: tail call required by 'musttail' attribute here
   12 |   __attribute__((musttail)) return stencil_output(a + b);
      |                  ^

So, unless that changes in the future, we have to rely on -O3 magically doing the right thing.

Relocations

This far, we’ve examined the "copy" part of copy-and-patch. It is now time to focus on the "patch" part instead.

A relocation is a bit of information that clang leaves for the dynamic linker when referencing an external symbol, so that when the program is run and the executable and its various libraries are loaded into random addresses in memory, the dynamic linker can patch the executable with the correct addresses of all of the symbols it needs. In copy-and-patch, we abuse this by referencing an external symbol every time that we want a hole to be inserted into the stencil, and then looking at the relocation information generated after compilation to know what offsets to patch within the generated code to fill the hole at JIT compile time.

We lean heavily on the medium machine code model, which sets the expectation that code can be referenced within +-2GB (32-bit values), and large data needs to be referenced by full 64-bit values. Others have covered the topics of machine code models and relocations before, so please see Understanding the x64 code models or Relocation Overflow and Code Models for background on this topic. The official AMD64 ABI documentation is atypically clear and useful as well. Small views both code and data as 32-bit values, large views both as 64-bit values, and so using medium means we’re able to generate holes of either 32-bit or 64-bit depending on if we reference code or data.

I’ve summarized everything to be aware of within the realm of making holes into one program:

#include <stdint.h>

extern uint8_t cnp_small_data_array[8];
extern uint8_t cnp_large_data_array[1000000];
extern void cnp_function_near(uint32_t, uint64_t);
extern uint8_t cnp_function_far[1000000];

void stencil_example(void) {
  uint32_t small = (uint32_t)((uintptr_t)&cnp_small_data_array);
  uint64_t large = (uint64_t)((uintptr_t)&cnp_large_data_array);
  typedef void(*fn_ptr_t)(uint32_t, uint64_t);
  fn_ptr_t near_ptr = &cnp_function_near;
  near_ptr(small, large);

  uint64_t largefn = (uint64_t)((uintptr_t)&cnp_function_far);
  asm volatile("" : "+r" (largefn) : : "memory");
  fn_ptr_t far_ptr = (fn_ptr_t)largefn;
  far_ptr(small, largefn);
}

The key part, which I cannot emphasize enough, is that we completely and utterly ignore the actual data referred to by the symbol. We always take the address of the symbol, and cast it to what we need. Hence, the use of some macros above to make this friendlier.

We compile this with clang -O3 -mcmodel=medium -c example.c , though -mcmodel=medium is the default anyway, and view the generated code and relocations with objdump -d -Mintel,x86-64 --disassemble --reloc example.o as usual:

0000000000000000 <stencil_example>:
   0:	50                   	push   rax
   1:	48 be 00 00 00 00 00 	movabs rsi,0x0
   8:	00 00 00
			3: R_X86_64_64	cnp_large_data_array
   b:	bf 00 00 00 00       	mov    edi,0x0
			c: R_X86_64_32	cnp_small_data_array
  10:	e8 00 00 00 00       	call   15 <stencil_example+0x15>
			11: R_X86_64_PLT32	cnp_function_near-0x4
  15:	48 be 00 00 00 00 00 	movabs rsi,0x0
  1c:	00 00 00
			17: R_X86_64_64	cnp_function_far
  1f:	bf 00 00 00 00       	mov    edi,0x0
			20: R_X86_64_32	cnp_small_data_array
  24:	58                   	pop    rax
  25:	ff e6                	jmp    rsi

When referring to a small piece of data, we’ll get a 32-bit hole. You can see this with the relocation for cnp_small_data_array being a R_X86_64_32 . Referring to a large piece of data instead gets us a 64-bit hole. cnp_large_data_array was assigned R_X86_64_64 , and clearly there are more 00 bytes to fill in. -mlarge-data-threshold=threshold controls the exact line between how large an array must be for it to be considered "large data" and get 64-bit addressing treatment, but it’s safe to just declare a needlessly large extern array as the array won’t exist anyway.

When calling a function, the function is expected to be within +-2GB according to the code model, so the invocation of cnp_function_near becomes a 32-bit hole of R_X86_64_PLT32 . When patching references between stencils, it will be important to track the exact offsets of the source jmp/call and the destination, as the offset is relative. If you wish to call back into a function that’s a part of the JIT compiler runtime, that function won’t likely be within +-2GB. We need to be able to emit a call/jmp to the full 64-bit address. It turns out that this is incredibly difficult to do:

void stencil_example(void) {
  typedef void(*fn_ptr_t)(uint64_t);
  fn_ptr_t direct_assign = (fn_ptr_t)((uintptr_t)&cnp_function_far);
  direct_assign(0);

  uint64_t far_as_int = (uint64_t)((uintptr_t)&cnp_function_far);
  fn_ptr_t indirect_assign = (fn_ptr_t)far_as_int;
  indirect_assign(far_as_int);

  uint64_t far_forgettable = (uint64_t)((uintptr_t)&cnp_function_far);
  // Abuse an empty asm volatile to make clang unable to understand
  // where the value came from.
  asm volatile("" : "+r" (far_forgettable) : : "memory");
  fn_ptr_t forgotten = (fn_ptr_t)far_forgettable;
  forgotten(far_forgettable);
}
0000000000000000 <stencil_example>:
   0:	53                   	push   rbx
   1:	31 ff                	xor    edi,edi
   3:	e8 00 00 00 00       	call   8 <stencil_example+0x8>
			4: R_X86_64_PLT32	cnp_function_far-0x4
   8:	48 bb 00 00 00 00 00 	movabs rbx,0x0
   f:	00 00 00
			a: R_X86_64_64	cnp_function_far
  12:	48 89 df             	mov    rdi,rbx
  15:	e8 00 00 00 00       	call   1a <stencil_example+0x1a>
			16: R_X86_64_PLT32	cnp_function_far-0x4
  1a:	48 89 df             	mov    rdi,rbx
  1d:	5b                   	pop    rbx
  1e:	ff e7                	jmp    rdi

Of which we see that there’s two 32-bit relocations ( R_X86_64_PLT32 ) and one 64-bit one ( R_X86_64_64 ). There’s 32-bit relocations because clang sees that we turned an external symbol into a function pointer. Code must be within +-2GB according to the code model, so 32 bits is fine. Clang is also then smart enough to track this through an assignment to a variable, and although it loads the full 64-bit address into a register as the argument, it then emits a 32-bit relocation for the actual call, because it still knows that the address came from a symbol definition. The only way I found to make clang "forget" the source of function pointer value was to run it through an empty asm volatile so that clang thinks no assumptions are valid anymore, and then it finally is willing to just jump to the 64-bit value in the register.


Major internet outage hits websites, apps and Lloyds Bank after issue at Amazon Web Services – business live

Guardian
www.theguardian.com
2025-10-20 10:06:06
Fortnite, Roblox, SnapChat, Lloyds Bank and Ring doorbells among services hit by online outage Heads-up: UBS have upgraded their view on global equities to ‘attractive’, a sign that they think investors should be putting more money into shares. Despite the angst in parts of the market that valuatio...
Original Article

Users report major internet outage after AWS suffers 'operational issue'

A major internet outage has hit dozens of popular websites, online games and apps.

Users have been reporting problems accessing sites such as Amazon , Roblox , Fortnite and Snapchat, for example, this morning.

The outage may be related to a glitch at Amazon Web Services (AWS), the e-commerce giant’s on-demand cloud computing platform which underpins many online operations.

AWS has reported an ‘operational issue’ at its data centre in North Virginia, known as us-east-1”.

It says:

We can confirm increased error rates and latencies for multiple AWS Services in the US-EAST-1 Region. This issue may also be affecting Case Creation through the AWS Support Center or the Support API.

We are actively engaged and working to both mitigate the issue and understand root cause.

Key events

Please turn on JavaScript to use this feature

Some precious Duolingo streaks could be lost unless the internet outage is fixed before the end of the day.

There has been a jump in reports of problems at Duolingo since around 8am, according to Downdetector:

A chart showing reports of problems at Duolingo
A chart showing reports of problems at Duolingo Illustration: Downdetector

However, Duolingo’s service status page declares “All Systems Operational”, so users may not get a break from being pestered to do a language lesson.

AWS find "potential root cause for error rates"

Progress! Amazon Web Services say they may have found the cause of the problem.

In an operational update, AWS say:

We have identified a potential root cause for error rates for the DynamoDB APIs in the US-EAST-1 Region.

Based on our investigation, the issue appears to be related to DNS resolution of the DynamoDB API endpoint in US-EAST-1. We are working on multiple parallel paths to accelerate recovery. This issue also affects other AWS Services in the US-EAST-1 Region. Global services or features that rely on US-EAST-1 endpoints such as IAM updates and DynamoDB Global tables may also be experiencing issues.

During this time, customers may be unable to create or update Support Cases. We recommend customers continue to retry any failed requests. We will continue to provide updates as we have more information to share, or by 2:45 AM [10.45am UK time].

There are also hundreds of reports of problems accessing the website of the UK’s HM Revenue & Customs (HMRC) – the UK’s tax, payments and customs authority – this morning.

If you try to log into the site, which can be used to submit tax returns or seek advice on tax, you are told “Sorry, there is a problem with the service”.

A screengrab showing HMRC’s website isn’t working
Photograph: HMRC

Thousand Eyes shows scale of internet outages

Cisco’s Thousand Eyes , a service that tracks internet outages , reports a surge in problems this morning.

Many are located at Virginia, the location of Amazon’s US-EAST-1 region where engineers are working now to fix the problem responsible for “increased error rates and latencies”

A map showing internet outages
A map showing internet outages Photograph: Thousand Eyes

Ring doorbells brought down by internet outage

This morning’s outage also appears to have scuppered some smart doorbells.

Several people are reporting problems using their Ring doorbells this morning:

@ring_uk your entire system is not working. Not telephone support. No chat. Doorbell online and can’t connect. What the actual F? No messages on socials telling us why?!?

— Richard A Morgan (He, Him) (@morgocomics) October 20, 2025

Good morning @ring
I'm not getting any notifications/alerts for my doorbell. The history of events is not loading either. The devices on the app seem to be 'loading'.
Am I the only person having this problem?

— Miss Understood (ツ) (@miss_sahota) October 20, 2025

Ring were bought by Amazon in 2018.

AWS: engineers are actively working to fix the problem

Amazon Web Services have updated their services health data, saying engineers have been “immediately engaged” to fix the problem at its services.

AWS says:

We can confirm significant error rates for requests made to the DynamoDB endpoint in the US-EAST-1 Region.

This issue also affects other AWS Services in the US-EAST-1 Region as well. During this time, customers may be unable to create or update Support Cases.

Engineers were immediately engaged and are actively working on both mitigating the issue, and fully understanding the root cause. We will continue to provide updates as we have more information to share, or by 2:00 AM [10am UK time].

Problems at Lloyds, Halifax and Bank of Scotland

Some bank customers are reporting problems logging into their accounts this morning.

There’s been a surge of reports of problems at Lloyds Bank , Halifax , and Bank of Scotland (which are all part of Lloyds Banking Group), according to Downdetector.

A chart showing outages reported at Lloyds Banking Group
A chart showing outages reported at Lloyds Banking Group Photograph: Downdetector

Fortnite investigating log-in problems

Fortnite, the popular online gaming platform, has confirmed its services are affected by the Internet outage, saying:

An outage affecting several services on the internet is also impacting Fortnite log-ins. We’re investigating this now, and will update you when we have more details.

⚠️ An outage affecting several services on the internet is also impacting Fortnite log-ins. We're investigating this now, and will update you when we have more details.

— Fortnite Status (@FortniteStatus) October 20, 2025

Users report major internet outage after AWS suffers 'operational issue'

A major internet outage has hit dozens of popular websites, online games and apps.

Users have been reporting problems accessing sites such as Amazon , Roblox , Fortnite and Snapchat, for example, this morning.

The outage may be related to a glitch at Amazon Web Services (AWS), the e-commerce giant’s on-demand cloud computing platform which underpins many online operations.

AWS has reported an ‘operational issue’ at its data centre in North Virginia, known as us-east-1”.

It says:

We can confirm increased error rates and latencies for multiple AWS Services in the US-EAST-1 Region. This issue may also be affecting Case Creation through the AWS Support Center or the Support API.

We are actively engaged and working to both mitigate the issue and understand root cause.

Oil hit by glut predictions

Concerns about a global oil glut have hit crude prices this morning.

Brent crude is down 0.75% at $60.84 per barrel, approaching a five-month low, following recent forecasts that supply will rise faster than demand.

Toshitaka Tazawa , an analyst at Fujitomi Securities , explained:

“Concerns about oversupply from increased production by oil- producing nations, coupled with fears of an economic slowdown stemming from escalating U.S.-China trade tensions, are fuelling selling pressure.”

Out of Words – crafting gaming’s most unusual love story from clay and glue

Guardian
www.theguardian.com
2025-10-20 10:00:40
Most games want you to save the world. This stop-motion adventure wants you to hold someone’s hand Stop-motion adventure Out of Words was one of the most striking reveals at this year’s Summer Game Fest. While most games are built from code, Out of Words is made from clay, fabric, and glue: a love s...
Original Article

S top-motion adventure Out of Words was one of the most striking reveals at this year’s Summer Game Fest. While most games are built from code, Out of Words is made from clay, fabric, and glue: a love story literally crafted by hand that even caught the attention of Metal Gear creator Hideo Kojima (“The biggest praise we could imagine,” game director Johan Oettinger says.)

Oettinger dreamed of making a stop-motion video game since he was 12, when he first played 90s point-and-click claymation game The Neverhood. After years working across films, commercials and installation art, Out of Words became the project to merge these two lifelong passions.

Two claymation characters running through a grassy landscape.
A handcrafted world … Out of Words. Photograph: Epic Games

The process of bringing clay to life is as unusual as the game itself. The 40-strong studio in Aarhus, Denmark developed a pipeline of custom scanners and photogrammetry to bring handcrafted assets into Unreal Engine , paired with stop-motion cutscenes that see animators move puppets 12 to 24 frames per second. “Something magical happens when real materials are formed to represent a character … [it] gives a sincerity that makes the character more real than any other form of storytelling,” Oettinger says.

This means that the studio is as likely to be immersed in glue, wood and clay as it is in programming. “Yes – and we’re just as likely to be singing the best Disney songs than to be in the most focused silence,” game developer Mariano Pugliese says. “The same enthusiasm is shared between the puppet maker turning wood and iron into a character and the CG artist adjusting the sets in the game.”

The story of Out of Words centres on Kurt and Karla, two young characters on the brink of first love. “I want to tell a story that meditates on the split second where you are looking for the right words to say to the one you are falling in love with, and this is of course almost impossible, especially if you are an early teen,” says Oettinger. “The split second becomes a whole inner journey of light and darkness … in our story this journey is shared between the two characters, the split second becomes their inner world of finding the right words and hopefully coming out on the other side of the game holding hands.”

Four characters made of clay looking at the camera.
A cast made of clay … Out of Words. Photograph: Epic Games

Out of Words is a co-operative adventure controlled by two players, either in person or online, with the journey the game’s leads go on mirroring the players’ evolving relationship.

“The co-operative play element is at the very core of the game design,” says design lead Jeff Sparks. “Players experience all the highs and lows together, guiding Kurt and Karla through a world full of unexpected challenges.” Trust is a key theme, while the co-op action has been designed to reflect the evolving bond between the characters.

Even its strangest puzzles are built on emotion. One features a giant clay face, Sparks explains: “When Kurt and Karla walk on to his mouth, it twists and shifts around, almost like they’re tickling his lips … at its core it’s just an elaborate door, but with all the creativity and mechanics layered on to it, it comes alive in a way I’ve never seen before.”

In the end, Out of Words isn’t about bosses , but the adventure of human personal connection. “You must treasure the immense adventure it is to express your love to someone for the first time,” Oettinger says. “The dream is that you experience this and it forges a deep connection between the two of you playing.”

The ambitions are lofty: “We basically want you to remember this experience for the rest of your life.” And what could be more fitting for a game made by hand, than to be all about what happens when you finally reach out and take one?

AWS outage crashes Amazon, PrimeVideo, Fortnite, Perplexity and more

Bleeping Computer
www.bleepingcomputer.com
2025-10-20 09:24:25
AWS outage has taken down millions of websites, including Amazon.com, PrimeVideo, Perplexity AI, Canva and more. [...]...
Original Article

AWS

AWS outage has taken down millions of websites, including Amazon.com, PrimeVideo, Perplexity AI, Canva and more.

The outage started approx 30 minutes ago and it's affecting consumers in all regions, including the United States and Europe.

According to AWS Status page, Amazon is aware of major disruption affecting multiple services.

"We can confirm increased error rates and latencies for multiple AWS Services in the US-EAST-1 Region. This issue may also be affecting Case Creation through the AWS Support Center or the Support API. We are actively engaged and working to both mitigate the issue and understand root cause," AWS noted.

"We are investigating increased error rates and latencies for multiple AWS services in the US-EAST-1 Region."

This is a developing story...

AWS outage crashes Amazon, Prime Video, Fortnite, Perplexity and more

Bleeping Computer
www.bleepingcomputer.com
2025-10-20 09:24:25
AWS outage has taken down millions of websites, including Amazon.com, Prime Video, Perplexity AI, Canva and more. [...]...
Original Article

AWS

AWS outage has taken down millions of websites, including Amazon.com, Prime Video, Perplexity AI, Canva and more.

The outage started approx 30 minutes ago and it's affecting consumers in all regions, including the United States and Europe.

According to AWS Health page , Amazon is aware of major disruption affecting multiple services.

"We can confirm increased error rates and latencies for multiple AWS Services in the US-EAST-1 Region. This issue may also be affecting Case Creation through the AWS Support Center or the Support API. We are actively engaged and working to both mitigate the issue and understand root cause," AWS noted.

"We are investigating increased error rates and latencies for multiple AWS services in the US-EAST-1 Region."

While Amazon has not shared the specific cause of the outage, the status updates indicate that it is related to a DNS resolution issue for the DynamoDB API endpoint in the US-EAST-1 AWS region.

Fortnite, Perplexity, Canva and others confirm service disruption

In a post on X, Epic Games' Fornite confirmed a major service disruption. While Fornite gameplay itself is not affected, you won't be able to log-in, as the login is powered by AWS.

Perplexity also confirmed its chat app is offline due to AWS outage.

Graphic design company Canva acknowledgd service outage impacting image editing and other features.

"We are currently experiencing significantly increased error rates which are impacting functionality on Canva. Our team is actively investigating the issue and working to restore full access as quickly as possible," Canva noted on its status page.

According to Downdetector , 15 major services, including enertainment platforms like Roblox and Hulu, are offline due to AWS issues.

List of major services affected by AWS outage:

  • Amazon
  • Prime Video
  • Fortnite
  • Canvas
  • Clash of Clans
  • Clash of Royals
  • Palworld
  • Snapchat
  • Perplexity
  • Canva
  • Roblox
  • Hulu
  • Robinhood
  • Grammarly

Update 10/20/25 5:00 AM EDT: Some services are recovering after 45 minutes of outage

Update 10/20/25 5:25 AM EDT : AWS says the services are now fully restored.

Update 10/20/25 12:06 PM EDT: While AWS says they have mitigated the DNS issue, they are now saying they are having issues with its network load balancers, which continue to cause widespread outages for companies utilizing the cloud platform.

"We have taken additional mitigation steps to aid the recovery of the underlying internal subsystem responsible for monitoring the health of our network load balancers and are now seeing connectivity and API recovery for AWS services," reads AWS' status page.

"We have also identified and are applying next steps to mitigate throttling of new EC2 instance launches. We will provide an update by 10:00 AM PDT."

Many online services, including Canvas, which is widely used by schools in the US, continue to show outage messages when users attempt to log in to the platform.

Canvas still down due to AWS outage
Canvas still down due to AWS outage
Source: BleepingComputer

BleepingComputer contacted Amazon with questions about what is causing the outage and will update the article if we receive a response.

This is a developing story.

Major AWS outage takes down Fortnite, Alexa, Snapchat, and more

Hacker News
www.theverge.com
2025-10-20 09:12:09
Comments...
Original Article

Jess Weatherbed

is a news writer focused on creative industries, computing, and internet culture. Jess started her career at TechRadar, covering news and hardware reviews.

Amazon Web Services (AWS) is currently experiencing a major outage that has taken down online services, including Amazon, Alexa, Snapchat, Fortnite , ChatGPT , Epic Games Store, Epic Online Services , and more. The AWS status checker is reporting that multiple services are “impacted” by operational issues, and that the company is “investigating increased error rates and latencies for multiple AWS services in the US-EAST-1 Region” — though outages are also impacting services in other regions globally.

Users on Reddit are reporting that the Alexa smart assistant is down and unable to respond to queries or complete requests, and in my own experience, I found that routines like pre-set alarms are not functioning. The AWS issue also appears to be impacting platforms running on its cloud network, including Perplexity, Airtable, Canva, and the McDonalds app. The cause of the outage hasn’t been confirmed, and it’s unclear when regular service will be restored.

“Perplexity is down right now,” Perplexity CEO Aravind Srinivas said on X . “The root cause is an AWS issue. We’re working on resolving it.”

The AWS dashboard first reported issues affecting the US-EAST-1 Region at 3:11AM ET. “We are actively engaged and working to both mitigate the issue and understand root cause. We will provide an update in 45 minutes, or sooner if we have additional information to share,” Amazon said in an update published at 3:51AM ET.

AWS outages in the US-East-1 region have created widespread disruptions in 2023 , 2021 , and 2020 , forcing multiple websites and platforms offline for several hours before regular service was restored.

Follow topics and authors from this story to see more like this in your personalized homepage feed and to receive email updates.

Docker Systems Status: Full Service Disruption

Hacker News
www.dockerstatus.com
2025-10-20 08:31:23
Comments...
Original Article

Issues accessing Registry, Hub, Scout, DBC, DHI Full Service Disruption

Components

Docker Hub Registry, Docker Authentication, Docker Hub Web Services, Docker Desktop, Docker Billing, Docker Package Repositories, Docker Hub Automated Builds, Docker Hub Security Scanning, Docker Docs, Docker Community Forums, Docker Support, Docker.com Website, Docker Scout, Docker Build Cloud, Testcontainers Cloud, Docker Cloud, Docker Hardened Images

Locations

Client Machines, Docker Web Services


October 20, 2025 01:22 PDT
October 20, 2025 08:22 UTC

[Identified] We have identified the underlying issue with one of our cloud service providers. We are monitoring the situation and prepare our systems for when the issues with our service provider resolve.


October 20, 2025 00:16 PDT
October 20, 2025 07:16 UTC

[Investigating] We are seeing issues accessing and using our services across many of our products. We are currently investigating and will report back as soon as possible..

AWS Multiple Services Down in us-east-1

Hacker News
health.aws.amazon.com
2025-10-20 08:22:28
Comments...

Major AWS Outage Happening

Hacker News
old.reddit.com
2025-10-20 08:11:06
Comments...
Original Article

whoa there, pardner!

Your request has been blocked due to a network policy.

Try logging in or creating an account here to get back to browsing.

If you're running a script or application, please register or sign in with your developer credentials here . Additionally make sure your User-Agent is not empty and is something unique and descriptive and try again. if you're supplying an alternate User-Agent string, try changing back to default as that can sometimes result in a block.

You can read Reddit's Terms of Service here .

If you think that we've incorrectly blocked you or you would like to discuss easier ways to get the data you want, please file a ticket here .

When contacting us, please include your Reddit account along with the following code:

6d86031f-5923-4b60-8d49-b0e90359266b

Discussion of the Benefits and Drawbacks of the Git Pre-Commit Hook

Lobsters
yeldirium.de
2025-10-20 07:54:33
Comments...
Original Article

The Git pre-commit hook is a commonly used tool for automated quality assurance in the contribution process to software projects. It is run by default when a user creates a commit and prevents the commit from being created if it fails, i.e. exits with a non-zero exit code. Setting up a pre-commit hook is as simple as putting a script named pre-commit into the .git/hooks folder. The pre-commit hook can be a powerful tool, since it is the last chance to prevent sensitive information from being tracked by version control, which can be difficult to clean up if it is noticed too late.

This text does not concern itself with the pre-receive hook or other Git hooks, for which different benefits and drawbacks might be true. It is only focused on the pre-commit hook, since it is the most widely used hook and the one I was recently confronted with.

Since the hooks folder is not part of Git’s version control, setting up the pre-commit hook is not trivial to do for new contributors to a project and requires some tools to set up easily. There are various tools for this - Pre-Commit is a standalone tool, Prek is a work-in-progress Rust re-engineering of Pre-Commit, and Husky requires Node.js and npm and supports other Git hooks as well.

Pre-Commit includes dependency management of hooks, which are written and packaged for it. Common quality assurance tools like Gitleaks , ESLint or Nixfmt include a manifest with a configuration for Pre-Commit, which makes them installable with it.

Husky relies on users using npm to install it and any other dependencies, and its hooks are predominantly written in JavaScript/TypeScript. It is more of an ecosystem-internal solution than a tool for software projects in general.

Without external tools, a common and simple way to set up hooks and to track the hooks in Git’s version control is to create symlinks from the hooks folder to the repository with something like ln -s $(pwd)/scripts/pre-commit .git/hooks/pre-commit . The symlink has to be created only once after cloning the repository, which works well for long time contributors, but is an easily overlooked step during onboarding.

Benefits, Drawsbacks and Everything In-Between

Preventing Leaked Secrets

Secret strings should never be committed into version control and the pre-commit hook is the last point in time to prevent that. Once committed, secrets can be hard to remove and an accidental push can add the secrets to the - in some cases public - remote repository. Experienced Git users can find their way around this, but preventing this from happening in the first place is still more secure and helps prevent mistakes from less experienced users. Gitleaks and other tools can quickly scan changed files for common patterns of secret strings like API tokens, SSH keys, GPG keys and others. Including Gitleaks in a pre-commit hook is unintrusive, since it is quite fast, and gives immediate feedback, thus being a huge security benefit without much friction.

Formatting, Linting, Testing

Other changes are not necessarily problematic when pushed to a remote repository, but will most likely be rejected during code review, namely formatting issues, linter issues and failing tests. Formatting issues can be prevented by auto-formatting in the pre-commit hook, which requires zero user intervention. Committing files without reviewing them, even automatically changing them upon commit, still feels wrong to me, but in the case of formatters there is no discussion about the result anyway and if something goes wrong it will be caught in code review afterwards. Linter issues and failing tests would usually be caught by QA pipelines before merging a pull request, so checking for these locally strictly isn’t necessary either, but catching them early tightens the feedback loop.

These tools can sometimes, depending on project size, take a while to run, which increases friction in the workflow. If they impede the workflow, their use should be reconsidered.

Reducing Mental Load

Most projects have scripts to format, lint and test before pushing. Oftentimes developers - and I’m definitely not an exception here - forget to run those, only for CI pipelines to fail two minutes later and prevent a merge. Having a pre-commit hook run these scripts automatically reduces mental load and increases the probability that issues are caught before code review.

Not Every Commit is Made to Satisfy QA

Most developers use Git to track incremental changes while working. We don’t commit only after a few days of work, but multiple times per day. Good developers clean up their commit history afterwards, to improve commit messages and to make commits individually semantically complete. But before that, commits are often made in incomplete states, including linter errors and most of all failing tests. Preventing commits due to failing tests can be a serious break in workflows and can and will make developers develop the habit of disabling pre-commit hooks, even setting aliases to do so.

Necessary Setup

Pre-commit hooks have to be installed for Git to know about them and to run them. This requires a setup step after cloning a repository, which is one step more in every onboarding process. This is not necessarily a problem, but requires attention and documentation. The setup can be simplified by integrating it with other setup steps, e.g. by using devenv to set up development environments. It can even be automated entirely by using something like direnv .

Slow Hooks are Annoying

If your workflow is to make many small commits, long running hooks are seriously annoying and can also lead to the scenario described above, where developers disable pre-commit hooks out of habit.

Conclusion

The biggest benefit, entirely preventing secret leaks, is definitely worth the setup of pre-commit hooks. However, the hooks need to be set up in a way that makes people actually use them, which means keeping them fast and reducing the friction in development workflows. What this means, exactly, depends on your project and your team. Formatters are fast, even most linters are and can be configured to fix issues without manual intervention and to ignore the rest, which means that no commits are ever blocked and some QA is ensured automatically. Running tests in hooks is probably a bad idea, since it can prevent work-in-progress commits and be a real nuisance.

I’ve recently started using devenv to set up my development environments. Devenv integrates Pre-Commit and makes it very easy to install and manage pre-commit hooks. This makes onboarding very easy, removes Pre-Commit’s package management and unifies package management using Nix , and even integrates pre-commit hooks in a central devenv test command.

So far, this makes for a pretty good workflow for me. Having the pre-commit hook automatically fix formatting issues and some linter issues feels pretty good. I have not yet tested all of this in actual large projects and I suspect that friction will be much worse with more code, but intend to do test it and then re-evaluate.


Alternatives

I’ve seen projects include a make reviewable command or script that runs the full QA pipeline and tells the user whether the current state of the repository is fit for review. This is much more manual, but does not interfere with the normal development workflow and is more explicit. However, users need to be told about it and they need to actively use it. Many code reviews in these projects begin with a feedback loop of “please run this script” - “oh, i didn’t know/forgot this exists”. More automation makes this smoother.

Bat v0.26.0 Released

Hacker News
github.com
2025-10-20 07:49:29
Comments...
Original Article

v0.26.0

Features

Bugfixes

Other

Syntaxes

Themes

DeepSeek OCR

Hacker News
github.com
2025-10-20 07:26:33
Comments...
Original Article

DeepSeek AI


Homepage Hugging Face

Discord Twitter Follow

📥 Model Download | 📄 Paper Link | 📄 Arxiv Paper Link |

Explore the boundaries of visual-text compression.

Release

  • [2025/x/x]🚀🚀🚀 We release DeepSeek-OCR, a model to investigate the role of vision encoders from an LLM-centric viewpoint.

Contents

Install

Our environment is cuda11.8+torch2.6.0.

  1. Clone this repository and navigate to the DeepSeek-OCR folder
git clone https://github.com/deepseek-ai/DeepSeek-OCR.git
  1. Conda
conda create -n deepseek-ocr python=3.12.9 -y
conda activate deepseek-ocr
  1. Packages
  • download the vllm-0.8.5 whl
pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu118
pip install vllm-0.8.5+cu118-cp38-abi3-manylinux1_x86_64.whl
pip install -r requirements.txt
pip install flash-attn==2.7.3 --no-build-isolation

Note: if you want vLLM and transformers codes to run in the same environment, you don't need to worry about this installation error like: vllm 0.8.5+cu118 requires transformers>=4.51.1

vLLM-Inference

  • VLLM:

Note: change the INPUT_PATH/OUTPUT_PATH and other settings in the DeepSeek-OCR-master/DeepSeek-OCR-vllm/config.py

cd DeepSeek-OCR-master/DeepSeek-OCR-vllm
  1. image: streaming output
python run_dpsk_ocr_image.py
  1. pdf: concurrency ~2500tokens/s(an A100-40G)
python run_dpsk_ocr_pdf.py
  1. batch eval for benchmarks
python run_dpsk_ocr_eval_batch.py

Transformers-Inference

  • Transformers
from transformers import AutoModel, AutoTokenizer
import torch
import os
os.environ["CUDA_VISIBLE_DEVICES"] = '0'
model_name = 'deepseek-ai/DeepSeek-OCR'

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, _attn_implementation='flash_attention_2', trust_remote_code=True, use_safetensors=True)
model = model.eval().cuda().to(torch.bfloat16)

# prompt = "<image>\nFree OCR. "
prompt = "<image>\n<|grounding|>Convert the document to markdown. "
image_file = 'your_image.jpg'
output_path = 'your/output/dir'

res = model.infer(tokenizer, prompt=prompt, image_file=image_file, output_path = output_path, base_size = 1024, image_size = 640, crop_mode=True, save_results = True, test_compress = True)

or you can

cd DeepSeek-OCR-master/DeepSeek-OCR-hf
python run_dpsk_ocr.py

Support-Modes

The current open-source model supports the following modes:

  • Native resolution:
    • Tiny: 512×512 (64 vision tokens)✅
    • Small: 640×640 (100 vision tokens)✅
    • Base: 1024×1024 (256 vision tokens)✅
    • Large: 1280×1280 (400 vision tokens)✅
  • Dynamic resolution
    • Gundam: n×640×640 + 1×1024×1024 ✅

Prompts examples

# document: <image>\n<|grounding|>Convert the document to markdown.
# other image: <image>\n<|grounding|>OCR this image.
# without layouts: <image>\nFree OCR.
# figures in document: <image>\nParse the figure.
# general: <image>\nDescribe this image in detail.
# rec: <image>\nLocate <|ref|>xxxx<|/ref|> in the image.
# '先天下之忧而忧'

Visualizations

Acknowledgement

We would like to thank Vary , GOT-OCR2.0 , MinerU , PaddleOCR , OneChart , Slow Perception for their valuable models and ideas.

We also appreciate the benchmarks: Fox , OminiDocBench .

Citation

coming soon!

Integer Set Library (ISL) - A Primer

Lobsters
www.jeremykun.com
2025-10-20 07:22:02
Comments...
Original Article

Polyhedral optimization is a tool used in compilers for optimizing loop nests. While the major compilers that use this implement polyhedral optimizations from scratch, 1 there is a generally-applicable open source C library called the Integer Set Library (ISL) that implements the core algorithms used in polyhedral optimization.

This article gives an overview of a subset of ISL, mainly focusing on the representation of sets and relations and basic manipulations on them. Because my personal use of ISL is related to MLIR, I will also include some notes about the MLIR port of ISL (called the Fast Presburger Library, or FPL), as well as an interoperability layer I wrote between ISL and FPL. This article does not cover the details of the algorithms used in ISL.

All the code written for this article is available on GitHub .

Integer sets and quasi-affine formulas

The core data object in ISL is a set of integer points, and the core of polyhedral optimization is to analyze and manipulate such sets. Because the integer sets of interest can be large or infinite, the points are represented indirectly as the set of integer points satisfying a system of equalities and inequalities with a restricted form called quasi-affine formulas (defined formally after the following motivation).

One guiding principle in polyhedral optimization is to construct an integer set corresponding to the “iteration space” of a loop nest. That is, there is one point in the integer set for each execution of a statement within a loop. I’ll work with a simplified loop nest that is a “perfect nest” that contains a single inner-most statement abstracted as S .

for (int i = 0; i < 100; i++) {
    for (int j = 0; j < 200; j++) {
        S(i, j);
    }
}

We want to construct an integer set that contains all pairs (i, j) for which S(i, j) is executed. More generally, a loop nest can have different offsets, bounds, and step sizes, as well as an if statement guarding the execution of S .

for (int i = 3; i < 100; i += 5) {
    for (int j = 100; j > 0; j -= 2) {
        if ((i + j) % 2 == 0 {
            S(i, j);
        }
    }
}

The observation of polyhedral optimization is that most for loops and if statements that occur in practical code have a limited structure. The loops have constant bounds and static, constant step sizes (e.g., we don’t typically increment j as j *= 2 or j += i*i ) and the conditions in if statements typically involve just a little bit more than linear arithmetic. Notably, code often uses floor/ceiling division and modulo operations to express things like operating over a tiled memory layout.

This was the inspiration for the restriction of an integer set to be defined by a system of quasi-affine formulas .

Definition: A quasi-affine formula is a multivariate formula built from the following operations:

  • Integer literals
  • Integer-valued variables
  • addition and subtraction
  • multiplication by an integer constant
  • floor- and ceiling-rounded division by a nonzero integer constant
  • modulus by a nonzero integer constant

Borrowing a BNF grammar from the MLIR website , we can also define it as

affine-expr ::= `(` affine-expr `)`
              | affine-expr `+` affine-expr
              | affine-expr `-` affine-expr
              | `-`? integer-literal `*` affine-expr
              | affine-expr `ceildiv` integer-literal
              | affine-expr `floordiv` integer-literal
              | affine-expr `mod` integer-literal
              | `-`affine-expr
              | bare-id
              | `-`? integer-literal

A Presburger formula is a first-order logical formula involving quasi-affine expressions and the standard binary comparison operators ( = , != , < , <= , > , >= ). In particular, Presburger formulas may contain existentially and universally quantified variables.

A Presburger set (or just integer set , scoped to this article) is a subset of points in $\mathbb{Z}^n$ satisfying a Presburger formula.

Examples using isl_cat

Our first foray into using ISL will be to show some examples of Presburger sets.

The code repository for this article uses bazel with a pre-configured ISL dependency, so if you have bazel (or better bazelisk ) installed, you can clone the repo and build ISL with

git clone git@github.com:j2kun/isl-primer.git
cd isl-primer
bazel build @isl//:isl

The file examples.isl contains the example Presburger sets we will show, and the Polly Labs Playground gives a UI that can be used to visualize the sets. 2 Otherwise, you can run them through bazel build @isl//:isl_cat to have ISL parse them, validate/simplify them, and print them back out.

bazel run @isl//:isl_cat < examples.isl

# isl_cat reads from stdin, so run without args and paste a formula to the terminal
bazel run @isl//:isl_cat
{ [i, j, a, b] : i = a + 1 and (b + j) mod 2 = 0 }

ISL splits the kinds of sets you can define into a few categories. The simplest is a basic set , which is defined as a Presburger set that happens to be a single convex integer polyhedron. As an example, this defines a triangle:

{ [i,j] : 0 < i < 10 and 0 < j < 10 and i < j }

The syntax here is close to a set-builder notation from math. The curly braces {} denote a set, the square brackets [ ] around i, j define the two variables that name the two dimensions of the underlying integer lattice $\mathbb{Z}^2$, and the colon : separates the variable definition from the formulas.

The “underlying integer lattice” is called a space in ISL terminology. A space is simply defined by a number of dimensions, with names for each dimension, and possibly a name for the overall space itself. To name the space above, we put a symbol before the opening square bracket.

{ A[i,j] : 0 < i < 10 and 0 < j < 10 and i < j }

The names of the spaces can be used to identify statements in a loop nest.

Next, ISL defines a set (as opposed to a basic set ) as a Presburger set that happens to have multiple disjoint convex components. These have to be associated with the same underlying space. The Presburger set below defines a union of a triangle and a square.

{ [i, j] : 0 <= i <= 10 and 0 <= j <= 10 and j > i or (6 <= i <= 9 and 0 < j <= 4) }

Next, a union set is a finite union of Presburger sets, which may use different underlying spaces with different dimensions. A semicolon separates the space definitions.

{
    A[i, j] : 0 <= i <= 10 and 0 <= j <= 10;
    B[i, j, k] : 6 <= i <= 9 and 0 < j <= 4 and 0 <= k < 2
}

As far as I understand, the purpose of a union set is to represent two code statements ( A and B ) within the same structure, as they may be statements that share a common parent loop and should be optimized jointly. However, I have not worked with this in detail myself.

Finally there are map analogues of the above set types. A map type is just another Presburger set, but for which some of the variables are distinguished as domain variables, while others are codomain variables (in ISL they are called range variables). An -> separates the domain from the codomain which are otherwise defined as spaces as above.

For example, a basic map splits the variables of a basic set , e.g.,

{
    [i, j] -> [a, b] : i = a + 1 and (b + j) mod 2 = 0
    and 0 <= i < 10 and 0 <= j < 10
    and 0 <= a < 10 and 0 <= b < 10
}

A map is like a basic map but may have multiple disjoint components, and a union map is a finite union of maps.

Mathematically speaking, maps in polyhedral analysis are not really “maps,” but rather binary relations. Nothing here stops one domain point from “mapping” to different range points, or vice versa. Using relations is a strength of the formalism, because it adds more generality. But because, I suppose, someone thought typing “relation” instead of “map” was too laborious, we now have to live with this extra little cognitive overhead.

Finally, polyhedral analysis (and ISL) also supports symbolic constants by prefixing a Presburger set with a list of symbols in square brackets and an -> .

For example, the Presburger set below uses a symbolic constant N to define the upper bound of the variables i and j .

[N] -> { [i, j] : 0 < i < N and j > i and 0 < j < N }

Another supported syntax is to represent an existential local variable. Below is an example of a presburger set where a local variable ensures the domain and range form sequential pairs here the first entry in the pair is even.

{
    [i] -> [j] :
    exists q :
    0 <= i < N
    and 0 <= j < N
    and i = 2*q
    and j = 2*q + 1
}

Generating C code

The first nontrivial thing we can do is to generate C code that iterates over the points in a Presburger set and executes a statement for each point in the set.

To do this we need to specify what’s called an iteration schedule . Iteration schedules can be quite complex, and to my knowledge there is no systematic way to determine what is the “right” iteration schedule. It’s application dependent. So I will only scratch the surface here and focus on the simplest case where you specify what subset of variables are iterated over, and in what order.

This specification is defined by converting a presburger set to a map, which maps all the space variables to the subset of variables you want to iterate over.

For the example of the N -parameterized triangle above,

[N] -> { [i, j] : 0 < i < N and j > i and 0 < j < N }

we can convert the [i,j] space to A[i, j] -> [i, j] to tell it to iterate over both i and j , in that order.

[N] -> { A[i, j] -> [i, j] : 0 < i < N and j > i and 0 < j < N }

And then run isl_codegen on the result. (Ingore the second two lines of the input below, they are for advanced control of the generated code)

bazel run @isl//:isl_codegen << EOF
[N] -> { A[i, j] -> [i, j] : 0 < i < N and j > i and 0 < j < N }
{ : }
{}
EOF

for (int c0 = 1; c0 < N - 1; c0 += 1)
  for (int c1 = c0 + 1; c1 < N; c1 += 1)
    A(c0, c1);

Or reversing the order of iteration by mapping to [j, i] instead of [i, j] :

bazel run @isl//:isl_codegen << EOF
[N] -> { A[i, j] -> [j, i] : 0 < i < N and j > i and 0 < j < N }
{ : }
{}
EOF

for (int c0 = 2; c0 < N; c0 += 1)
  for (int c1 = 1; c1 < c0; c1 += 1)
    A(c1, c0);

Note how, because the constraints include $0 < i < j$, the generated code knows to start j and 2 and i at 1, as those are the smallest values satisfying the constraints. Also note how it swapped the order the induction variables are passed to the statement A when the iteration order was switched.

One more example: the produced loop nest can include if statements, or arithmetic in the final parameters passed to the statement. To show this I use an integer relation that describes a diagonal packing from an earlier article on this blog about diagonal data packing .

bazel run @isl//:isl_codegen << EOF
{ [i0, i1] -> [ct, slot] : (i0 - i1 + ct) mod 16 = 0 and (-i0 + slot) mod 16 = 0 and 0 <= i0 <= 9 and 0 <= i1 <= 15 and 0 <= ct <= 15 and 0 <= slot <= 1023 }
{ : }
{}
EOF

for (int c0 = 0; c0 <= 15; c0 += 1)
  for (int c1 = 0; c1 <= 1023; c1 += 1)
    if ((c1 + 6) % 16 >= 6)
      (c1 % 16, (c0 + c1) % 16);

Later in this article I’ll show code that can be used to generate this using the ISL C API.

Basic manipulation with the ISL C API

Hello, ISL: extracting the domain of a map

The ISL C API is quite a beast. By my count there are over 20 thousand different function declarations, and because it’s a pure-C library, they look like this

__isl_give isl_set *isl_map_domain(__isl_take isl_map *bmap);

There are types for the isl_basic_set , isl_set , isl_union_set , isl_basic_map , isl_map , and isl_union_map . 3 And the functions are named so that they start with isl_ , then the type of the main argument being operated on (e.g., map ), then a description of the operation.

So the isl_map_domain function takes as input an isl_map , which is frees before returning a pointer to an isl_set representing the subset of integer points that are in the domain of the map.

ISL internally does some reference counting, each object is associated with an isl_ctx context object that handles this. The macros __isl_give , __isl_take , and __isl_keep (which are no-ops as far as I can tell), are used to indicate to the caller whether the function takes ownership of the argument pointer ( __isl_take ), whether the caller keeps ownership ( __isl_keep ), and whether the caller is responsible for freeing the returned pointer ( __isl_give ). Even with this helpful convention, I still find myself leaking memory when using ISL, so running bazel with --config=asan is a must.

So as a hello world example, here is a C++ program that creates a context, parses a basic map from a string, and then extracts the domain and prints it.

#include "include/isl/ctx.h"
#include "include/isl/map.h"
#include "include/isl/map_type.h"
#include "include/isl/set.h"

std::string parse_map_and_extract_domain_as_string(
    isl_ctx *ctx, std::string islStr) {
  isl_set *domainMap = isl_map_domain(
        isl_map_read_from_str(ctx, islStr.c_str()));
  char *result_str = isl_set_to_str(domainMap);
  std::string result(result_str);

  free(result_str);
  isl_set_free(domainMap);
  return result;
}

And a simple binary to run it:

#include "isl_api_examples.h"
#include <iostream>
#include <string>
#include "include/isl/ctx.h"

int main() {
  std::string islMap;
  std::cout << "Enter an ISL map: ";
  std::getline(std::cin, islMap);

  isl_ctx *ctx = isl_ctx_alloc();
  std::string result = parse_map_and_extract_domain_as_string(ctx, islMap);
  std::cout << "Result: " << result << std::endl;
  isl_ctx_free(ctx);
  return 0;
}

The build file

load("@rules_cc//cc:cc_binary.bzl", "cc_binary")
load("@rules_cc//cc:cc_library.bzl", "cc_library")
load("@rules_cc//cc:cc_test.bzl", "cc_test")

cc_library(
    name = "isl_api_examples",
    srcs = ["isl_api_examples.cpp"],
    hdrs = ["isl_api_examples.h"],
    deps = ["@isl"],
)

cc_binary(
    name = "get_domain",
    srcs = ["get_domain.cpp"],
    deps = [":isl_api_examples"],
)

and running it:

$ bazel run :get_domain
Enter an ISL map: { [i] -> [j] : (-i + j) mod 7 = 0 and 0 <= i <= 20 and 0 <= j <= 20 }
Result: { [i] : 0 <= i <= 20 }

# In this next example, I replace the upper bound on j with 3, to demonstrate
# that the extracted domain really only includes the subset of i values for which
# there is a corresponding j value.
$ bazel run :get_domain
Enter an ISL map: { [i] -> [j] : (-i + j) mod 7 = 0 and 0 <= i <= 20 and 0 <= j <= 3 }
Result: { [i] : 0 <= i <= 20 and 7*floor((-1 - i)/7) <= -4 - i }

Composing maps

While the ISL API has a number of basic operations like taking unions and intersections of sets and maps, in my opinion things get interesting when you start composing maps. Mainly this is because, a general composition of two relations requires introducing existentially quantified variables, and ISL has powerful algorithms for simplifying and eliminating these to arrive at a nice representation of the composed map.

Because most of my present uses for polyhedral optimization involve analyzing memory layouts, this example will show how, if a tensor is laid out in memory according to a given integer relation, then a modification to the tensor in the program can be composed with the original layout to get a descriptor of the new layout.

For example, suppose we have a 3-dimensional tensor, and a Presburger relation that describes a column-major layout in vector registers of size $16$.

{
    S[i, j, k] -> [reg, lane] :
        0 <= i < 32
    and 0 <= j < 64
    and 0 <= k < 1024
    and 0 <= lane < 16
    and i + j * 32 + k * 32 * 64 = 16 * reg + lane
}

Now suppose the program includes an operation that implements a transposition by swapping axes i and j . Subsequent analysis after this step would need to incorporate this operation into a new layout. This can be done by precomposing the memory layout relation with the following relation:

{ S[i, j, k] -> [j, i, k] }

Which is shorthand for

{ S[i0, j0, k0] -> [i1, j1, k1] : i0 = j1 and i1 = j0 and k0 = k1 }

To do this in ISL, we will peek behind the curtain a bit to see how the underlying constraints are organized as a matrix of coefficients of equalities and inequalities.

The declaration: parse a layout from a string, then precompose it with a relation that swaps two specified domain dimensions.

std::string precompose_transposition(isl_ctx* ctx, std::string starting_layout,
                                     int dim1, int dim2);

The implementation as a whole, then line by line.

std::string precompose_transposition(isl_ctx* ctx, std::string starting_layout,
                                     int dim1, int dim2) {
  isl_map* layout = isl_map_read_from_str(ctx, starting_layout.c_str());
  isl_set* domain = isl_map_domain(isl_map_copy(layout));

  isl_map* domain_map =
      isl_map_from_domain_and_range(isl_set_copy(domain), domain);
  isl_space* transpose_space = isl_map_get_space(domain_map);
  isl_map_free(domain_map);
  unsigned num_domain_vars = isl_space_dim(transpose_space, isl_dim_in);

  isl_mat* eq_mat =
      create_empty_constraint_matrix(transpose_space, num_domain_vars);
  isl_mat* ineq_mat = create_empty_constraint_matrix(transpose_space, 0);

  // Column order is: [domain_vars, range_vars, div_vars, symbol_vars, constant]
  // so the offset between a domain variable and its corresponding range
  // variable is num_domain_vars
  for (int domain_var = 0; domain_var < num_domain_vars; ++domain_var) {
    // First constraint: domain_var dim1 - range_var dim2 = 0
    if (domain_var == dim1) {
      isl_mat_set_element_si(eq_mat, /*row=*/dim1, /*col=*/dim1, 1);
      isl_mat_set_element_si(eq_mat, /*row=*/dim1,
                             /*col=*/num_domain_vars + dim2, -1);
      continue;
    }

    if (domain_var == dim2) {
      // Second constraint: domain_var dim2 - range_var dim1 = 0
      isl_mat_set_element_si(eq_mat, /*row=*/dim2, /*col=*/dim2, 1);
      isl_mat_set_element_si(eq_mat, /*row=*/dim2,
                             /*col=*/num_domain_vars + dim1, -1);
      continue;
    }

    // Otherwise, domain_var d - range_var d = 0
    isl_mat_set_element_si(eq_mat, /*row=*/domain_var, /*col=*/domain_var, 1);
    isl_mat_set_element_si(eq_mat, /*row=*/domain_var,
                           /*col=*/num_domain_vars + domain_var, -1);
  }

  isl_map* transpose =
      isl_map_from_basic_map(isl_basic_map_from_constraint_matrices(
          transpose_space, eq_mat, ineq_mat, isl_dim_in, isl_dim_out,
          isl_dim_div, isl_dim_param, isl_dim_cst));

  isl_map* composed = isl_map_apply_range(transpose, layout);

  char* result_str = isl_map_to_str(composed);
  std::string result(result_str);

  free(result_str);
  isl_map_free(composed);
  return result;
}

After parsing the layout, the first step is to create an underlying space for the new Presburger map which uses the exact same domain variables as the input layout. Most of this step is jumping through ISL API hoops and managing memory.

isl_set* domain = isl_map_domain(isl_map_copy(layout));
isl_map* domain_map =
  isl_map_from_domain_and_range(isl_set_copy(domain), domain);
isl_space* transpose_space = isl_map_get_space(domain_map);
isl_map_free(domain_map);

Next we create new empty constraint matrices to store the constraints that will represent the transposition.

isl_mat* eq_mat = create_empty_constraint_matrix(transpose_space, num_domain_vars);
isl_mat* ineq_mat = create_empty_constraint_matrix(transpose_space, 0);

This uses a helper function that allocates a matrix and zero-initializes all entries.

__isl_give isl_mat* create_empty_constraint_matrix(__isl_keep isl_space* space,
                                                   unsigned num_constraints) {
  unsigned num_in = isl_space_dim(space, isl_dim_in);
  unsigned num_out = isl_space_dim(space, isl_dim_out);
  unsigned num_div = isl_space_dim(space, isl_dim_div);
  unsigned num_param = isl_space_dim(space, isl_dim_param);

  // The layout of the columns is: [domain_vars, range_vars, div_vars,
  // symbol_vars, constant]
  unsigned num_cols = num_in + num_out + num_div + num_param + 1;

  isl_mat* mat =
      isl_mat_alloc(isl_space_get_ctx(space), num_constraints, num_cols);

  for (int i = 0; i < isl_mat_rows(mat); ++i)
    for (int j = 0; j < isl_mat_cols(mat); ++j)
      isl_mat_set_element_si(mat, /*row=*/i, /*col=*/j, 0);
  return mat;
}

Next we define the actual constraints by setting individual coefficients. Recall, our desired relation is:

{ S[i0, j0, k0] -> [i1, j1, k1] : i0 = j1 and i1 = j0 and k0 = k1 }

And so we want one constraint per (domain, range) variable pair. Iterating over all the domain variables, we have two constraints special for the transposed dimensions:

// First constraint: domain_var dim1 - range_var dim2 = 0
isl_mat_set_element_si(eq_mat, /*row=*/0, /*col=*/dim1, 1);
isl_mat_set_element_si(eq_mat, /*row=*/0, /*col=*/num_domain_vars + dim2, -1);

// Second constraint: domain_var dim2 - range_var dim1 = 0
isl_mat_set_element_si(eq_mat, /*row=*/1, /*col=*/dim2, 1);
isl_mat_set_element_si(eq_mat, /*row=*/1, /*col=*/num_domain_vars + dim1, -1);

And one that equates the domain and range variables by default.

// Otherwise, domain_var d - range_var d = 0
isl_mat_set_element_si(eq_mat, /*row=*/domain_var, /*col=*/domain_var, 1);
isl_mat_set_element_si(eq_mat, /*row=*/domain_var,
                       /*col=*/num_domain_vars + domain_var, -1);

The constraint matrix specifies one constraint per row, one variable per column, and each row is a linear combination of the variables and coefficients, with the last column representing the constant term.

Finally, we construct the relation from the constraint matrices and call apply_range on the result. The construction allows us to specify the intended order of the columns in relation to the variables (e.g., the columns have domain variables first for isl_dim_in , then range variables ( isl_dim_out ), then other variables that are not relevant here.

isl_map* transpose =
  isl_map_from_basic_map(isl_basic_map_from_constraint_matrices(
      transpose_space, eq_mat, ineq_mat, isl_dim_in, isl_dim_out,
      isl_dim_div, isl_dim_param, isl_dim_cst));

isl_map* composed = isl_map_apply_range(transpose, layout);

The apply_range call takes the first argument, and composes its range with the domain of the second argument to construct a composite relation. The rest of the code converts the result to a string and frees memory as before.

While the code is mucky, it exposes one critical question to understand how ISL works internally: how does this representation of constraints as linear combinations of variables permit us to represent modular arithmetic, and floor and ceiling division? The answer is in the div dimension. In ISL, a div dimension is a special variable, always existentially quantified, that is used to represent the intermediate computation of a division.

For example, if you print the constraint matrix for a simple set like { [i] : 5 <= i } (I included a binary and API example to dump the raw constraint matrix for a basic_set ), you get

bazel run dump_constraints << EOF
{ [i] : i = 5 }
EOF

Equality Matrix:

[[1,-5]]

The sole equality here is signifying $1 \cdot i - 5 = 0$, as the first column is the coefficient of i and the last column is the constant term.

However, if you dump a more complicated system that has modular arithmetic, like { [i] : (i % 2) = 1 } , you will get the following equality constraint:

bazel run dump_constraints << EOF
{ [i] : (i % 2) = 1 }
EOF

Equality Matrix:

[[-1,2,-1]]

The last column is still the constant term, and the second-to-last column is a new “div” variable $d$ representing the result of an integer division. $d$ is existentially quantified, so the formula is really

\[ \exists d \in \mathbb{Z} : -1 \cdot i + 2 \cdot d - 1 = 0 \]

Rearranging and taking both sides of the equation mod 2 recovers the original formula. ISL does a similar trick for floor and ceiling division; all quasi-affine expressions can reduce such operations to a system of purely linear equalities and inequalities, where the intermediate divisors are represented as existentially quantified variables.

Manually adding your own div/mod variables to the constraint system is a bit of a pain. ISL has a helper API for constructing quasi-affine expressions at include/isl/aff.h , but for brevity I will defer the exploration of that API to a future article. 4

Enumerating maps

Next I’ll show another debugging tool: enumerating and dumping all the points in a basic map or set.

The key API is isl_set_foreach_point , which takes a callback function pointer and calls it for each point in the set. In my example, the callback inserts points into a struct that is aware of the prior domain/range breakdown and is able to split a point into domain and range points.

struct PointPairCollector {
  using Point = std::vector<int64_t>;
  std::vector<std::pair<Point, Point>> points;
  int domain_dims;
  int range_dims;

  PointPairCollector(int domain_dims, int range_dims)
      : domain_dims(domain_dims), range_dims(range_dims) {}
};

Then the function pointer to populate it:

isl_stat pointPairCallback(__isl_take isl_point* pnt, void* user) {
  PointPairCollector* collector = static_cast<PointPairCollector*>(user);

  isl_space* space = isl_point_get_space(pnt);
  isl_space_free(space);

  std::vector<int64_t> domainPoint(collector->domain_dims);
  std::vector<int64_t> rangePoint(collector->range_dims);

  // Extract domain coordinates
  for (int i = 0; i < collector->domain_dims; i++) {
    isl_val* coord = isl_point_get_coordinate_val(pnt, isl_dim_set, i);
    if (isl_val_is_int(coord)) {
      domainPoint[i] = isl_val_get_num_si(coord);
    }
    isl_val_free(coord);
  }

  // Extract range coordinates
  for (int i = 0; i < collector->range_dims; i++) {
    isl_val* coord = isl_point_get_coordinate_val(pnt, isl_dim_set,
                                                  collector->domain_dims + i);
    if (isl_val_is_int(coord)) {
      rangePoint[i] = isl_val_get_num_si(coord);
    }
    isl_val_free(coord);
  }

  collector->points.emplace_back(domainPoint, rangePoint);
  isl_point_free(pnt);
  return isl_stat_ok;
}

And the routine to combine them:

void enumeratePoints(isl_basic_map *bmap,
                     PointPairCollector& collector) {
  isl_set* set = isl_set_from_basic_set(isl_basic_map_wrap(bmap));
  isl_set_foreach_point(set, &pointPairCallback, &collector);
  isl_set_free(set);
}

AST generation

Next I’ll show how to use the C API to generate the loop nest as a C string. 5

std::string generate_loop_nest_as_c_str(isl_basic_map *bmap) {
  auto *ctx = isl_basic_map_get_ctx(bmap);
  isl_union_map *schedule =
      isl_union_map_from_basic_map(isl_basic_map_copy(bmap));

  // Context and options are intentionally empty.
  isl_set *context = isl_set_universe(isl_space_params_alloc(ctx, 0));
  isl_union_map *options = isl_union_map_empty(isl_space_params_alloc(ctx, 0));

  // Build the AST
  isl_ast_build *build = isl_ast_build_from_context(context);
  build = isl_ast_build_set_options(build, options);
  isl_ast_node *tree = isl_ast_build_node_from_schedule_map(build, schedule);
  isl_ast_build_free(build);

  char *cStr = isl_ast_node_to_C_str(tree);
  std::string actual = std::string(cStr);
  free(cStr);
  isl_ast_node_free(tree);

  // Add a leading newline for ease of comparison with multiline strings.
  return actual.insert(0, "\n");
}

The key API is isl_ast_build_node_from_schedule_map , which takes a map whose [domain] -> [range] part specifies the iteration order as mentioned above. Then isl_ast_build_node_from_schedule_map cosntructs an isl_ast_node , which is a classical AST structure with nodes for loops, if statements, etc. The isl_ast_node_to_C_str is a helper that converts the AST to a C string, but you can walk the AST yourself if you want to, say, generate an MLIR loop nest from the AST. You can see how we do this in the HEIR project here .

Interoperating with MLIR’s FPL

The MLIR compiler project has its own internal implementation of a subset of the core algorithms of ISL, called the Fast Presburger Library (FPL). However, because it’s incomplete, there are some features of ISL, including code generation and advanced simplification algorithms, that our group at the HEIR project have found critical.

So until FPL is a more complete port of ISL, we added an interoperability layer between MLIR and ISL. 6 It can convert between MLIR’s IntegerRelation and ISL’s basic_map , and it does so simply by copying the underlying constraint matrices between the two formats. This is part of why I avoided the higher-level ISL APIs in my examples above, so that this conversion layer would be easier to understand.

Adding MLIR as a dependency to the code repo for this project would incur massive build times for little benefit, so instead I’ll point to the code and show the core snippets here.

Because both libraries use the same representation internally, a set of constraint matrices for equalities and inequalities, the conversion requires simply copying the matrices back and forth.

Converting a basic_map to an IntegerRelation :

presburger::IntegerRelation convertBasicMapToRelation(
    isl_basic_map* bmap) {
  // Variables in an IntegerRelation are stored in the order
  //
  //   Domain, Range, Symbols, Locals, Constant
  //
  // In ISL this corresponds to
  //   In,     Out,   Param,   Div,    Cst
  isl_mat* eqMat = isl_basic_map_equalities_matrix(
      bmap, isl_dim_in, isl_dim_out, isl_dim_param, isl_dim_div, isl_dim_cst);
  isl_mat* ineqMat = isl_basic_map_inequalities_matrix(
      bmap, isl_dim_in, isl_dim_out, isl_dim_param, isl_dim_div, isl_dim_cst);

  PresburgerSpace fplSpace = PresburgerSpace::getRelationSpace(
      /*numDomain=*/isl_basic_map_dim(bmap, isl_dim_in),
      /*numRange=*/isl_basic_map_dim(bmap, isl_dim_out),
      /*numSymbols=*/isl_basic_map_dim(bmap, isl_dim_param),
      /*numLocals=*/isl_basic_map_dim(bmap, isl_dim_div));
  IntegerRelation result(
      /*numReservedInequalities=*/isl_mat_rows(ineqMat),
      /*numReservedEqualities=*/isl_mat_rows(eqMat),
      /*numReservedCols=*/isl_mat_cols(eqMat), fplSpace);

  populateConstraints(result, eqMat, /*eq=*/true);
  populateConstraints(result, ineqMat, /*eq=*/false);

  isl_mat_free(eqMat);
  isl_mat_free(ineqMat);

  return result;
}

Then populateConstraints iterates and copies the coefficients one by one.

void populateConstraints(IntegerRelation& rel, __isl_keep isl_mat* mat,
                         bool eq) {
  unsigned numRows = isl_mat_rows(mat);
  unsigned numCols = isl_mat_cols(mat);

  for (unsigned i = 0; i < numRows; i++) {
    SmallVector<int64_t, 8> row;
    for (unsigned j = 0; j < numCols; j++) {
      isl_val* val = isl_mat_get_element_val(mat, i, j);
      row.push_back(isl_val_get_num_si(val));
      isl_val_free(val);
    }

    if (eq) {
      rel.addEquality(row);
    } else {
      rel.addInequality(row);
    }
  }
}

Similary, the converse conversion

__isl_give isl_mat* createConstraintRows(__isl_keep isl_ctx* ctx,
                                         const IntegerRelation& rel,
                                         bool isEq) {
  unsigned numRows = isEq ? rel.getNumEqualities() : rel.getNumInequalities();
  unsigned numDimVars = rel.getNumDimVars();
  unsigned numLocalVars = rel.getNumLocalVars();
  unsigned numSymbolVars = rel.getNumSymbolVars();
  unsigned numCols = rel.getNumCols();
  isl_mat* mat = isl_mat_alloc(ctx, numRows, numCols);

  for (unsigned i = 0; i < numRows; i++) {
    auto row = isEq ? rel.getEquality(i) : rel.getInequality(i);

    // Dims stay at the same positions.
    for (unsigned j = 0; j < numDimVars; j++)
      mat = isl_mat_set_element_si(mat, i, j, (int64_t)row[j]);
    // Output locals before symbols.
    for (unsigned j = 0; j < numLocalVars; j++)
      mat = isl_mat_set_element_si(
          mat, i, j + numDimVars, (int64_t)row[j + numDimVars + numSymbolVars]);
    // Output symbols in the end.
    for (unsigned j = 0; j < numSymbolVars; j++)
      mat = isl_mat_set_element_si(mat, i, j + numDimVars + numLocalVars,
                                   (int64_t)row[j + numDimVars]);
    // Finally output the constant.
    mat = isl_mat_set_element_si(
         mat, i, numCols - 1, (int64_t)row[numCols - 1]);
  }
  return mat;
}

__isl_give isl_basic_map* convertRelationToBasicMap(const IntegerRelation& rel,
                                                    __isl_keep isl_ctx* ctx) {
  isl_mat* eqMat = createConstraintRows(ctx, rel, /*isEq=*/true);
  isl_mat* ineqMat = createConstraintRows(ctx, rel, /*isEq=*/false);
  isl_space* space =
      isl_space_alloc(ctx, rel.getNumSymbolVars(), rel.getNumDomainVars(),
                      rel.getNumRangeVars());
  return isl_basic_map_from_constraint_matrices(
      space, eqMat, ineqMat, isl_dim_in, isl_dim_out, isl_dim_div,
      isl_dim_param, isl_dim_cst);
}

Next steps

In future articles I’d like to take a closer look at some of the core algorithms underlying polyhedral optimization, and ISL in particular. Many of these algorithms utilize some deep work in convex optimization, including simplex-like methods, Gomory cuts, Fourier-Motzkin elimination, and basis reduction. Chapter 2 of the ISL reference documentation gives a taste of what is involved.

Thanks to Asra Ali for her feedback on a draft of this article.



Extism - make all software programmable. Extend from within

Lobsters
extism.org
2025-10-20 07:04:00
Comments...
Original Article

Easy to Use

Leveraging the power and portability of WebAssembly, Extism is an off-the-shelf plug-in system just a library import away. Ship in days, not weeks or months.

Secure by Default

Don't worry about what some plug-in code might do to your program. Extism is built with security as a core principle, and fully sandboxes the execution of all plug-in code.

Available Everywhere

Our flexible architecture uniquely allows Extism to run almost anywhere, with idiomatic Host SDKs for Python, Node, Ruby, Rust, Go, PHP, C/C++, OCaml, & more.

AI-generated ‘poverty porn’ fake images being used by aid agencies

Guardian
www.theguardian.com
2025-10-20 07:00:36
Exclusive: Pictures depicting the most vulnerable and poorest people are being used in social media campaigns in the sector, driven by concerns over consent and cost AI-generated images of extreme poverty, children and sexual violence survivors are flooding stock photo sites and increasingly being ...
Original Article

AI-generated images of extreme poverty, children and sexual violence survivors are flooding stock photo sites and increasingly being used by leading health NGOs, according to global health professionals who have voiced concern over a new era of “poverty porn”.

“All over the place, people are using it,” said Noah Arnold, who works at Fairpicture, a Swiss-based organisation focused on promoting ethical imagery in global development. “Some are actively using AI imagery, and others, we know that they’re experimenting at least.”

Arsenii Alenichev, a researcher at the Institute of Tropical Medicine in Antwerp studying the production of global health images, said: “The images replicate the visual grammar of poverty – children with empty plates, cracked earth, stereotypical visuals.”

Alenichev has collected more than 100 AI-generated images of extreme poverty used by individuals or NGOs as part of social media campaigns against hunger or sexual violence. Images he shared with the Guardian show exaggerated, stereotype-perpetuating scenes: children huddled together in muddy water; an African girl in a wedding dress with a tear staining her cheek. In a comment piece published on Thursday in the Lancet Global Health, he argues these images amount to “poverty porn 2.0”.

While it is hard to quantify the prevalence of the AI-generated images, Alenichev and others say their use is on the rise, driven by concerns over consent and cost. Arnold said that US funding cuts to NGO budgets had made matters worse.

“It is quite clear that various organisations are starting to consider synthetic images instead of real photography, because it’s cheap and you don’t need to bother with consent and everything,” said Alenichev.

AI-generated images of extreme poverty now appear in their dozens on popular stock photo sites, including Adobe Stock Photos and Freepik , in response to queries such as “poverty”. Many bear captions such as “Photorealistic kid in refugee camp”; “Asian children swim in a river full of waste”; and “Caucasian white volunteer provides medical consultation to young black children in African village”. Adobe sells licences to the last two photos in that list for about £60.

“They are so racialised. They should never even let those be published because it’s like the worst stereotypes about Africa, or India, or you name it,” said Alenichev.

Joaquín Abela, CEO of Freepik, said the responsibility for using such extreme images lay with media consumers, and not with platforms such as his. The AI stock photos, he said, are generated by the platform’s global community of users, who can receive a licensing fee when Freepik’s customers choose to buy their images.

Freepik had attempted to curb biases it had found in other parts of its photo library, he said, by “injecting diversity” and trying to ensure gender balance into the photos of lawyers and CEOs hosted on the site.

But, he said, there was only so much his platform could do. “It’s like trying to dry the ocean. We make an effort, but in reality, if customers worldwide want images a certain way, there is absolutely nothing that anyone can do.”

A series of photographs showing black and brown-skinned people living in poverty in what appear to be refugee camps
A screen grab showing AI-generated images of ‘poverty’ on a stock photo site. Pictures such as these have raised concerns over biased imagery and stereotypes. Illustration: Freepik

In the past, leading charities have used AI-generated images as part of their communications strategies on global health. In 2023, the Dutch arm of UK charity Plan International released a video campaign against child marriage containing AI-generated images of a girl with a black eye, an older man and a pregnant teenager.

Last year, the UN posted a video on YouTube with AI-generated “re-enactments” of sexual violence in conflict, which included AI-generated testimony from a Burundian woman describing being raped by three men and left to die in 1993 during the country’s civil war. The video was removed after the Guardian contacted the UN for comment.

A UN Peacekeeping spokesperson said: “The video in question, which was produced over a year ago using a fast-evolving tool, has been taken down, as we believed it shows improper use of AI, and may pose risks regarding information integrity, blending real footage and near-real artificially generated content.

“The United Nations remains steadfast in its commitment to support victims of conflict-related sexual violence, including through innovation and creative advocacy.”

Arnold said the rising use of these AI images comes after years of debate in the sector around ethical imagery and dignified storytelling about poverty and violence. “Supposedly, it’s easier to take ready-made AI visuals that come without consent, because it’s not real people.”

Kate Kardol, an NGO communications consultant, said the images frightened her, and recalled earlier debates about the use of “poverty porn” in the sector.

“It saddens me that the fight for more ethical representation of people experiencing poverty now extends to the unreal,” she said.

Generative AI tools have long been found to replicate – and at times exaggerate – broader societal biases . The proliferation of biased images in global health communications may make the problem worse, said Alenichev, because the images could filter out into the wider internet and be used to train the next generation of AI models, a process which has been shown to amplify prejudice .

A spokesperson for Plan International said the NGO had, as of this year: “adopted guidance advising against using AI to depict individual children”, and said the 2023 campaign had used AI-generated imagery to safeguard “the privacy and dignity of real girls”.

Adobe declined to comment.

Birger Schacht: A plea for

PlanetDebian
bisco.org
2025-10-20 06:28:51
A couple of weeks ago there was an article on the Freexian blog about Using JavaScript in Debusine without depending on JavaScript. It describes how JavaScript is used in the Debusine Django app, namely “for progressive enhancement rather than core functionality”. This is an approach I also follow w...
Original Article

A couple of weeks ago there was an article on the Freexian blog about Using JavaScript in Debusine without depending on JavaScript . It describes how JavaScript is used in the Debusine Django app, namely “for progressive enhancement rather than core functionality”. This is an approach I also follow when implementing web interfaces and I think developments in web technologies and standardization in recent years have made this a lot easier.

One of the examples described in the post, the “Bootstrap toast” messages, was something that I implemented myself recently, in a similar but slightly different way.

In the main app I develop for my day job we also use the Bootstrap framework . I have also used it for different personal projects (for example the GSOC project I did for Debian in 2018, was also a Django app that used Bootstrap ). Bootstrap is still primarily a CSS framework, but it also comes with a JavaScript library for some functionality. Previous versions of Bootstrap depended on jQuery, but since version 5 of Bootstrap, you don’t need jQuery anymore. In my experience, two of the more commonly used JavaScript utilities of Bootstrap are modals (also called lightbox or popup, they are elements that are displayed “above” the main content of a website) and toasts (also called alerts, they are little notification windows that often disappear after a timeout). The thing is, Bootstrap 5 was released in 2021 and a lot has happened since then regarding web technologies. I believe that both these UI components can nowadays be implemented using standard HTML5 elements.

An eye opening talk I watched was Stop using JS for that from last years JSConf(!). In this talk the speaker argues that the Rule of least power is one of the core principles of web development, which means we should use HTML over CSS and CSS over JavaScript. And the speaker also presents some CSS rules and HTML elements that added recently and that help to make that happen, one of them being the dialog element :

The <dialog> HTML element represents a modal or non-modal dialog box or other interactive component, such as a dismissible alert, inspector, or subwindow.

The Dialog element at MDN

The baseline for this element is “widely available”:

This feature is well established and works across many devices and browser versions. It’s been available across browsers since March 2022.

The Dialog element at MDN

This means there is an HTML element that does what a modal Bootstrap does!

Once I had watched that talk I removed all my Bootstrap modals and replaced them with HTML <dialog> elements (JavaScript is still needed to .show() and .close() the elements, though, but those are two methods instead of a full library). This meant not only that I replaced code that depended on an external library, I’m now also a lot more flexible regarding the styling of the elements.

When I started implementing notifications for our app, my first approach was to use Bootstrap toasts, similar to how it is implemented in Debusine. But looking at the amount of HTML code I had to write for a simple toast message, I thought that it might be possible to also implement toasts with the <dialog> element. I mean, basically it is the same, only the styling is a bit different. So what I did was that I added a #snackbar area to the DOM of the app. This would be the container for the toast messages. All the toast messages are simply <dialog> elements with the open attribute, which means that they are visible right away when the page loads.

<div id="snackbar">
  {% for message in messages %}
    <dialog class="mytoast alert alert-{{ message.tags }}" role="alert" open>
      {{ message }}
    </dialog>
  {% endfor %}
</div>

This looks a lot simpler than the Bootstrap toasts would have.

To make the <dialog> elements a little bit more fancy, I added some CSS to make them fade in and out:

.mytoast {
    z-index: 1;
    animation: fadein 0.5s, fadeout 0.5s 2.6s;
}

@keyframes fadein {
    from {
        opacity: 0;
    }

    to {
        opacity: 1;
    }
}

@keyframes fadeout {
    from {
        opacity: 1;
    }

    to {
        opacity: 0;
    }
}

To close a <dialog> element once it has faded away, I had to add one JavaScript event listener:

window.addEventListener('load', () => {
    document.querySelectorAll(".mytoast").forEach((element) => {
        element.addEventListener('animationend', function(e) {
            e.animationName == "fadeout" && element.close();
        });
    });
});

(If one would want to use the same HTML code for both script and noscript users, then the CSS should probably adapted: it fades away and if there is no JavaScript to close the element, it stays visible after the animation is over. A solution would for example be to use a close button and for noscript users simply let it stay visible - this is also what happens with the noscript messages in Debusine).

So there are many “new” elements in HTML and a lot of “new” features of CSS. It makes sense to sometimes ask ourselves if instead of the solutions we know (or what a web search / some AI shows us as the most common solution) there might be some newer solution that was not there when the first choice was created. Using standardized solutions instead of custom libraries makes the software more maintainable. In web development I also prefer standardized elements over a third party library because they have usually better accessibility and UX.

In How Functional Programming Shaped (and Twisted) Frontend Development the author writes:

Consider the humble modal dialog. The web has <dialog>, a native element with built-in functionality: it manages focus trapping, handles Escape key dismissal, provides a backdrop, controls scroll-locking on the body, and integrates with the accessibility tree. It exists in the DOM but remains hidden until opened. No JavaScript mounting required.

[…]

you’ve trained developers to not even look for native solutions. The platform becomes invisible. When someone asks “how do I build a modal?”, the answer is “install a library” or “here’s my custom hook,” never “use <dialog>.”

Ahmad Alfy

Space Elevator

Hacker News
neal.fun
2025-10-20 05:42:08
Comments...

The Red Scare Is American Past and Present

Portside
portside.org
2025-10-20 05:26:18
The Red Scare Is American Past and Present Ira Mon, 10/20/2025 - 00:26 ...
Original Article

In his firsthand account of the 1949 Peekskill Riot, the two-day frenzy of state-sanctioned mob violence against a left-wing music festival headlined by Paul Robeson, writer Howard Fast mostly describes his disbelief. He was invited to help first with the planning and then with the defense of the concert, as mobs of vigilantes with clubs, knives, and guns shut down the performances, violently assaulted many of the attendees, and forced Robeson into hiding. Fast, in the face of mobs shouting racist and antisemitic slogans, believed they could bring Robeson back a week later, with a cordon of United Electrical and Longshoremen union members ringing the concert. The exit from the fairgrounds became a living hell: a gauntlet of thrown rocks, smashed windows, overturned cars, and concertgoers beaten within inches of their lives, including the first Black military aviator in World War I, Eugene Bullard.

After driving through a hail of rocks and slurs, Fast recorded in his book Peekskill, USA his stunned disbelief at seeing the glistening, wet pavement around the burning metal carcasses of smashed-up cars. At first thinking the slick rivulets were gasoline or oil, he realized the streams were blood from the fleeing concertgoers. He recalls a sense of disassociated unreality: this could not, he thought, be happening. Remembering the conversations with others after the violence over the weekend — no strangers to the Left or struggle — “their talk was uneasy and troubled. They were trying to understand what had happened, what had changed . . . a pervading difference had come to the place; they had to know what that difference was.”

Rioters clash at Paul Robeson’s 1949 concert in Peekskill, New York. (Getty Images)

I have thought of Fast’s account as I assume many, like myself, are in an equal state of stunned disbelief at the rapidly unfolding repression of the second Donald Trump administration. Every week brings crises that would be remarkable if they occurred only once in a decade: the calling of federal troops into major US cities, live assassination of migrants in international waters, labeling of “anti-fascist” organizations as domestic terrorists, the abduction and deportation of students for acts of constitutionally protected speech, the creation of politically targeted “antisemitism” lists of pro-Palestine faculty, ICE agents swarming major cities and rounding up migrants in the street, the defunding of major private research universities, the gutting of federal agencies, the embrace of dangerous quack medical science.

Like Fast reading reports that the local American Legion were planning to assassinate Paul Robeson, or even seeing literal blood in the streets, the disorientation of the violence is not only the somatic shock of that violence: it is that, as Fast relates, a short time before, such things did not seem possible.

What Fast was witnessing was the early unfolding of the Second Red Scare, the decade-long suppression, arrest, deportation, terrorization, and occasional public execution of Communists and other leftists in the United States. As we enter something like another red scare, one that seemed to many liberals and even leftists impossible to conceive of months before its machinery started, it’s helpful to remember what we are analogizing.

Historian Ellen Schrecker refers to the era sometimes known as “McCarthyism” (a label she and others dispute) as “the most widespread and longest-lasting wave of political repression in American history.” The Second Red Scare started well before and lasted long after the demise of the demagogue from Wisconsin, Senator Joseph McCarthy; what is at stake in his memorialization is the scope and scale of the repression.

For many, the Second Red Scare was a minor incident, a bump on the road toward achieving and fulfilling a twentieth-century liberal consensus marked by the triumphs of civil rights and the feminist and LGBT movements a decade later. Indeed, many academic histories of “postwar liberalism” barely mention the Second Red Scare. In mass culture, even when it is the focus of a film like Trumbo or Goodnight and Good Luck , one gets the sense that it mainly involved the persecution of a few Communists in the film industry — tragic perhaps, but with little lasting impact on the wider American culture and American politics. The recent TV series For All Mankind presented the Second Red Scare as primarily a civil rights issue for queer federal workers, which it certainly was — and yet, not a wave of political repression of which anti-queer violence was one form of violence among many.

By official counting, two people were executed by the state, several hundred academics were fired, several thousand went to prison or were deported, and several tens of thousands lost their jobs as state or federal workers. While the Red Scare “was not Nazi Germany,” to quote Schrecker, that one even needs to declaim it as such is telling. As Herbert Marcuse wrote in the last years of his life, the Red Scare inaugurated “a new stage of development” in the “Western world,” one that echoes the “horrors of the Nazi regime,” a state of “permanent counter-revolution” against “every thing” that is “called ‘communist.'”

Left writers at the time frequently made the analogy between the Second Red Scare and fascism. A popular pamphlet released by the left-wing Pacific Publishing claimed that Je McCarthy was “spearhead of fascism” and “on the path of Hitler.” Jewish Life was even less tentative: “McCarthyism is fascism.” Mike Gold called it “Nazi America.” That its deatho toll was nowhere near classically fascist regimes does not mean the Second Red Scare’s targets and aims were not the same: to crush the Left and, especially, crush any possible alternative to capitalism or American global hegemony. If we want to understand how we arrived in this authoritarian moment in 2025, we need to understand one of the central pathways that brought us here, the Second Red Scare.

An American Night

If the Red Scare had only fired, imprisoned, and publicly executed members of the Communist Party, it would have been enough to dramatically alter the terrain of politics in the United States. As much as the Communist Party is remembered for some unsavory positions, from the support of the Nazi-Soviet Pact to turning around just over a year later to support the US government’s “no strike pledge” during World War II to defense of Joseph Stalin, the CPUSA was, in the words of historian Michael Denning, the “most important left-wing political organization of the Popular Front era.”

Without many of the party’s key campaigns and coalitions, it is very possible the 1930s in the United States would have looked less like the New Deal and more like Peron’s Argentina or Franco’s Spain: there were not only real far-right movements in the United States, many of the elite business interests were hostile to even the program of social reforms proposed by President Franklin Delano Roosevelt. From organizing marches of the unemployed in the early 1930s, the defense of nine Black youths falsely accused of rape in Scottsboro, Alabama, to forming the backbone of early Congress of Industrial Organizations, as one labor organizer put it, “social security, unemployment insurance, early de-segregation orders were the direct result of Communist Party’s organizing.”

Left to right: Some of the top leaders of the Communist Party USA swept up in the 1948 Red Scare dragnet descend the steps of the Federal Courthouse in New York: William Z. Foster, Benjamin J. Davis, Eugene Dennis, Henry Winston, John Williamson, and Jack Stachel. | People’s World Archives

Yet the real effect of the Second Red Scare went far beyond the suppression of active Party members and other Marxists. Paul Robeson, C.L.R. James, W.E.B. Du Bois, Dorothy Healey, Mike Gold, John Garfield, William Patterson, Richard Wright, Arthur Miller, Leonard Bernstein, Herbert Aptheker, and Claudia Jones were only some of the artists and intellectuals who were deported, lost their jobs, fled the country, had their passports revoked, and/or were jailed under the Smith Act.

And numerous popular, populist civil rights and labor organizations that were either led by Communists or loosely affiliated with the CPUSA as “front” organizations were banned or hemorrhaged members, including a broad base of noncommunists, from the Council on African Affairs, the Civil Rights Congress, The Committee for the Protection of the Foreign Born, the anti-Zionist Jewish People’s Fraternal Order, the Yiddish language newspaper Morgen Freiheit , anti-fascist organizations such as the American League against War and Fascism and the Hollywood Anti-Nazi League (later American Peace Mobilization), even early environmentalist organizations such as Friends of the Earth.

Such broad-based organizations made connections among anti-fascist, anti-racist and ecological thinking in a socialist framework. Fighting capitalism was to fight racism, and vice versa. The liberal, national, and often pro-business framework of the post–Second Red Scare early civil rights movement looked very different from the politics of the Civil Rights Congress or the National Negro Congress.

That it took until the late 1960s for organizations such as Students for a Democratic Society ( SDS ) or the Student Non-Violent Coordinating Committee ( SNCC ) to connect racism, imperialism, and capitalism together suggests how much the later movements were impacted by the absence of a strong, already-existing Marxist left. It’s an open question how much of the later fracturing of the New Left might have owed to the length of time it took to develop such keen intersectional thinking. Movements of the post–New Left fractured over questions of race as against class; Marxists of the Popular Front era often saw these as co-constitutive.

Anatomy of a Curtain Call

While the Second Red Scare touched nearly every facet of American life, from small-town Parent Teacher Association meetings, to the state department, to neighborhood foreign language clubs, to arson and vigilante campaigns waged against union halls and socialist summer camps, one story perhaps like no other encapsulates the level of coordination among state, civic, and cultural institutions to censor and destroy the Left and eradicate wholesale all left cultural or political expression: the suppression of a single film, Salt of the Earth .

Salt was created by blacklisted filmmakers Herbert Biberman, Michael Wilson, and Paul Jarrico, all of whom lost their jobs (and, in Biberman’s case, spent a year in jail under the Smith Act). They formed their own film company in response to their newfound unemployment in the hopes they might be able to secure private funding to make progressive films. While they considered several biographical plots — John Brown’s raid on Harper’s Ferry, a single mother who lost her children after being investigated by the House Un-American Activities Committee (HUAC) — when Jarrico witnessed a miner’s strike in New Mexico by a left-led, mostly Chicano union fighting against a racist pay gap, he knew he found his “story.”

The union, Mine-Mill Local 890, faced a Taft-Hartley injunction that forbade picketing by striking miners — yet did not include the miners’ wives. Taft-Hartley was part of a panoply of lawfare, key to the Second Red Scare, directed at curtailing strikes, firing Communist union officials, and ending the ability for unions to act in solidarity with one another through a ban on “secondary boycotts.” While Mine-Mill did not defeat Taft-Hartley, in a move prescient of the later feminist movement, a picket line of miners’ wives took over the strike, repeatedly chasing off waves of strike breakers. The union crossed the gender line and proved that unions are not only organizations that represent their own workers, but entire communities.

A group of miners are depicted in this still from the 1954 film Salt of the Earth. (Paul Jarrico / Independent Productions)

When Biberman, Jarrico, and Wilson wrote the script, they also submitted it to the union for democratic review by members and, in a remarkable worker-artist collaboration, rewrote several scenes the miners deemed stereotypical or offensive to the Catholic sensibilities of the community. (Also with much alarm by Biberman and Wilson, union members removed most of the references to the Korean War or US imperialism.) Yet what emerged was a pared-down, tightly woven script that blended a workers’ struggle for safety, an anti-racist struggle for equal wages, and a feminist struggle for recognition and equality within the home.

Although it was one of the finest films of the 1950s, Hollywood executives, Hollywood union leadership, and the FBI met during shooting to prevent the film from being finished. They succeeded in closing film processing centers and stopping sound technicians from being able to process their work, a score from being recorded, and the film from being distributed or shown in the United States. They deported the lead actress, Rosaura Revueltas, to Mexico. Vigilantes showed up on the set and shot at crew members; the Mine-Mill labor hall was burnt down, and the Mine-Mill staffer Clint Jencks was severely beaten and forced, through the Taft-Hartley affidavit, to resign from the union on threat of other union leadership facing prison.

Despite heroic efforts to finish the film (including sneaking film into processing centers, concluding the filming in Mexico, and lying to an orchestra about its content), the film was screened only two times in the United States before no other theater would carry it. The film company was bankrupted with legal fees. The union, Mine-Mill, after decades of waging a struggle for equality in Southwest mines, was raided by another union, the Steelworkers, until it too, went bankrupt, effectively ending its drive to equalized pay and treatment between Chicano and Anglo miners.

This story was an exceptional one, given that it was about a major film production. But in many ways, it was entirely common, revealing the dramatic coordination between vigilante and far-right movements, the state, major corporations, and right-leaning labor unions to commit violence, deportation, censorship, and institutional destruction. The suppression of Salt of the Earth is a story of just how far the Second Red Scare extended past the lives of Hollywood directors and even members of the Communist Party, to destroy an independent film company and a racially integrated, left-led union, relying on vigilante violence and the surveillance power of corporations and the state to carry out its dictates. It was a microcosm of the way state, capital, and conservative forces within the labor movement coordinated to suppress the Left.

Red Scare Governance

Historian and theorist Charisse Burden-Stelly frames red scares as a flexible “mode of governance” which fuses both coercive “public authority” as well as “societal self-regulation” as it reaches its zenith. The United States, Burden-Stelly writes, has a history of red scare governance, from the White Terror that ended Reconstruction to the mass hanging and arrests after the Haymarket Riot, to the deportations and mass imprisonment of the First Red Scare, to the “anti-syndicalist” and “red flag” laws of the early twentieth century, to the Second Red Scare and, later, the assassinations under the FBI regime of COINTELPRO.

Red scares are not singular events, writes Burden-Stelly, but a form of counterrevolutionary governance. They are a portable set of tropes, racial scripts, constructions, and legal forms of repression that can be deployed against the Left, yet require state, business, and political consolidation to be enacted. The Second Red Scare was key in part because it fashioned a legal apparatus still with us today, as the ongoing attempted deportation of Mahmoud Khalil attests. And perhaps more importantly, because it was the first time a red scare of its type systematically went after not only organizations, but all of civil society.

Burden-Stelly notes the Second Red Scare was not only a destructive form of coercion; the advent of the Cold War created the civil and cultural infrastructure of modern liberalism. Civil rights liberals, the Democratic Party, and Jewish and African American organizations accepted anti-communism as the condition for reform. “Loyalty oaths” also created affective if imaginary bonds with the nation and the notion of universal citizenship. When Kamala Harris named Trump a “ communist ” recently, she likely did not believe that MAGA wishes to seize the means of production, but rather she sought to evoke the Cold War liberal coalition of multiethnic anti-communism as the civic religion of a New Deal state.

Yet as much as the Second Red Scare remade the left wing of the New Deal coalition from “multiethnic social democracy” to “multiethnic anti-communist liberalism,” it’s important to point out that the Communists, even the Left, were not the only — maybe not even the primary — big-picture target of anti-communism. As historian Landon Storrs makes clear, the purging of Communists and socialists from the civil service, federal government, universities, and labor unions not only limited the scope of their politics, it undid many of the more far-reaching reforms of the New Deal itself.

Whether it was the Taft-Hartley Act limiting the right to strike and engage in boycotts, the ending of price controls after World War II, or the adoption of homophobic and patriarchal family policy, the purges not only affected the lives of thousands of state and federal workers (overrepresented among whom were Black, Jewish, and queer employees), but severely curtailed the era of social reform reached by the height of the New Deal. The purging of the state department of scholars and diplomats with expertise in China alone accelerated the Cold War and helped lead to the foreign (and domestic) policy catastrophes of the Korean and later Vietnam Wars.

Historian Kim Phillips-Fein argues similarly that the era of the UAW’s landmark “Treaty of Detroit” with General Motors in 1950 and the supposed cessation of hostilities between the two sides that the contract allowed for hides a longer war of big business against labor. This grand compromise was predicated on further capitalist consolidation, a curtailment of union militancy, a narrowing of union demands to wages and benefits, and, above all, surrender of control of day-to-day working conditions on the shop floor to management. In the 1940s, the CIO wrested an important degree of control of work away from the boss: setting limits on the speed of the assembly line, hiring and firing, and, above all, setting limits to worker discipline by management. In many of the Communist-led unions, such worker self-activity also centered on ending racial segregation in plants and among shop foremen. George Lipsitz, in his work on radical labor movements shortly before the Second Red Scare, recounted one CEO lamenting, “any businessman who says they have control over their factory is a damned liar.”

While the “Treaty of Detroit” has been celebrated for creating an industrial “middle class,”  “GM…got a bargain,” wrote Fortune magazine in 1950, as it “regained control over … crucial management functions.” Much in the same way the purging of the “peace camp” from the state department paved the way for the invasion of Vietnam a decade later, so too did the defanging of the labor movement set the stage for the deindustrialization and destruction of working class communities from Detroit to South Shore to Toledo.

Members of the Black Panther Party circle the Alameda County courthouse on July 15, 1968, as the murder trial of party cofounder Huey P. Newton begins. (United Press International via Getty Images)

Some of the effects of the Second Red Scare were incalculably cultural. When Black Panther activist Assata Shakur first encountered socialist anti-colonial movements, she writes in her autobiography that she felt confused, thinking that socialism was a “white man’s concoction.” In hindsight, her “image of a communist came from a cartoon.” She came to realize that her understanding of anti-colonialism was entirely American: that much of the Third World embraced, if not communism, some form of socialist emancipation. Unless anti-colonial movements were socialist in orientation, “white colonialists would simply be replaced by Black neocolonialists,” concluding

We’re taught at such an early age to be against the communists, yet most of us don’t have the faintest idea what communism is. Only a fool lets somebody else tell him who his enemy is . . .  It’s got to be one of the most basic principles of living: always decide who your enemies are for yourself, and never let your enemies choose your enemies for you.

The “early age” during which Shakur learned to be “against communists” was in the late 1950s and early 1960s, in the immediate aftermath of McCarthyism’s peak. Even decades later, anti-communism continues to structure the contours of the law, including legal restraints on labor unions passed in the McCarthy era that are still with us and anti-terror deportation statutes and boycott bans (most currently, bans on boycotting Israel). And anti-communism is deployed discursively to police the boundaries of acceptable politics, from derailing single-payer health care as “socialism,” to scholars such as Timothy Snyder referring to the violent, racist rhetoric of Trump right-hand man Stephen Miller as “communist.” If one compares the United States to industrialized nations that never experienced a comparable red scare, such as France and Holland, one can’t help speculate if their high wages and generous social benefits may owe in no small part to the state’s unwillingness or inability to purge the Left from civil society.

Fascism in an Age of Spectacle

This returns us to the question posed by the Trump administration: what is the relationship between this red scare and the previous one? There are two ways to understand this question. Not only did the hollowing out of liberalism — the decimation of left-led unions, the narrowing of civil rights — help to construct conditions under which the neoliberal counterrevolution could undo the last political vestiges of the New Deal and Great Society, the Second Red Scare also created a cultural legitimacy for anti-radicalism. As a recent Politico article claimed, even while real damage was done, the Second Red Scare “petered out” after McCarthy was defeated, and liberalism benefitted as it was no longer tainted with the unpatriotic association with Communism.

There is, in one sense, a historical lineage; there is also a rupture. As Ellen Schrecker recently argued on Democracy Now! , this current red scare under Trump is “ worse ” than what came before, as it no longer only targets admitted radicals, but destroys the very institutions of liberalism itself: universities, federal agencies, even the idea of the rule of law. For all of its many crimes, HUAC did at least attempt to make the appearance of adhering to formal liberalism. The current red scare is, like everything under the Trump administration, shambolic, haphazard, and chaotic: it often feels like watching a political tornado more than a concerted effort by a unitary state.

While some of these differences owe to the particular and peculiar geniuses of J. Edgar Hoover and Donald Trump — the former ruthless, methodical, exacting, and programmatic; the latter spectacular, chaotic, and gaudy — perhaps the salient difference is that our current red scare emerges at a vastly different historical confluence of events.

Not only is the far right ascendant globally, decades of neoliberalism have hollowed out the state and produced a social fabric far more segregated , unequal, alienated, and precarious than the 1950s and 1960s. Trump’s assault on liberalism itself is in part due to the fact that there is not only no organized radical left to assault, but that public institutions have far less social support and state investment in the reproduction of civil society than they did eighty years ago.

Richard Seymour calls this form of far-right chaos and devastation “ disaster nationalism ,” noting how the pastiche of conspiracy theory, doomerism, end-times millenarianism, apocalypse fantasy, blood-and-soil revanchism, terminally online hypermasculinity and hyper-racism are key affective elements of a world that has long since abandoned rational capital accumulation, currency controls, and regulation of high Keynesianism and the welfare state. J. Edgar Hoover was a product of the technocratic organization of the Progressive Era; Trump, a product of the dissolution of postmodern late fascism.

Perfect Jumps in SM64 - Arctan Straining Explained

Lobsters
youtu.be
2025-10-20 05:08:45
Comments...

Mars pathfinder disaster

Lobsters
guitarvydas.github.io
2025-10-20 05:03:53
Comments...
Original Article

I see that https://ntrs.nasa.gov/citations/20230012154 classifies the Mars Pathfinder disaster as a coding/logic problem.

I see it more as a religious problem.

The religion is that of deeply believing that every programming problem can be solved by applying function-based, synchronous thinking, augmented by ever-more-complicated type checking. Religious zealots believe that this approach will remove all need for testing and remove all forms of ad-hoc design. Clearly, this was not the case in the Mars Pathfinder disaster.

IMO, the Mars Pathfinder problem was caused by the over-removal of asynchrony - true concurrency - from the problem and replacing it by step-wise simultaneity, then mis-labelling this approach as “concurrency”. This led to the problem that Facts were ignored. The bug was encountered - apparently “randomly” - before launch, but, ignored. The use of the function-based, overly-synchronous paradigm caused an unforeseen gotcha , but, it looked liked a random Act of God, and was, thus, ignored.

The “problem” was solved by gluing an ad-hoc solution - called “priority inheritance” - onto the function-based, synchronous meme to force that single meme to continue working in that domain - well away from the meme’s sweet spot . These kinds of fixes - ignoring reality and forcefully “fixing” them - were called “epicycles” in Copernicus’ day.

Today, so-called “computer science” is not a science about computers, it is a science about how to use computers to solve one narrow problem - computation. Not all problems in Reality can be reduced to straight-forward computational problems .

We live with true concurrency every day.

For example, we hold meetings at work involving many asynchronous components - people - and, we have developed protocols on how to deal with late arrivals of attendees and different protocols on how to deal with late arrivals of principals, e.g. presenters and CEOs. Everyone - except programmers - inherently knows how to deal with asynchronicity.

For example, hard-realtime notation and implementation is not considered a problem for 5 year olds (“music lessons”), but, hard-realtime is, seemingly, a problem for programmers when the wrong meme is employed, i.e. a function-based, overly-synchronous approach.

Today, most programmers believe that the CALL opcode was invented to support functions . This is not the case. CALL was invented to reduce code size and memory footprint on a single kind of single-threaded sequencing chip, called a “CPU”. FORTRAN applied the CALL instruction to subroutines. Subroutines later became converted to functions. Functions on clay tablets and papyrus don’t work like CALL/RETURN works in CPUs. You can fake it, but, eventually the fakery comes back to bite you, especially if your forget that mapping functions onto CALL/RETURN with shared, mutable registers and shared, mutable memory was just a simplifying assumption.

Today, the over-use of a single, function-based, synchronous meme is causing much more memetic damage than is recognized.

For example, today most programmers think that programming consists of the act of sequentially arranging non-overlapping cells of fixed-sized bitmaps on strict grids using something called a “programming editor”. Programming really is just: making a machine, based on changeable electronics, do what is intended by the engineers. Then, getting the engineers’ ideas about what should be intended to match up with users’ ideas of what is actually needed.

The memetic damage influenced by this single function-based, synchronous meme, causes programmers to avoid whole swaths of different approaches to solving problems, e.g. diagrammatic programming, true concurrency, multiple CPUs, “creativity”, etc.

Religious zealotry makes it look like these other approaches are more difficult, when, in fact, they are easy when one avoids the one-size-fits-all approach to problem solving.

See Also

Blogs

blog

obsidian blogs (see blogs that begin with a date 202x-xx-xx-)

Videos

videos - programming simplicity playlist

Books

leanpub’ed (disclaimer: leanpub encourages publishing books before they are finalized - these books are WIPs)
Programming Simplicity Takeaways, and, Programming Simplicity Broad Brush

Discord

Programming Simplicity all welcome, I invite more discussion of these topics, esp. regarding Drawware and 0D

Twitter

@paul_tarvydas

Mastodon

(tbd, advice needed re. most appropriate server(s))

Sunday Science: Save the Whales. But Save the Microbes, Too.

Portside
portside.org
2025-10-20 04:59:48
Sunday Science: Save the Whales. But Save the Microbes, Too. Ira Sun, 10/19/2025 - 23:59 ...
Original Article

Colorful microbial mats, composed of thermophilic microbes, surround a bubbling hot spring in Yellowstone National Park in Wyoming.,Jon G. Fuller/VWPics, via Associated Press

Hundreds of scientists have joined together to save a group of species from extinction, a group that might not seem like it needs saving: microbes.

Microbes need protection for many reasons, researchers say, including the fact that other species — indeed, entire ecosystems — depend on their well-being. “We need them in order to help conserve the pandas and the rainforests and the whales and the oceans and everything else in between,” said Jack Gilbert, a microbiologist at the University of California, San Diego.

We are living through an extinction crisis , but nearly all of the extinctions conservation biologists have definitively documented so far have involved animals or plants. On Friday , the International Union for Conservation of Nature, which publishes a list of endangered and vulnerable species, announced the newest additions, including the slender-billed curlew , a migratory shorebird and an ebonylike tree called Diospyros angulata .

Humans are driving these species to extinction by hunting them, destroying wetlands and other habitats, cutting down forests, and heating the planet. But as powerful as these pressures are, extinction is not inevitable. On Friday, the I.U.C.N. also announced that the green sea turtle had rebounded, thanks to measures such as protecting the eggs the animals lay on beaches. The organization has officially changed the status of the species from endangered to least concern.

But animals and plants represent only a fraction of the biological diversity on the planet. Fungi — which are a separate lineage from animals and plants — include millions , or perhaps even tens of millions , of species. But I.U.C.N. scientists started assessing their status only a decade ago, and so far they’ve evaluated just 1,300 species. That’s enough species to reveal that fungi are also gravely threatened. One-third of those 1,300 species are at risk of extinction .

Now I.U.C.N. researchers are casting an even wider net: They’re setting out to protect the world’s diversity of bacteria and other microbes.

This may be the most daunting effort in the history of conservation. Microbes make up the most of the genetic diversity on Earth, but microbiologists have barely begun cataloging them. All told, there may be 100 billion species of microbes.

Making the task even harder, microbes are largely invisible to the naked eye; even under a microscope, two distinct species may look identical. And microbes live just about everywhere on Earth, in clouds miles overhead and rocks miles underground .

Even without documenting every microbe species, biologists can see that microbial diversity is under threat. Soil may contain half of all microbial species, for instance, but as forests are cut down and grasslands are converted to farm fields, much of that soil is destroyed and some of its diversity of microbes is lost.

The Microbial Conservation Specialist Group , as the newly formed group is known, plans to map hot spots for microbial conservation around the world, including bare stretches of rocky earth in Antarctica and the insides of animals and plants. Samples will be collected, placed in vaults safe from extinction, and studied.

The specialist group will also help make plans to save the ecosystems where these microbes live. “The way forward is to preserve the ecosystem so you can preserve the microorganisms, and the microorganisms can in turn improve the ecosystem in a positive loop,” said Raquel Peixoto, a microbiologist at King Abdullah University of Science and Technology in Saudi Arabia and a chair of the group alongside Dr. Gilbert.

Elinne Becket, a microbial ecologist at California State University, San Marcos, who is not a member of the group, praised it for establishing connections with people in different parts of the world whose lives are affected by microbes.

“They have a strong short- and long-term plan to bring microbial conservation to the forefront of conservation efforts,” Dr. Becket said.

Conserving the microbes in coral reefs, for example, could greatly benefit the people who live near them. Healthy coral reefs break ocean waves, protecting coasts from flooding . They also serve as nurseries for fish . Coral reefs, which harbor one-third of all marine species , are considered the rainforests of the ocean.

But the coral animals that build the reefs can’t survive on their own. They depend on a coral microbiome, a collection of species that help them break down their food and fight off pathogens.

Conserving microbes in coral reefs could benefit people who live near them . Credit...Antonio Bronic/Reuters

Dr. Peixoto, who studies the coral microbiome, has seen it suffer as the oceans become polluted and grow warmer from climate change.

“I can’t say they are going extinct, but I can clearly and definitely say there is a decline,” Dr. Peixoto said. “You have less beneficial microbes and more pathogens.”

Dr. Peixoto is running experiments on some of these beneficial microbes, using them as probiotics for the reefs. Her preliminary research suggests that they can restore coral health.

Preserving microbes in other habitats could also enable them to provide us with their own services. Deserts and arid lands are topped with fragile microbial crusts that pull carbon dioxide out of the air, for example. The more microbes that can be saved, the more work they can do to slow climate change. “Five hundred acres of that desert will sequester a hell of a lot more carbon than one acre will,” Dr. Gilbert said.

Dr. Peixoto also argues that saving microbial diversity could benefit farmers. To spur the growth of their crops, farmers often blanket them with nitrogen-rich fertilizers. But recent studies have shown that farmers can inoculate their fields with soil bacteria that draw nitrogen from the air and provide it to the crops, saving farmers billions of dollars .

“Even economically, these are good things,” Dr. Peixoto said.


Subscribe to the New York Times

When Women Say “Ta-Ta” to Ta-Tas
Arianna Huhn
Sapiens.org

An anthropologist fighting cancer navigates the social pressure to get breast reconstruction after a mastectomy.
October 1, 2025

No Military Funding (2004)

Lobsters
web.eecs.umich.edu
2025-10-20 04:57:32
Comments...
Original Article

Why don't I take military funding?

Benjamin Kuipers

I don't take funding from military agencies. Why not?

Mostly it's a testimony that it's possible to have a successful career in computer science without taking military funding. My position has its roots in the Vietnam War, when I was a conscientious objector, did alternative service instead of submitting to the draft, and joined the Society of Friends (Quakers). During the 1980s and 90s, the position seemed to lose some of its urgency, so it became more of a testimony about career paths.

Since September 11, 2001, all the urgency is back. The defense of our country is at stake, so this testimony becomes critical. In short, I believe that non-violent methods of conflict resolution provide the only methods for protecting our country against the deadly threats we face in the long run. Military action, with its inevitable consequences to civilian populations, creates and fuels deadly threats, and therefore increases the danger that our country faces.

I will come back to this, but first some other thoughts.

How did you get started with this?

In 1978, after completing my PhD thesis on cognitive maps, I found that the only funding agency that was interested in supporting my research wanted to build smart cruise missiles that could find their way to their targets. This was not what I wanted my life's work to support. So I changed areas, and started working on AI in Medicine, which led to some very productive work on qualitative reasoning about physical systems with incomplete knowledge.

Well before that, I had been a conscientious objector to the Vietnam War, and had done alternative service to the draft from 1970 to 1972 before starting grad school. Since most of my graduate studies were funded by an NSF Fellowship, I didn't think much about military funding and AI research at that time. After finishing my PhD, I did a year of post-doctoral research funded by a grant that Al Stevens and I negotiated directly with Craig Fields at DARPA. It was at the end of that year, looking for continuation funding, that I confronted the cruise missile scenario and had to decide what my research life is for, and who I am willing to have pay for it.

But how can you fund your research?

Defense Department agencies like DARPA, ONR, AFOSR, and ARO are certainly among the larger pots of money out there, and I have put these off limits for myself.

I have had funding from NSF, NASA, and NIH instead. There is a State of Texas Advanced Research Program that has supported several of my projects. And I have had small amounts of funding from several companies such as Tivoli and IBM.

These other agencies typically don't provide grants as large as one can get from DARPA, for example. So, there are limits to the size of research group I can have. With very few exceptions, I have decided that I will fund only grad students, and not try to support research staff or post-docs, who are much more expensive than grad students. I have sometimes had quite a few grad students, and a large lab, but the funding requirements remain moderate.

When I first decided to refuse military funding, I felt I would be making a serious sacrifice. As it has worked out, research money has sometimes been tight, but never disastrously so. And as I watched my colleagues dealing with DARPA's demands for reports, PI meetings, bake-offs, delays and reductions in promised funding, and other hassles, I began to wonder whether I hadn't gotten the best side of the deal after all.

It's important to remember that the bottom line in research is productivity of ideas, not dollars brought in. At some point, the hassle of dealing with an agency may decrease one's intellectual productivity more than the money they provide increases it. But that's a practical issue, not a matter of conscience.

The bottom line here is that refusing military funding puts a limit on how large a research budget I can sustain. But that's not the same as limiting my intellectual productivity.

What's wrong with taking military money? They have funded lots of great research!

Certainly so: AI and the Internet being two large categories of them.

That kind of research is enormously important, and I am glad that our society finds a way to fund it.

However, the goal of the military is to settle international conflict through violence. As a friend of mine was told by a general, "Everything we do ultimately has one of two goals: killing people or destroying things." I believe that this attitude towards conflict resolution has become a "clear and present danger" to our world and our country. The world has become so small through transportation and communication, and our weapons have become so deadly, nuclear and biological, that we cannot afford the illusion that violence makes us safer.

A true defense of our country would require both resources and research into non-violent conflict resolution methods. Both of these exist, but are starved compared with the technologies of warfare.

My stand is a testimony, saying "I will not devote my life's work toward making warfare more effective." I am also trying to show, by example, that one can be a successful and productive computer scientist, even while taking this stand.

Do you try to keep others from taking military funding?

No. Mine is an individual testimony, and each person makes an individual decision about how they will spend their life's work.

Many years ago, when William Penn converted to Quakerism and pacifism, he was troubled by the thought of having to give up the sword that he wore, a great honor at the time. He asked George Fox, the founder of Quakerism, what he should do. Fox told him, "William, wear thy sword as long as thee can."

Why not use military funding for virtuous research?

First, it's a testimony, and a testimony has to be clear and visible to be useful. Certainly there is virtuous research funded by military agencies. Many colleagues whom I respect highly take this approach and I honor them for it. But it doesn't send a clear message to others, and I want to do that.

Second, there's a slippery slope. You can start with a research project as pure as the driven snow. But a few years later, money is tight in the pure research category, and you get offered a research grant from a more applied office within the same agency. Do research on the same topic, but frame it in terms of a military mission. Step by step, you can slide into battlefield management and smart cruise missiles. One thing that makes the slope so slippery is that you have accumulated responsibility for a lab full of graduate students, and the consequences of a major drop in funding will be even more painful for them than it is for you.

Another thing that makes the slope slippery is that military problems are often very interesting. It's easy to get caught up in an interesting technical challenge, and lose sight of what is actually happening: that the objects in the plan are human beings, and that the actions that are being planned are to kill them.

With a little cleverness, you can find similarly fascinating problems in the space program, where there is NASA funding, or in the economic sphere, where there is private funding. Or in other areas of science, where NSF and NIH do the funding.

Is everything the military does tainted?

Certainly not. Most people don't realize that the US military is perhaps the largest educational institution in the world. It provides valuable academic and vocational training to a huge population, many of whom might not have access to it otherwise. It also provides training in character and discipline that are hard to match elsewhere.

There are even signs that the professional military is reaching a clearer understanding than civilian policy-makers of the weaknesses of violence, and the strengths of non-violent approaches to conflict resolution. We may be moving toward the day when trained, disciplined soldiers will be able to move into a situation of conflict and restore civility and peace without loss of life.

That's a day worth working for.

The military can use your research anyway, from the open literature. Why not have them pay for it?

Many things have both good and evil uses. If I create new knowledge that can be used for either good or evil, and present it and evaluate in terms of the good purposes, then someone who converts it to evil use bears that responsibility. If I present it and evaluate it in terms of the evil purpose, then I make it that much easier and more likely for it to be used for evil. I must then bear the responsibility.

This argument is not very robust against speciousness and rationalization. If I make a rapid-fire machine gun firing armor-piercing bullets, and present it and evaluate it for the sport of target-shooting, I am deceiving myself (or more likely, not). Whoever funds the work, I am responsible for anticipating who is likely to use it.

At the same time, if I develop a new scheduling methodology for industrial processes, the military is likely to benefit, since it includes many industrial processes. But peaceful economic activity will benefit more, and the military benefits only in the aspects it shares with peaceful enterprises.

Do work that makes the world a better place. The fact that the military becomes better too is not a problem.

(From a graduating senior) Should I consider military involvement when I choose a graduate school?

Probably not too much, but keep your eyes and ears open when you visit the different schools. Most top graduate schools in computer science will have substantial amounts of military funding, but most will also have faculty who are seriously concerned about the militarization of research. You should look for a balance that leads to productive discussions, rather than a "party line."

Look for faculty members who can guide you in directions you want to go. This means looking for both intellect and integrity.

Are you ever tempted by large military grants?

Yes, of course. Recently a friend of mine, whom I respect highly, took a leadership position in a major agency, and created a research program I find enormously attractive.

After struggling with the question for several weeks, I decided that the need for testimonies like mine was becoming greater, not less, in these difficult times, so I have reluctantly passed on this possibility. Sigh.

The fact that a course of action is right does not necessarily make it easy.

What about September 11? We're under attack!

Our country suffered horrific losses from a terrible attack. The criminal gang responsible must be brought to justice, and we must protect ourselves against possible future attacks.

However, violent actions taken in the name of defense against terrorism are very likely to increase the likelihood and magnitude of future terrorist attacks. We need a combination of short-term vigilance and protection, and long-term efforts to reduce the problems that breed terrorism, both in non-violent ways.

  • Much more to be said about this, probably in other opinion pieces.

I am writing to ask for advice. I am one year away from graduating with a BS in computer science and am considering graduate school. When I started looking around my department for some research to get involved in, I was surprised to find how much of it relies on military funding. This lead me to find your essay on why you don't take military funding. I share your views and as tempting as it is, and as much as I feel I'm missing out on some really interesting projects, I've decided I will not work on anything that receives military support. So, I'm hoping you can offer further advice on how and where to look for grad programs. How do I find other faculty who share this concern for the militarization of research? Will I find more options overseas? How and when do I tell prospective schools about my decision?

Let me applaud you for your principled stand. As you have surely noticed, these are times that require good people to stand up and be counted, publically.

Although I did alternative service as a conscientious objector during the Vietnam war, I did not decide to avoid military funding until a year after completing my PhD. I was fortunate to have obtained NSF and Danforth Fellowships that funded almost all of my graduate studies. After I became a faculty member, I got quite good at raising grants from NSF, NIH, NASA, and other places.

You will need to do similar things, just starting earlier. There are a number of competitive fellowships for graduate study that you can apply for as an individual, and carry with you to your choice of graduate school. Many of these, like the NSF, the Hertz, the Gates, etc, are very competitive. It is a big advantage in such competitions to be clear on your own beliefs and your own priorities. Make sure you can express yourself in a clear and compelling way, and you have a significantly better chance. If you succeed in obtaining your own funding, it makes you much more desirable at top graduate programs.

A couple of useful quotes for this enterprise are, "Momma may have, and Poppa may have, but God bless the child who's got his own!" and "Be wise as serpents and gentle as doves." (Look them up.)

Even if you don't get this kind of fellowship, there are plenty of options for supporting yourself through graduate school without military funding. You can be a teaching assistant; you can be a research assistant to a faculty member with other kinds of funding; you can find work maintaining computers for a lab in another department; you can get a part-time outside job; and so on. Generally, rejecting the single largest funder will require you to be more creative about looking at other funding possibilities. This creativity will serve you well. One of the fortunate things about working in computer science is that you have a practical skill that is needed by people in many different areas, and they are often willing to pay for your services.

On finding faculty with similar beliefs, I would suggest just asking. A quick scan of each faculty member's web page, and especially the acknowledgements on publications, will tell you where they get their funding. Find a few people whose research you find attractive who have non-military funding, and talk to them.

Personally, I find it most productive to be clear and straight-forward, without being judgmental or confrontational. You will very likely find plenty of people who are very sympathetic to your values, but who aren't willing to make what they perceive as too large a sacrifice. In my personal opinion, it is more important to encourage people to see their choice of work, how it's funded, and what it's used for as an important moral decision that must reflect their own fundamental values, than to pressure them to make the same moral decisions that I have.

I doubt you will find better options overseas. I believe there is generally less funding available outside the US, and little of that would be available to a US student. There are some very fine graduate schools in other countries, but on average, the US has the best graduate schools in the world. Again, personally, I love this country, and I want my work and my life to help strengthen its good parts and help fix its problems. So I wouldn't want to leave.

How and when to tell is another judgment call. It depends on your own style, and how vocal a testimony you want to make. You may legitimately decide that this point is not relevant on the application for graduate school, or on the other hand, you may feel that it is central. You are not obliged to explain or justify every belief you have, however strongly held or controversial, to everyone you meet. You have to decide when you think it is relevant.

A final point. I think you are doing a good and noble thing. Following this path will be demanding, and maybe quite difficult, but I believe and hope it will also be rewarding in many ways, including practical ones. However, getting the education you need to make the best use of your gifts through the rest of your life is also an important value. You should not participate in activities that you believe are morally wrong, but there may be times in your life when preparing yourself for your future takes priority over making a visible testimony. There will be time and need for that later, you can be sure.

With my best wishes,

Ben Kuipers


Originally written by 8-8-2004. Update (11-15-2021):

Dan Koditschek of the University of Pennsylvania, a friend whom I respect highly, recently encountered a piece of the slippery slope I describe here. Some of his former students, proud of their experience and training in his lab, have created a company that now produces and sells legged robots carrying deadly weapons. Dan's letter eloquently responds.


BJK

When it comes to MCPs, everything we know about API design is wrong

Lobsters
blog.fsck.com
2025-10-20 04:34:15
Comments...
Original Article

TL;DR: I built a lightweight Chrome MCP. Scroll to the end to learn how to install it. Read the whole post to learn a little bit about the Zen of MCP design.

Claude Code has built in tools to fetch web pages and to search the web – they actually run through Anthropic's servers, if I recall correctly. They do clever things to carefully manage context and to return information in a format that's easy for Claude to digest.

These tools work really well.

Right up to the point where they completely fall apart.

pasted image 20251019 180931

An uncoached testimonial from the only customer who matters.

Last week, I somehow got it into my head that I should update my custom blogging client to use Apple's new Liquid Glass look and feel.

The first issue I ran into was that Claude was absolutely sure that macOS 26 wasn't out yet. (Amusingly, when asked to review a draft of this post, one of the things it flagged was: ' Inconsistent model naming - You refer to "macOS 26" but I believe you mean "macOS 15" (Sequoia). macOS 26 would be way in the future.')

Claude was, however, happy to speculate about what a "Liquid Glass" UI might look like. Once I reminded the model that it had memory issues and Apple had indeed released the new version of their operating system, it was ready to get to work.

I told it to go read Apple's Human Interface Guidelines and make a plan. This is what Claude saw:

<div class="noscript"><h1 class="noscript-title">This page requires JavaScript.</h1><p>Please turn on JavaScript in your browser and refresh the page to view its content.</p></div></noscript><div id="app"></div>

It turns out that Apple no longer offer a downloadable version of the HIG. And the online version requires JavaScript . After a bit of flailing, Claude reached for the industry-standard Playwright MCP from Microsoft.

pasted image 20251019 181028

The Playwright MCP is a collection of 21 tools covering all aspects of driving a browser and debugging webapps, from browser_install to browser_click to browser_navigate_back . Just having the Playwright MCP available costs 13,678 tokens (7% of the whole context window) in every single session, even if you never use it. (Yes, the Google Chrome team has their own Chrome MCP. Its API surface is even bigger )

And once you do start using it, things get worse. Some of its tools return the entire DOM of the webpage you're working with. This means that simple requests fail because they return more tokens than Claude can handle in a response:

> Please use the playwright MCP to fetch the headlines from https://www.bbc.com 

⏺ I'll use the Playwright MCP to navigate to BBC.com and fetch the headlines for you.

⏺ **playwright - Navigate to a URL (MCP)**(url: "https://www.bbc.com")
  ⎿  Error: MCP tool "browser_navigate" response (25837 tokens) exceeds maximum allowed tokens (25000). Please use pagination, filtering, or limit parameters to reduce the response size.

⏺ Let me take a snapshot instead to get a more structured view of the page:
  ⎿  Error: MCP tool "browser_snapshot" response (36196 tokens) exceeds maximum allowed tokens (25000). Please use pagination, filtering, or limit parameters to reduce the response size.

⏺ The page is quite large. Let me try to extract just the headlines using JavaScript evaluation: [...]

It's frustrating to see a coding agent trying over and over to use a tool the way it's supposed to and having that tool just fail to return useful data.

A simple solution that's not an MCP #

After hearing me complain about this a few times, Dan Grigsby commented that he'd had success just asking Claude to teach itself a skill: Using the raw Dev Tools remote control protocol to drive Chrome.

This seemed like a neat trick, so I asked my Claude to take a swing at it.

Claude was only too happy to try to speak raw JSON-RPC to Chrome on port 9292. It...just worked. But it was also very clunky and wasteful feeling. Claude was writing raw JSON-RPC command lines for each and every interaction. It was very verbose and required the LLM to get a whole bunch of details right on every single command invocation.

It was time to make a proper Skill.

After thinking about it for a moment, I asked Claude to write a little zero-dependency command-line tool called chrome-ws that it could run with the Bash tool to control Chrome, as well as a new SKILL.md file explaining how to use that script . chrome-ws encapsulated the complexity and made Chrome easily scriptable from the command line.

The skill sets up the basics of web browsing with its tools and uses progressive disclosure to tell Claude how to get more information, but only when it has a need to know. For example, these examples of how to use the chrome-ws tool .

Claude didn't always reach for the skill, so it wasn't aware of its new command-line tool, but once I pointed it in the right direction, it worked surprisingly well .

This setup was incredibly token efficient – Nothing in the context window at startup other than a skill name and description in the system prompt. What was a little frustrating for me was that any time Claude wanted to do anything with the browser, it had to run a custom Bash command that I had to approve. Every click. Every navigation. Every javascript expression. It got old really, really fast.

There's no real way to fix that without creating a custom MCP.

Giving up and making an MCP #

But that would put us right back where we were with the official Playwright MCP, right? Nearly two dozen tools and 13k tokens spilled on the floor every time we started a session. Even trimming things down to only the dozen most important commands is still a bunch of tools, most of which Claude won't use in a given session.

If you've ever done API design, you probably know how important it is to name your methods well. You know that every method should do one thing and only one thing. You know that you really need to type (and validate) all your parameters to make sure your callers can tell what they're supposed to be passing in and to make bad method calls fail as soon as possible.

It would be absolutely unhinged to have a method called use_browser that took a parameter called action that was itself a method dispatcher, a parameter called selector , and a parameter called payload .

You'd have to be crazy to think that it's acceptable API design to have the optional, untyped payload field just have a description like ("Action-specific data: navigate=URL | type=text (append \\n to submit) | extract=format (text|html|markdown) | screenshot=filename | eval=JavaScript | select=option value | attr=attribute name | await_text=text to wait for

And yet. That is exactly how I designed it.

And it's just great.

The high-level tool description reads:

Control persistent Chrome browser via DevTools Protocol. Use the superpowers-chrome:browsing skill for detailed guidance.

CRITICAL: Selectors support CSS or XPath (XPath must start with / or //). Append \\n to payload in 'type' action to submit forms. State persists across calls.
  
Examples: {action:"click", selector:"//button[@type='submit']"} | {action:"extract", payload:"text", selector:"//h2"}

Workflows: navigate→await_element→extract | navigate→type(payload:"text\\n")→await_text

At session startup, the whole MCP config weighs in at just 947 tokens. I'm pretty sure I can shave at least 30-40 more.

It's optimized to make Claude's life as easy as possble. Rather than having a method to start the browser, the MCP...just does it when it needs to. Same with opening a new tab if there wasn't one waiting.

The tool description tells Claude what to do and where to read up when it needs more help. At least so far, it works just great for me.

One of the mistakes I made while developing the MCP was to instruct Claude to cut down the API surface by only accepting CSS selectors, rather than accepting CSS or XPath. It seemed natural to me that a smaller, simpler API would be easier for Claude to work with and reason about. Right up until I saw the MCP tool description containing multiple admonitions like CRITICAL: CSS selectors only (NOT XPath) . The whole thing just...worked better when I let the selector fields accept either CSS or XPath.

Another thing that Claude got not-quite-right when it first implemented the MCP was that it included detailed human-readable description text for all the use_browser method parameters. Because LLMs that are using MCPs can see both the description and the actual JSON schema, you don't need to repeat things like lists of values for an enum or type validations. One trick you can use is to ask your agent to tell you exactly what it can see about how to use an API.

One of the weirdest realizations I had while building superpowers-chrome is this: I have no doubt that there are a dozen similar tools out there, but it was literally faster and easier to build the tool that I thought should exist than to test out a dozen tools to see if any of them work the way I think they should.

I have feels about designing for LLMs #

Over the last couple of decades, the common wisdom has become that Postel's Law (aka the robustness principle) is dated and wrong and that APIs should be rigid and rigorous. That's the wrong choice when you're designing for use by LLMs.

This might be a hard lesson to hear, but tools you build for LLMs are going to work much, much better if you think of your end-user as a "person" rather than a computer. Build your tools like they're a set of scripts you're handing to that undertrained kid who just got hired in the NOC. They are going to page you at 2AM when they can't figure out what's going on or when they misuse the tools in a way they can't unwind.

Names and method descriptions matter far more than they ever have before.

Automatic recovery is hugely important. Designing for error recovery rather than failing fast will make the whole system more reliable and less expensive to operate.

When errors are unaviodable, your error messages should tell the user how to fix or work around the problem in plain English.

If you can't give the user exactly what they asked for, but you can give them a partial answer or related information, do that.

Claude absolutely does not care about the architectural purity of your API. It just wants to help you get work done with the limited resources at its disposal.

Actually using it #

This new MCP and skill for Claude Code, is called superpowers-chrome .

You can install it like this:

/plugin marketplace add obra/superpowers-marketplace
/plugin install superpowers-chrome@superpowers-marketplace

If you're already using Superpowers , you can just type /plugin, navigate to 'Install plugins', pick 'superpowers-marketplace' and then you should see superpowers-chrome .

I'd love to hear from you if you find it helpful. I'd also love patches and pull requests.

The Radical Legacy of the “Poorest President in the World”

Portside
portside.org
2025-10-20 04:33:14
The Radical Legacy of the “Poorest President in the World” Ira Sun, 10/19/2025 - 23:33 ...
Original Article

I met the “poorest president in the world” in late 2023. I had seen him before, at an event in Washington, DC, ten years earlier, but at that time, he was surrounded by wild throngs of adulators — shaking hands, snapping selfies, kissing babies. You might have thought he was a pop-star influencer rather than a man pushing eighty and the sitting president of a small, far-flung South American country. But this was the allure of Pepe: ex–guerrilla fighter, political prisoner turned president, viral phenomenon, philosopher, farmer, survivor.

When I finally had a chance to talk with José “Pepe” Mujica in person, he was no longer the president of Uruguay and had recently resigned from his seat in the country’s Senate due to declining health. We met at El Quincho de Varela, a modest straw-roofed room built around a barbecue, just down the road from his farm. Over the years, it had become hallowed ground, drawing politicians, activists, celebrities, and thinkers from Angela Davis to Brazilian president Lula da Silva. Walking in felt like arriving at a hermit’s retreat in the mountains. We talked about the state of social movements in the United States and our place in the longue durée of history. Even though he had stepped away from public life, Pepe was still curious about social movements abroad and open to being introduced to new ideas.

It is easy to see why Mujica was the source of endless fascination. After becoming head of state in 2009, he refused to move into the presidential palace, instead opting to stay in his ramshackle three-room farmhouse on the edge of Montevideo — guarded only by two cops and his three-legged dog, Manuela. He kept tending to his flower farm, drove himself to work in a powder-blue 1987 VW Beetle, gave away 90 percent of his salary to charity, and kicked off his term with less than $2,000 to his name. This lack of pretension and materialism gave him a global reputation as the most modest head of state one could find.

A onetime member of an armed Marxist revolutionary movement, Mujica spent more than a decade in solitary confinement after the 1973 military coup in his country. When democracy was restored in 1985, he emerged from prison advocating disarmament and a turn to electoral politics. He cofounded the leftist Movimiento de Participación Popular (MPP) and began an improbable rise in democratic government. By the early 2000s, he became a high-ranking cabinet member and then, in 2009, was elected president himself. During his five-year term, he ran a tight ship, made international headlines with major progressive reforms, and saw his country maintain strong momentum in both economic and social indicators. Afterward he continued to serve as one of the country’s most influential senators.

Throughout his time in power, Pepe came across like an unfiltered grandpa, quick with stories and lessons from life, history, and philosophy. He did not present himself as a politician, nor did he speak like one. He did not prepare talking points, and he almost never wore a suit. It was as if he was just another neighbor chatting at a bar, sipping maté at the plaza.

In interviews, Mujica would answer journalistic clichés with a mix of irreverent frankness and philosophical digression, throwing in an enigmatic statement of wisdom every so often. When the media tried to point out that his lifestyle was strange for a president, he shrugged : “It’s the other presidents’ fault, not mine,” he said:

They live like the minority in their countries. But republics came about to uphold a concept: no one is above anyone else. They were a response to feudalism and absolute monarchies: those with powdered wigs, red carpets, and the vassals playing their fanfare when the lord went hunting, all of that. And democracy is supposed to be about the government of the majority.

“I am living like the majority of people in my country,” Pepe insisted. “Otherwise you end up being swayed by how you live.”

All told, Mujica, who died this past spring at age eighty-nine, was the rarest species of politician. Not only was he wildly popular , but he was politically shrewd, an inspiring communicator, and an effective administrator. He never sold out. Pepe defied every cynical expectation: he neither watered down his politics for centrist approval nor fell in love with his own power. He was scrupulously democratic, and as his health began to decline, he stepped back from politics, ushering in a younger generation of progressives — including the recently elected president, Yamandú Orsi, a successor from the MPP. Although he was practically minded, he stayed true to his ideals, he continued to critique the status quo, and urged the struggle to continue.

Mujica is gone now, but there is much he can still teach advocates for social and economic justice, in the United States and across the globe. Pepe’s life and political career offer lessons in inside-outside politics: in how movements can send champions into office and confront the contradictions of electoral politics without abandoning the broader struggle for a better world.

His refusal to live like those in high office typically do was not just symbolic. It pointed toward a different form of democratic governance grounded in accountability to the people, not the elite.

Who Was Pepe Mujica?

José Alberto Mujica Cordano, called “Pepe” from a young age, was born in 1934 on a small farm west of Montevideo. In the 1940s, rural poverty in Uruguay was extreme. Before the future president reached age six, his father died, and his family lost its farm. Mujica grew up on the geographic and political margins of a country that would see its booming wartime economy quickly crumble as a rebuilding Europe focused inward and no longer demanded the agricultural products of the Southern Cone.

In the north of Uruguay, the cañeros — sugarcane plantation workers — endured cruel working conditions and, during the economic downturn, were reduced to famine. The frustrated unionization drive of these workers, combined with the victory of the Cuban Revolution in 1959, radicalized a young Mujica. He soon broke from his early activism in the youth wing of the traditionalist Partido Nacional, one of Uruguay’s two dominant ruling parties, and joined the Tupamaros, a Leninist urban guerrilla movement made up of political activists, trade unionists, students, and former cañeros.

The group gained notoriety for bold operations throughout the 1960s, including sabotage, bank robberies, kidnapping, and weapons raids targeting police, local elites, and foreign actors. But by 1972, the Tupamaros had collapsed under relentless military and police repression. Beginning in 1973, a brutally oppressive military dictatorship took control of the country, and it would ultimately rule Uruguay for twelve years. Throughout the late 1960s and early 1970s, Pepe was wanted by police. He was shot six times in one confrontation and arrested four times, twice escaping in mass prison breaks.

In 1972, he was captured for the last time. For the next twelve years, Mujica and other political prisoners were shuttled between secret locations and held in solitary confinement. Pepe spent two years in a damp well and seven years without anything to read. He endured months with his hands bound behind his back with wire and survived long stretches in total darkness, cut off from human contact, hallucinating, with only spiders for company.

The dictatorship finally began to lose its grip in the early 1980s, weakened by economic crisis, mass civil resistance, and mounting international pressure. Popular mobilizations — including huge strikes, student protests, and a growing human rights movement — forced the regime to begin negotiations with the opposition. In 1985, after a democratic agreement was reached, civilian rule was restored. That same year, Mujica, then fifty years old, was released under a general amnesty for political prisoners. Pepe emerged from more than a decade of captivity into a country transformed: a still-fragile democracy but once again open to social and political contestation.

The shift to electoral politics sparked intense internal debate among activists in the late 1980s. Mujica never apologized for his participation in armed struggle; he saw it as a necessary response to state oppression and systemic injustice. But he also argued that it was no promised land — that it had ultimately proven just as ineffective at transforming society as electoral politics had during that period.

“Armed struggle cannot be a life goal,” he argued . “In certain circumstances, it might have seemed like a way forward — but it can’t last forever. Because societies can’t be built on that basis. It doesn’t make sense.”

Over the next two decades, Mujica’s party, the MPP, became the most influential political faction in Uruguayan politics, distinguished by its sophisticated grassroots base, highly effective electoral machinery, and clearly defined progressive ideology. In 1989, it joined a coalition known as the Frente Amplio, and in 2004, they achieved a landmark victory. Campaigning in the aftermath of a 2002 financial crisis, which had devastated Uruguay’s economy and sharply eroded public trust in the political establishment, Tabaré Vázquez — a former oncologist and mayor of Montevideo — was elected president in a landslide popular vote. It marked the first time in Uruguay’s history that the presidency was won by someone outside the country’s two traditional parties.

Pepe Mujica with his wife, Lucía Topolansky, in October 2023. (Casa Rosada / Wikimedia Commons)

The successes of the first Frente Amplio administration led to continued victories, and in 2009 Uruguay elected an even more progressive president in Pepe Mujica. Although Uruguayan presidents cannot serve consecutive terms, the coalition won three straight presidential elections, securing Mujica’s term spanning the period between 2010 and 2014, as well as Tabaré Vázquez’s return for a second term from 2015 to 2019. The two presidents became key members in Latin America’s broader Pink Tide, a wave of left-leaning governments that swept across the region in the 2000s and early 2010s, challenging neoliberal orthodoxy and prioritizing social justice.

Between 2005 and 2019, Uruguay experienced a period of robust and inclusive growth, and was hailed as a regional success story. Social spending rose dramatically, making possible an expanded cash transfer program that reached over 30 percent of households and sweeping health care reforms that ensured universal, equitable access. These policies contributed to a dramatic drop in poverty, which fell from nearly 40 percent in 2005 to under 9 percent by 2019. This positioned Uruguay as the South American country with the lowest poverty and inequality levels.

By 2019, Pepe’s health had begun to worsen, and he chose not to run for a second term as president. Shortly after, he retired from national politics altogether, yet he continued his activism through his last years. In February 2024, he helped organize a transnational gathering of progressive activists, labor leaders, and elected politicians in Foz do Iguaçu, Brazil, that made a call for greater internationalism as a strategy for expanding the field of possibility in Latin America.

The Substance of Authenticity

There are many lessons that can be drawn from Pepe’s story. But for activists outside of Latin America, three ideas are particularly relevant.

A first lesson is that authenticity cannot be bought from a public relations consultant. There is a truism often repeated in electoral politics and policy advocacy that you must speak authentically to the people you want to represent — that they must feel that you are one of them. And yet, no amount of messaging and polling is enough to make Kamala Harris feel genuine. Hakeem Jeffries has put on hoodies , Nancy Pelosi has knelt in a Kente cloth, Andrew Cuomo took off his tie and filmed himself in a park in Manhattan, but their efforts never seem to achieve what they are seeking to achieve.

What is it that makes someone like Pepe Mujica different? What makes them feel authentic?

Pepe showed that the real substance of authenticity lies in the politics themselves — in the work, the commitments, the choices. Miss that, and you end up with a generation of Barack Obama knockoffs, a parade of shallow imitations. Mujica demonstrated that authenticity is more than messaging or representation; it is a function of political commitment. His viral global image as “ the poorest president in the world ” was no stunt. He lived his austere lifestyle with his partner Lucía Topolansky — a political force in her own right — for his forty years of post-dictatorship life. It was less a choice than a reflection of what he saw as his obligations to the people of Uruguay and the world.

Before he won the presidency, Pepe was seen as an unrealistic contender by much of the mainstream Uruguayan media. He was too rough around the edges. He was charismatic, sure, but he consistently made gaffes and broke the established protocols of elected officials. Political commentators saw his unwillingness to adopt the trappings and privileges of higher office as evidence of his lack of seriousness.

But Mujica spoke of his minimalist way of living with pride, declaring , “I’m not poor. My definition of poor are those who need too much. Because those who need too much are never satisfied.”

This was the expression of a unique worldview, shaped by lived experience and informed by the insights of Marxist political economy. In a moving interview , he explained his philosophy: “Either you manage to be happy with very little — light of luggage because happiness is inside of you — or you won’t achieve anything,” he argued. “But since we invented a consumerist society, and the economy must grow — because if it doesn’t, it’s a tragedy — we have created a mountain of superfluous needs. And you have to throw things away and live by buying and throwing away, while what we are really wasting is our time to live.”

A journalist who covered multiple Uruguayan presidents told me how Mujica stood apart — even from fellow Frente Amplio president Tabaré Vázquez. While most presidents kept the press at arm’s length, Mujica treated reporters as equals. On foreign trips, he would ditch his security detail and wander down from his hotel room in decidedly unpresidential pajamas to share a drink with the press at the hotel bar.

But he did not use his friendships with journalists to pump up his own image. When tossed a softball about his administration’s record on poverty, Mujica refused the self-congratulatory script. “We lifted quite a few people out of extreme poverty,” he admitted , “but we didn’t make them citizens — we made them better consumers, and that is our failing.”

And yet, the numbers speak for themselves . Under Mujica, poverty plunged from 21 percent to 9.7 percent, real wages climbed nearly 4 percent a year, and GDP per capita grew at an average 4.4 percent annually. Social spending rose from 21 percent to 23 percent of GDP, expanding the safety net and strengthening the services that sustain working- and middle-class families.

Inequality narrowed too: the real minimum wage rose 37 percent, while the wealthiest tenth saw their share of national income drop by more than 10 percent. By any conventional measure, Mujica delivered on the core promises of the Left, even if he refused to take credit for it.

Pepe’s lifestyle was a reflection of who he was and his political commitments to society. The lesson is that our leaders should not represent us; they should be one with us. What matters is how they remain in a reciprocal relationship with us — how they show deference, how they honor the debt they owe to the community.

All too often, we see the opposite play out in politics, especially in liberal circles. Politicians who once belonged to our communities imply that we owe them deference for their personal success. But in reality, what they have achieved is an exit — a detachment — from the very communities they claim to represent. As they embrace the trappings of elected office, the protocols, the privileges, they tie themselves to that position of power and create a material commitment to elite class status. Eventually they begin to believe that reelection is what matters most.

A material commitment to the majority can help mitigate the tendencies that come with holding state power. Of course, you do not become president without a degree of hunger for power, and some on the Left still privately describe Mujica as a caudillo, a strongman of sorts. But his mode of life kept him close to regular people and gave him critical perspective when receiving the advice of professional pundits and the economic elite.

Accountability Is a Structure, Not a Vibe

The second lesson to learn from Pepe is about how to ensure that personal commitments become political outcomes. It is difficult to maintain genuine accountability to the working class and social movements in modern democratic systems. Time and time again, activists have placed allies in office only to see them caving to corporate and neoliberal pressures. Or conversely, they witness leaders trying to establish control over a political system who end up turning toward authoritarianism. There are too few institutional mechanisms that allow elected officials to avoid these paths and assert a consistent progressive mandate, especially when facing sustained resistance from the media, corporate interests, and the political establishment.

Exceptional political talent is not enough. Without structural support, the likelihood of disappointment is high. Without a durable, independent structure behind electoral efforts, accountability cannot be guaranteed.

No doubt Pepe Mujica was an exceptional political talent, but it was the unique structural accountability of the Frente Amplio that allowed him to stay steady and responsive to his base. It stands as a key institutionalized example of a true mass party in Latin America. Through its creation of mechanisms to keep it beholden to the interests and concerns of its grassroots supporters, the Frente is a uniquely valuable instance of participatory politics.

The Frente Amplio is not a single party. It is a coalition of political movements, including the MPP, in combination with a network of local comités de base , or base committees. These grassroots hubs — which are independent of internal factions — organize members in their neighborhoods, spark debate, mobilize supporters, and connect communities directly to party leadership. The committees are the source of volunteer labor for electoral campaigns. But they also meet regularly to discuss policy proposals and elect delegates that take part in important national decision-making processes. Half of the delegates in the Frente Amplio’s National Plenary, the party’s top decision-making body, as well as significant numbers in other positions of responsibility, come from these grassroots committees.

While the committees are not as active as they once were, they continue to shape the internal dynamics of the Frente Amplio, and they inspire an uncommon degree of political participation . Internal decision-making power in the coalition is not based solely on vote shares won in the general election: the system rewards factions that can build strong, active local networks. In practice, it does not work perfectly. The committees are sometimes ignored, or they become overly partisan. But it does achieve a degree of accountability to grassroots voices that is seldom seen in modern electoral politics.

Pepe Mujica in his 1987 Volkswagen Beetle with Lula da Silva in 2023. (Palácio do Planalto / Wikimedia Commons)

This structure was crucial in Mujica’s rise. Moderate factions within the Frente Amplio — representing wealthier middle-class professionals and government employees — often opposed him, voicing many of the same criticisms that centrist Democrats in the United States level against progressives. These groups have held significant influence, producing leaders including Tabaré Vázquez, key ministers of finance, and many of the coalition’s policy experts and administrators. Yet despite their mainstream appeal and resources, they were balanced by the coalition’s established internal structure, which amplified the voices of organized activists.

Not only did this structure help Mujica rise, it also helped keep him accountable once in office. It was organizational grounding — not just individual virtue — that enabled Mujica to stay aligned with the communities that brought him to power.

Electoral structures in the United States make this hard to replicate, but activists can take steps in this direction by codifying the internal democracy of organizations that make up the wider ecology of progressive electoral efforts. We can ask: In what ways are these groups — beyond electoral efforts — formalizing their accountability to social movements? There is much in the world of nonprofit community organizing, policy research, and advocacy that claims to reflect the interests of everyday people but, in fact, is driven by the choices of nonprofit boards, philanthropic funders, and political consultants. When these organizations do seek to engage their base, it is usually in a limited advisory capacity — for example, by placing a token community representative on a board where the representative is then expected to assimilate and sign off.

Instead, following the example of the Frente Amplio, groups should prioritize building structures where elected allies are not only supported but also held accountable to the people who mobilized them. Without that kind of durable infrastructure, progressive leaders will always be fighting alone against systems designed to absorb, co-opt, or isolate them.

Keep the Heat On — Even With Friends in Office

This focus on internal structures of accountability overlaps with a third lesson we can draw from Mujica’s career: the importance of active and independent social movements outside the elected administration or political party.

Mujica’s personal commitments and the Frente Amplio’s organizational structure both shaped his approach as a leader. Nevertheless, the critical force behind many of Uruguay’s progressive reforms during his presidency came from a broader set of forces operating beyond the halls of power. The external pressure applied by social movements — the “outside game” — proved essential in driving meaningful change.

Recently I spoke with a close adviser to Pepe, a lifelong leftist who is a committed Leninist and a former elected official. He was embarrassed to admit that there really is not a formal cadre or strict structure of cogovernance in Uruguay that goes beyond the electoral coalition. Although the Frente Amplio has a strong grassroots base, its relations with civil society and wider social movements have been scattershot, sometimes informal, sometimes even antagonistic. Even with its most progressive administration in office, key executive decisions were ultimately made by Mujica and his trusted inner circle.

Yet many of the victories that came to define Mujica’s presidency were not his chosen battles. The legalization of marijuana, legal abortion, and same-sex marriage became iconic reforms of Pepe’s administration, but they were not originally on his agenda. Rather, they were the result of pressure from well-organized social movements.

Mujica was both politically savvy enough and connected enough to his grassroots base to be receptive to these external demands. He recognized the momentum behind these ideas, remained flexible, and avoided rigid attachment to a predetermined plan. But even though these reforms would become some of his signature accomplishments of his time in office, he had to be pushed to champion them.

The legalization of marijuana , for example, was driven by a coalition of activists combating both entrenched international drug policy norms and local conservatism. Groups including the Asociación de Estudios de Cannabis del Uruguay and Proderechos organized marches, lobbied lawmakers, and kept the issue in the media spotlight, making changes politically feasible. Their efforts coincided with mounting pressure to find creative strategies to combat drug trafficking in the wake of the failed “war on drugs” promoted by the United States.

When cannabis activists began mobilizing in the early 2000s, public opinion was initially against them — two-thirds of Uruguayans opposed legalization. Mujica himself expressed reservations. Interviewed on the topic, he said, “Do not think that I am defending marijuana. . . . Love is the only healthy addiction on the face of the Earth. All the other addictions are a plague, whose harms have varying degrees.” But the movement built momentum by tying the cause to a broader rights agenda championed by students, LGBTQ groups, unions, and other civil society forces.

By 2012, rising fears over urban insecurity cracked the debate wide open. Legalization was reframed as a security measure, a way to cut into narcotraffickers’ profits and separate cannabis from harder drugs. In 2013, Uruguay became the first country in the world to legalize recreational marijuana. In the end, Mujica’s backing and the Frente Amplio’s congressional majority pushed Law 19.172, the country’s legalization measure, over the finish line. But it was relentless social movement organizing that transformed the issue from an improbable and unpopular cause into a political winner.

This is a crucial lesson for social movements seeking progressive change: having allies in office, even a president with deep political commitments to regular people, is not enough. Social movements drive political possibility, expanding the Overton window and forcing elected officials to move beyond their comfort zones. Political power, even when wielded by well-intentioned leaders, is constrained by institutional inertia, competing interests, and the limits of political will.

Uruguay’s story underscores the importance of sustained pressure, strategic agitation, and public engagement — keeping issues alive beyond election cycles and inside the corridors of power. Allies in office may open doors, but movements must push through them. Mujica often described himself as reluctant, even skeptical, but he was also unusually open to being pressed. Unlike leaders who wall themselves off once in office, Mujica kept the door ajar.

More to Carry On

Pepe Mujica’s legacy comes not only from the laws passed under his watch but also the way he embodied a different kind of politics: a grounded politics, a politics of accountability, and a politics of democratic participation. He insisted that leadership must emerge out of everyday life, and that democratic institutions only thrive when civil society is strong enough to hold power to account. In 2024, at Orsi’s postelection victory event, Mujica told those assembled that he was battling death. “I’m an old man who is very close to beginning the departure from which there is no return,” he said. “But I’m happy! . . . Because when my arms are gone, there will be thousands more to carry on the struggle. All my life I’ve said that the best leaders are the ones who leave behind a team that surpasses them by far — and today, you are here.”

Shortly thereafter, reflecting on his journey, he added : “I spent my years dreaming, fighting, struggling. They beat the hell out of me and everything else. It doesn’t matter, I have no debts to collect.”


Guillermo “Guille” Bervejillo is a senior research associate at the Whirlwind Institute and a research fellow with the Ayni Institute. An economic geographer, writer, and activist, his work focuses on Latin American geopolitics, political economy, and social movements.

Jacobin is a leading voice of the American left, offering socialist perspectives on politics, economics, and culture. The print magazine is released quarterly and reaches 75,000 subscribers, in addition to a web audience of over 3,000,000 a month.

Subscribe to Jacobin today, get four beautiful editions a year, and help us build a real, socialist alternative to billionaire media.

Entire Linux Network stack diagram (2024)

Hacker News
zenodo.org
2025-10-20 04:33:11
Comments...
Original Article

Published November 18, 2024 | Version v7

Poster Open

  • 1. Ericsson Nikola Tesla

Description

Diagram of entire Linux Network Stack, including:

  • Virtualization and Linux containers:
    • Emulation and Paravirtualization.
  • Network sockets.
  • Network stack:
    • Upper layer of Network stack (TCP, UDP).
    • Low layer of Network stack with GRO, RPS, RFS and GSO.
  • Network Scheduler.
  • NetFilter and traffic controll:
    • Bridge and Bond interfaces.
    • Tap interface, ...
  • Device Driver:
    • Queue.
    • NAPI.
    • IRQ handler.
  • Network functions accelerated by NIC:
    • Checksum offload, VLAN, VxLAN, GRE, TSO, LRO, RSS, ...
  • Network card.

All (above) sections (layers) include tips for optimizations and/or statistics.

This diagram is part of the book:

Operativni sustavi i računalne mreže - Linux u primjeni

https://doi.org/10.5281/zenodo.8119310

Files

Linux Network Stack - EN.pdf

Files (5.4 MB)

Additional details